Generative AI is rapidly reshaping the tech landscape. Implementing it, especially in intricate scenarios, often involves a sophisticated dance among application code, users, and multiple Large Language Models (LLMs). Coordinating these elements goes beyond what basic API calls can handle. While some vendors offer means to access LLMs, they fall short in guiding...
Discover how serverless transforms the GenAI landscape! Learn to seamlessly integrate LLMs into any application while avoiding complex, costly workflows. This talk unveils efficient orchestration strategies for LLM calls, ensuring smooth, cost-effective interactions between your code, your users, and AI.
In this talk, a brief introduction to the Rust language will be provided, with a focus on its parallel processing capabilities. A sample machine learning inference project is then presented, which pulls records from AWS Kinesis Serverless to perform customer churn prediction and saves the results back to a queue. Rust's capabilities in parallel data processing and fast runtime execution are compared to...
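To give a flavor of the parallel scoring pattern the abstract describes, here is a minimal, hedged Rust sketch: the `Customer` fields and the toy logistic `churn_score` are illustrative assumptions (the talk's real model and Kinesis record schema are not given), and the batch is fanned out across standard-library threads rather than an actual Kinesis consumer.

```rust
use std::thread;

// Hypothetical customer record; the talk's actual schema is not specified.
#[derive(Clone)]
struct Customer {
    tenure_months: f64,
    monthly_spend: f64,
}

// Toy stand-in for a real ML model: shorter tenure and lower spend
// push the score toward 1.0 (more likely to churn).
fn churn_score(c: &Customer) -> f64 {
    let z = 2.0 - 0.05 * c.tenure_months - 0.01 * c.monthly_spend;
    1.0 / (1.0 + (-z).exp()) // logistic squashing into (0, 1)
}

// Score a batch in parallel by splitting it across OS threads,
// mimicking how records pulled from a stream could be fanned out.
fn score_parallel(customers: Vec<Customer>, workers: usize) -> Vec<f64> {
    let chunk = (customers.len() + workers - 1) / workers.max(1);
    let handles: Vec<_> = customers
        .chunks(chunk.max(1))
        .map(|batch| {
            let batch = batch.to_vec();
            thread::spawn(move || batch.iter().map(churn_score).collect::<Vec<f64>>())
        })
        .collect();
    // Joining in spawn order preserves the original record order.
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    let customers = vec![
        Customer { tenure_months: 2.0, monthly_spend: 20.0 },
        Customer { tenure_months: 60.0, monthly_spend: 120.0 },
    ];
    let scores = score_parallel(customers, 2);
    // A new, low-spend customer should look riskier than a long-tenured one.
    assert!(scores[0] > scores[1]);
    println!("churn scores: {:?}", scores);
}
```

In a real deployment the input batch would come from a Kinesis consumer and the scores would be written to an output queue; the threading pattern stays the same.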