While a lot of organizations utilize 3rd-party APIs and services to build generative AI-powered applications, we are seeing more teams set up their own self-hosted large language models (LLMs). In this session, we will discuss various best practices when building and managing self-hosted LLMs.
One of the practical techniques to secure self-hosted LLMs involves building a vulnerability scanner that checks for vulnerabilities such as prompt injection. In this session, we will discuss how to build a custom scanner to help teams identify security issues specific to their self-hosted LLMs.
More companies globally have started to utilize Retrieval Augmented Generation (RAG) to significantly enhance their AI-powered applications. In this session, we will take a closer look at how RAG works, and we will demonstrate how to implement a RAG-powered chatbot using Python.
In this talk, we will discuss several practical strategies for optimizing incident response workflows and leveraging AI-powered solutions for intelligent decision-making. We'll dive deep into how each of these strategies and solutions can ultimately build resilience in incident management processes.
Learn for free, join the best tech learning community
Event notifications, weekly newsletter
Access to all content