Discover how fine-tuning poisoning can strip LLM safety measures without compromising performance. Dive into the BadGPT attack, a novel approach that bypasses guardrails, avoids token overhead, and retains model efficiency. Learn why securing LLMs is an ongoing challenge in AI alignment.
Dive into Python’s peculiar packaging woes, the breakthrough of a GIL-free future, and how AI is turbocharging CPython. This talk unpacks challenges, showcases innovations, and explores how Python is gearing up to stay the dominant programming language for decades to come.