Fast and Portable Llama2 Inference on the Heterogeneous Edge: an alternative to Python for AI inference. Run LLM apps with zero Python dependency. Compared with Python, Rust+Wasm apps can be 1/100 of the size, run 100 times faster, and, most importantly, run securely everywhere with full hardware acceleration without any change to the binary code.