Conf42 Machine Learning 2024 - Online

- Premiere: 5 PM GMT

Run LLMs across devices with a 2MB Inference App

Abstract

Run LLM apps with zero Python dependencies. Dive in to learn about a strong alternative to Python for AI inference. Compared with Python, Rust+Wasm apps could be 1/100 of the size and 100x the speed, and they can run securely everywhere with full hardware acceleration, without any changes to the binary code.

...
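To make the abstract's claim concrete, here is a minimal sketch of what LLM inference from a small Wasm module can look like with WasmEdge's WASI-NN interface. It is an illustration, not code from the talk: the `wasmedge_wasi_nn` crate calls, the `"default"` model alias, the prompt, and the output buffer size are assumptions, and the model is expected to be loaded by the host runtime (e.g. via `--nn-preload`) rather than bundled into the Wasm binary.

```rust
// A minimal sketch of LLM inference from inside a Wasm module, assuming the
// wasmedge_wasi_nn crate and a GGUF model that the WasmEdge runtime has
// preloaded under the alias "default" (e.g. via `--nn-preload`).
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // Look up the model loaded by the host runtime; the Wasm binary stays
    // tiny because it ships no model weights and no Python runtime.
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("model should be preloaded by the WasmEdge runtime");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create execution context");

    // Feed the prompt as a UTF-8 byte tensor and run inference on whatever
    // hardware the host exposes (CPU, GPU, ...).
    let prompt = "What is WebAssembly?";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("failed to set input");
    ctx.compute().expect("inference failed");

    // Read back the generated text; the buffer size is an arbitrary choice.
    let mut output = vec![0u8; 8192];
    let n = ctx.get_output(0, &mut output).expect("failed to read output");
    println!("{}", String::from_utf8_lossy(&output[..n]));
}
```

Compiled to a WASI target (e.g. `wasm32-wasip1`), a module like this comes out at around a couple of megabytes and runs unchanged on any machine where WasmEdge and its GGML plugin are installed.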

Michael Yuan

Co-founder @ Second State & WasmEdge



