As amazing as state-of-the-art machine learning models are, training, optimizing, and deploying them remains a challenging endeavor that demands significant time, resources, and skill, especially when multiple languages are involved. Unfortunately, this complexity prevents most organizations from using these models effectively, if at all. Instead, wouldn’t it be great if we could just start from pre-trained versions and put them to work immediately?
This is the exact challenge that Hugging Face is tackling. Founded in 2016, this startup based in New York and Paris makes it easy to add state-of-the-art Transformer models to your applications. Thanks to its popular open-source libraries (transformers, tokenizers, and datasets), developers can easily work with over 2,900 datasets and over 29,000 pre-trained models in 160+ languages. In fact, with close to 60,000 stars on GitHub and 1 million downloads per month, the transformers library has become the de facto place for developers and data scientists to find state-of-the-art models for natural language processing, computer vision, and audio.
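To give a sense of how quickly a pre-trained model can be put to work, here is a minimal sketch using the transformers pipeline API (assuming the library is installed, e.g. via pip install transformers; the default sentiment-analysis model it downloads is just one of the thousands available on the Hub):

```python
# Minimal sketch: load a pre-trained model and run inference in a few lines.
from transformers import pipeline

# Downloads a default pre-trained sentiment-analysis model and its tokenizer.
classifier = pipeline("sentiment-analysis")

result = classifier("Transformers make state-of-the-art NLP easy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Swapping in a different task or model is a one-line change, which is exactly the kind of leverage the session explores end-to-end.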
In this session, we’ll introduce you to Transformer models and what business problems you can solve with them. Then, we’ll show you how you can simplify and accelerate your machine learning projects end-to-end: experimenting, training, optimizing, and deploying. Along the way, we’ll run some demos to keep things concrete and exciting!