Edge devices in a smart city generate an overwhelming amount of data that you have to ingest, transfer, prepare, and store before you can even think about analyzing it or training a model. In this demo, we will explain how we architected and deployed a variety of data engineering patterns to create a full edge-to-core data and machine learning pipeline on OpenShift/Kubernetes for a smart-city use case.
We will walk you through the approach we took to deploy the ML model on Kubernetes, move data from edge to core using Kafka, build data aggregation pipelines, and demo real-time and batch analysis. Our overarching goal was to be able to re-deploy the entire stack with a single command, for which we used Ansible, and we lived happily ever after. By the end of this session, you should have a better understanding of how to architect and develop data engineering workflows, and how to automate the deployment of the entire stack with Ansible. A rough sketch of the edge-to-core messaging piece follows.
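To make the edge-to-core movement concrete, here is a minimal sketch of an edge-side producer using the kafka-python client; the broker address, topic name, and payload fields are illustrative placeholders and not part of the demo itself.

```python
# Minimal sketch of the edge-to-core pattern, assuming the kafka-python client.
# Broker address, topic name, and the sensor payload are hypothetical examples.
import json
import time

from kafka import KafkaProducer

# Edge side: serialize sensor readings as JSON and publish them to a topic
# that the core cluster consumes from for aggregation and batch/real-time analysis.
producer = KafkaProducer(
    bootstrap_servers="edge-kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_reading(sensor_id: str, value: float) -> None:
    """Send one sensor reading to the (hypothetical) smart-city events topic."""
    event = {"sensor_id": sensor_id, "value": value, "ts": time.time()}
    producer.send("smart-city-events", value=event)

publish_reading("traffic-cam-01", 42.0)
producer.flush()  # ensure buffered events reach the broker before exit
```

On the core side, a consumer group would read from the same topic to feed the aggregation pipelines and the deployed model; the session walks through how those pieces fit together on OpenShift.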