Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
My name is Karthik Mohan.
You can call me Karthik.
I'm a software engineer with eight years of experience specializing in
building and integrating complex systems.
My work is focused on the Salesforce platform and cloud data solutions like Google BigQuery, with an expertise in crafting everything from the intuitive user interface down to the robust backend architecture that powers it.
Today I want to talk about a fundamental shift happening in
how we build enterprise systems.
We are moving away from a world where everything happens in the cloud towards
a more balanced and hybrid approach.
My goal for this session is to explore how we can effectively
bridge the growing divide between edge computing and the cloud.
We'll look at two powerful orchestration patterns that solve real-world problems, and show how the Rust programming language provides the perfect foundation for these modern real-time systems.
By the end, I hope you'll have a clear, practical roadmap for implementing
these ideas in your own organizations.
Okay, here's a quick look at our journey for the next few minutes.
First, we'll set the stage by digging into the specific problems with traditional cloud-only architectures, especially when it comes to real-time applications.
Next, we'll dive into the core of the presentation: two transformative orchestration patterns. Those are the edge inference, cloud remediation pattern, and the cloud insight, edge reconfiguration pattern.
Then we'll talk about the why.
Why is Rust such a game changer for edge computing?
We'll touch on its key features like memory safety and what
fearless concurrency actually means for us as developers.
Of course, every project has its own hurdles. We'll honestly discuss the big implementation challenges, like managing distributed state and security, and the solutions we found to be effective.
Now let's start with the core problem.
As we deploy more and more IoT devices in our factories, our stores, or our cities, we are pushing up against the natural limits of what a purely cloud-centric model can handle. This creates three major challenges.
The first one is latency, right? For many real-time applications, like a robot or a medical monitoring device, the round-trip time to a distant cloud server is simply too long. Even a hundred to 200 milliseconds is too long to wait when a decision needs to be made in, let's say, less than 20 milliseconds.
The second problem is bandwidth. The sheer volume of data from thousands of sensors can be overwhelming. Trying to stream everything to the cloud 24/7 creates massive network congestion and can lead to astronomical costs.
Finally, there's one more challenge: availability. If your critical operations are entirely dependent on a stable internet connection, you are creating a huge point of failure. Your edge systems need to keep functioning even when cloud connectivity is lost.
So these challenges make it clear that the future isn't about choosing an
edge system or a cloud architecture.
It's about making them work together intelligently.
Okay, now let's talk about our first solution.
This is a pattern called edge inference, cloud remediation. This pattern fundamentally flips the traditional data processing model, and here's how it works. Instead of sending the raw data to the cloud, we perform the initial analysis right where the data is created: on the edge.
We deploy lightweight, efficient machine learning models onto the edge devices themselves. These models can make immediate real-time decisions; only the results, the important insights, anomalies, or exceptions, are sent to the cloud. The cloud can then perform more sophisticated analysis and decide if a larger remediation action is required, let's say dispatching a technician or triggering a workflow, right?
The benefits are immediate. You dramatically reduce the latency for critical decisions, the bandwidth needs are reduced, and you build a more reliable system that isn't crippled by network outages. This type of pattern is ideal for industries such as manufacturing, for predictive maintenance, and essentially any scenario where local real-time responsiveness is necessary.
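To make the flow concrete, here is a minimal sketch of the pattern. The types, the threshold-based "model", and the function names are illustrative assumptions, not a real inference API: the point is that the edge classifies locally and only anomalies would ever be forwarded to the cloud for remediation.

```rust
// Sketch of the edge-inference / cloud-remediation loop.
// `Verdict`, the baseline, and the threshold are illustrative, not a real API.

#[derive(Debug, PartialEq)]
enum Verdict {
    Normal,       // handled locally, nothing sent upstream
    Anomaly(f64), // only this is forwarded to the cloud
}

/// Lightweight "model": flag readings that deviate too far from a baseline.
fn infer_on_edge(reading: f64, baseline: f64, threshold: f64) -> Verdict {
    let deviation = (reading - baseline).abs();
    if deviation > threshold {
        Verdict::Anomaly(deviation)
    } else {
        Verdict::Normal
    }
}

/// Filter a batch locally; only anomalies would be uploaded for remediation.
fn anomalies_to_forward(readings: &[f64], baseline: f64, threshold: f64) -> Vec<f64> {
    readings
        .iter()
        .filter_map(|&r| match infer_on_edge(r, baseline, threshold) {
            Verdict::Anomaly(dev) => Some(dev),
            Verdict::Normal => None,
        })
        .collect()
}

fn main() {
    let readings = [20.1, 20.3, 35.0, 19.8]; // one clear outlier
    let upload = anomalies_to_forward(&readings, 20.0, 5.0);
    println!("forwarding {} of {} readings", upload.len(), readings.len());
}
```

Notice the bandwidth win: of four readings, only the single anomaly would leave the device.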
Okay, now let's go to the next one, the second pattern. This pattern solves a completely different kind of problem. If the first pattern was about local autonomy, this one, called cloud insight, edge reconfiguration, is all about creating a system-wide continuous learning loop.
So here's the flow. The cloud analytics platform aggregates data from your entire fleet of edge devices. This gives a bird's-eye view that no single device possesses, right? From there, machine learning models generate the needed insights and identify optimization opportunities that are only visible when all the data is present at scale. These insights are then translated into specific policies, new rules or configurations, that are tailored for individual devices or groups of devices.
Finally, in the edge reconfiguration step, these new policies are pushed down to the devices, which update their local operations on the fly. Examples of this can be anything from adjusting a price in a retail store to changing the timing of a traffic light. It's a powerful feedback loop where centralized intelligence drives distributed execution.
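The device side of that loop can be sketched in a few lines. The `Policy` fields and the versioning scheme here are illustrative assumptions; what matters is that a device applies a cloud-pushed policy on the fly while rejecting stale pushes.

```rust
// Sketch of the cloud-insight / edge-reconfiguration loop, device side.
// The `Policy` fields and version check are illustrative assumptions.

#[derive(Clone, Debug)]
struct Policy {
    version: u32,
    sampling_interval_ms: u64, // e.g. cloud decides the fleet can sample less often
}

struct EdgeDevice {
    id: &'static str,
    active: Policy,
}

impl EdgeDevice {
    /// Apply a policy pushed from the cloud, ignoring stale versions.
    fn reconfigure(&mut self, incoming: Policy) -> bool {
        if incoming.version > self.active.version {
            self.active = incoming;
            true // reconfigured on the fly
        } else {
            false // stale push, keep the current policy
        }
    }
}

fn main() {
    let mut device = EdgeDevice {
        id: "sensor-42",
        active: Policy { version: 1, sampling_interval_ms: 100 },
    };
    // Cloud analytics decided this fleet can sample at 250 ms.
    let updated = device.reconfigure(Policy { version: 2, sampling_interval_ms: 250 });
    println!("{} reconfigured: {}", device.id, updated);
}
```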
Okay let's go to the next slide.
Okay.
Now let's talk about Rust. Rust is the technology that makes these demanding patterns possible. This choice of language is also a strategic one, because Rust brings several advantages. For example, Rust provides memory safety without the overhead of a garbage collector. This is like the holy grail for embedded and IoT systems. It eliminates entire classes of bugs and security vulnerabilities at compile time, without the unpredictable performance pauses that garbage collection can introduce.
Second is fearless concurrency. Edge devices are always juggling multiple tasks, and Rust's compiler guarantees that you don't have any data races, which are notoriously difficult bugs to track down. This allows us to write highly parallel code with confidence.
Third, Rust's absolutely fantastic cross-platform compilation lets us write our logic once and deploy it across the diverse hardware landscape of the edge, from tiny ARM-based sensors to powerful x86 industrial computers. Finally, Rust has what are called zero-cost abstractions, which is a fancy way of saying you can write high-level, expressive code without sacrificing the bare-metal performance you'd expect from a language like, let's say, C. Okay.
Let's go to the next slide, and here are some numbers. So this is not just theory; there are some numbers that we can see here. Rust implementations typically use 50 to 80% less memory than counterparts written in managed languages. On ARM devices, we see two to three times better performance per watt, which is critical for power-constrained environments. And inference jitter is consistently sub-millisecond, right? This means our performance is predictable and reliable, which is exactly what you need in real-time applications.
So yeah.
Now that we've seen the patterns and what's the best language to use, let's
see, some implementation challenges.
Okay, now the first major challenge is managing state across a distributed system with unreliable network connections. How do you ensure that the data on the edge device and the data in the cloud stay in sync? The best solution is a hybrid consistency model, because we shouldn't use a one-size-fits-all approach, right? For critical data like financial transactions, we can enforce strong, immediate consistency, but for less critical analytics data, we can relax the constraints and use eventual consistency. And to resolve conflicts when they do occur, we use special data structures called CRDTs (conflict-free replicated data types) that can automatically and gracefully merge different versions of the data.
Let's go to the next slide.
Okay.
So the next big challenge is security. Edge devices often operate in physically insecure environments, so we have to assume that they are vulnerable. It's best if the architecture is built on a zero-trust model, which means every single request is authenticated and authorized, regardless of where it comes from. Rust's compile-time safety should be leveraged to eliminate entire classes of vulnerabilities before they ever reach production. Then a fine-grained, capability-based security model should be used to ensure devices only have permission to do exactly what they need to do. Finally, we need to implement machine learning-based threat detection.
Let's go to the next slide.
And now our final challenge is squeezing out every last drop of performance, and we can do this in three ways. First, we need to use advanced caching, which might include predictive prefetching that anticipates what data will be needed next. This alone can reduce cloud queries significantly. Second, we need to use selective data compression, right? By using content-aware algorithms instead of generic compression, we can cut our bandwidth usage with minimal CPU overhead. And finally, for performance-critical hotspots, we can write workload-specific optimizations using low-level SIMD instructions, which can in some cases increase data processing throughput by more than three times.
Okay,
let's go to the next slide now.
How can we get started on this? This can be divided into, let's say, four practical guidelines. First, we need to start with the business metric. We shouldn't lead with the technology first; we need to identify the business KPI that we want to improve and work backward to design the right architecture. Second, we need to design for degraded operation. We always need to assume that our network will fail, then build our edge applications to function autonomously and recover gracefully. The next implementation guideline is that we need to measure everything. We cannot optimize what we cannot measure, right? We need to implement comprehensive telemetry from day one to understand our system's behavior.
Okay?
And the fourth guideline would be that we need to use containerization. Deploying our edge workloads in containers will give you consistency and reproducibility, and make your update process much safer and more reliable. The Rust code snippet that you can see here shows a few of these principles in action, using parallel processing and efficient buffer handling to process the sensor data.
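The slide's actual snippet isn't reproduced in this transcript, so the following is a hedged sketch of what code along those lines might look like: scoped threads averaging chunks of a borrowed sensor buffer in parallel, with no copying. The function name and data are illustrative.

```rust
// Sketch of parallel processing with efficient buffer handling: scoped
// threads borrow chunks of the sensor buffer directly instead of copying.

use std::thread;

fn parallel_chunk_averages(buffer: &[f64], chunks: usize) -> Vec<f64> {
    let chunk_len = (buffer.len() + chunks - 1) / chunks;
    thread::scope(|s| {
        let handles: Vec<_> = buffer
            .chunks(chunk_len)
            .map(|chunk| {
                // Borrowed slices, not copies: scoped threads may reference
                // the buffer because they are guaranteed to finish first.
                s.spawn(move || chunk.iter().sum::<f64>() / chunk.len() as f64)
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}

fn main() {
    let buffer = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
    let averages = parallel_chunk_averages(&buffer, 4);
    println!("per-chunk averages: {:?}", averages);
}
```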
Okay.
Yeah.
So let's go to the next slide, the final one.
Okay.
As we wrap up, I want to leave you with three main takeaways
from our discussion today.
First, these transformative patterns give you a blueprint, a proven blueprint actually, for designing responsive and intelligent architectures that process data in the most optimal location. Second, Rust provides the ideal foundation for these systems, offering the safety, concurrency, and performance needed for mission-critical edge deployments. And finally, this all leads to real business impact. By thoughtfully combining the immediacy of edge computing with the power of the cloud, we can create systems that deliver truly measurable value.
The goal here is to build intelligent, resilient systems that bridge
the physical and digital worlds.
Okay.
That wraps up our presentation.
Thank you all so much for your time and thank you for giving me this
opportunity to speak at this conference.
Have a nice day.
Bye-bye.