Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and thank you for joining this Conf42 conference. I'm Juhi Shah, IEEE senior member and Director of Applications Development at ADP.
Today we will explore how agentic AI, autonomous, goal-driven software agents, can coordinate the continuous co-evolution of microservices and micro frontends in cloud native architectures. Microservices give us scalability, resilience, and independent deployments. Micro frontends extended those principles to the UI layer, but the real challenge is synchronization: how do we keep dozens of independent teams and components moving at different speeds without breaking the user experience? In this session, I will walk you through the problem, the architecture of an agentic control plane, a real world case study, and future directions that move software from automation to autonomy.
Let's start with the foundation: the modern architecture landscape. Microservices decompose applications into independently deployable services. Each owns a business capability, for example payments, catalog, or user profile, and can scale or deploy independently. This improves resilience and team autonomy.
Now extend that same principle to the UI layer.
That's where micro frontends come in.
Teams can deploy frontend modules in React.js or Angular and plug them into a shared container. This gives incredible speed and parallel deployment, but as you will see, it also creates an integration nightmare. Each team has its own framework, release cycle, and dependencies, so when one part of the system changes, others must catch up instantly. And that's where things start to break.
As systems grow, we enter what I call the coordination crisis. APIs evolve: new endpoints, renamed fields, different response structures. Framework heterogeneity: teams use React, Angular, and Vue independently. Manual bottlenecks: QA cycles multiply, and rollbacks become routine. In a large enterprise, you might have 25 microservices and 10 micro frontends spread across multiple teams. Each change must be communicated, tested, and synchronized, and that coordination effort grows exponentially with every new component. The result: frequent integration breaks, inconsistent UX, and deployment slowdowns, even though we moved to microservices to go faster.
Let's zoom in on what I call the moving target problem. Every time a backend service evolves, by adding a new endpoint, modifying a payload, or changing latency profiles, the dependent micro frontends must adapt almost immediately. If they don't, users see runtime mismatches, broken interactions, or sluggish experiences. So while our architecture is decentralized by design, its dependencies remain tightly coupled in practice. This is the core paradox of modern microservice ecosystems: we have gained team autonomy, but we have also inherited a constant chase to stay in sync.
Enterprises running 25-plus microservices and 10-plus micro frontends face a mathematical explosion of version combinations. That's why traditional coordination models, emails, tickets, and manual testing, can't keep up. We need systems that can sense and respond autonomously to this constant movement.
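To make that explosion concrete, here is a minimal back-of-the-envelope sketch. The 25-service and 10-frontend figures come from the talk; the two-versions-per-service assumption is mine, purely for illustration:

```python
# Back-of-the-envelope math behind the "explosion of version combinations".
# Service/frontend counts are from the talk; versions per service is an
# illustrative assumption.

def integration_contracts(services: int, frontends: int) -> int:
    """Worst case: every frontend consumes every service's API."""
    return services * frontends

def version_combinations(services: int, versions_per_service: int) -> int:
    """Each service can independently sit on any of its versions."""
    return versions_per_service ** services

print(integration_contracts(25, 10))   # 250 contracts to keep in sync
print(version_combinations(25, 2))     # 2**25 = 33554432 possible states
```

Even with only two live versions per service, the system state space is in the tens of millions, which is why email-and-ticket coordination cannot keep up.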
So how do we bring order to this moving target? The answer is agentic AI. These are autonomous software agents with three core capabilities: perception, reasoning, and action. They perceive the state of the system, monitoring backend APIs, frontend metrics, and telemetry. They reason, detecting schema drift or behavioral anomalies and planning adaptations. They act, applying validated changes with rollback mechanisms if anything violates policy. Essentially, we are teaching software to self-diagnose and self-heal. Just as autonomous vehicles perceive and navigate their environment, agentic AI agents navigate code and configuration changes to keep systems aligned.
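A minimal sketch of that perceive-reason-act loop, assuming a schema-drift use case; every class, field, and action name here is illustrative rather than from any specific product:

```python
# Illustrative perceive-reason-act loop for a synchronization agent.
# All names and the plan format are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    expected_fields: set  # fields the frontend consumes
    actual_fields: set    # fields the backend currently serves

class SyncAgent:
    def perceive(self, expected, actual) -> Observation:
        # In a real system this would come from telemetry, not arguments.
        return Observation(set(expected), set(actual))

    def reason(self, obs: Observation) -> list:
        # Detect schema drift: fields the frontend needs that vanished.
        missing = obs.expected_fields - obs.actual_fields
        return [f"add_adapter_for:{field}" for field in sorted(missing)]

    def act(self, plan: list) -> dict:
        # A real agent would open a validated, revertible change here.
        return {"applied": plan, "rollback_available": True}

agent = SyncAgent()
obs = agent.perceive({"userName", "email"}, {"user_name", "email"})
result = agent.act(agent.reason(obs))
print(result["applied"])  # ['add_adapter_for:userName']
```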
The agentic control plane, or ACP, is the framework that orchestrates these agents. It has four planes. The first, the observation plane, collects telemetry through OpenTelemetry, API gateway logs, and real user monitoring data. This is how the system sees itself. The second, the cognition plane, hosts the AI brains: reinforcement learning agents optimize scaling and configuration, LLM agents generate code or config patches, and contract agents monitor schema compatibility. The third, the execution plane, applies changes using GitOps pipelines and service mesh controllers, and supports canary and blue-green releases with automatic rollback. The fourth, the policy and governance plane, defines safety envelopes, approval thresholds, and auditable logs. These four planes work together in a loop of perception, reasoning, and action, continuously optimizing the system in real time.
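As a rough illustration of the safety envelope the policy and governance plane enforces, here is a sketch of a pre-promotion check. The threshold values and field names are my assumptions, not details from the case study:

```python
# Hypothetical safety-envelope check a governance plane might run
# before allowing the execution plane to promote a change.
# Thresholds and field names are illustrative assumptions.

POLICY = {
    "max_error_rate": 0.01,      # at most 1% errors in the canary
    "max_p99_latency_ms": 500,   # latency budget for the canary
    "requires_rollback_plan": True,
}

def within_safety_envelope(change: dict, policy: dict = POLICY) -> bool:
    """Return True only if the change stays inside every policy bound."""
    return (
        change["canary_error_rate"] <= policy["max_error_rate"]
        and change["canary_p99_latency_ms"] <= policy["max_p99_latency_ms"]
        and (change["has_rollback_plan"] or not policy["requires_rollback_plan"])
    )

good = {"canary_error_rate": 0.002, "canary_p99_latency_ms": 310, "has_rollback_plan": True}
bad  = {"canary_error_rate": 0.050, "canary_p99_latency_ms": 310, "has_rollback_plan": True}
print(within_safety_envelope(good), within_safety_envelope(bad))  # True False
```

The point is that agents never act unconditionally: every automated change is gated by an explicit, auditable policy like this one.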
Let's see how this comes together in practice. First, drift detection: the observation plane notices a schema change, perhaps an API field has been renamed. Second, impact analysis: the cognition plane identifies which UI components consume that field. Third, solution generation: the LLM agents suggest a minimal code diff or adapter to restore compatibility. Fourth, safe deployment: the execution plane deploys it to a canary environment. Fifth, validation and rollout: if service level objectives remain stable, the change is promoted; if not, it auto rolls back. What's important is that no human has to manually coordinate these steps. They happen under governed automation.
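The five steps above can be sketched end to end. Everything here, the schemas, the rename heuristic, and the SLO figure, is an illustrative assumption, not the case study's actual implementation:

```python
# End-to-end sketch: drift detection -> adapter generation -> canary check.
# Schemas, the rename heuristic, and SLO numbers are illustrative.

def detect_drift(old_schema: dict, new_schema: dict) -> dict:
    """Step 1: guess renamed fields (same type, removed name -> added name)."""
    removed = {k: v for k, v in old_schema.items() if k not in new_schema}
    added = {k: v for k, v in new_schema.items() if k not in old_schema}
    renames = {}
    for old_name, old_type in removed.items():
        for new_name, new_type in added.items():
            if old_type == new_type and new_name not in renames.values():
                renames[old_name] = new_name
                break
    return renames

def build_adapter(renames: dict):
    """Step 3: adapter letting old UI consumers read the new payload."""
    def adapt(payload: dict) -> dict:
        restored = {old: payload[new] for old, new in renames.items()}
        untouched = {k: v for k, v in payload.items() if k not in renames.values()}
        return {**restored, **untouched}
    return adapt

def canary_ok(error_rate: float, slo_error_budget: float = 0.01) -> bool:
    """Steps 4-5: promote only if the canary stays inside the SLO."""
    return error_rate <= slo_error_budget

old = {"userName": "string", "email": "string"}
new = {"user_name": "string", "email": "string"}
renames = detect_drift(old, new)   # {'userName': 'user_name'}
view = build_adapter(renames)({"user_name": "ada", "email": "a@x.io"})
print(view["userName"], canary_ok(0.002))  # ada True
```

In the real control plane, the cognition plane would do far richer impact analysis than this type-matching heuristic, but the shape of the loop, detect, adapt, validate, promote or roll back, is the same.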
Here's a real enterprise example to quantify the impact. A global e-commerce organization with 25 microservices and 10 micro frontends adopted the agentic control plane. Before ACP, API changes regularly broke frontends and caused deployment delays. After deployment, adaptation latency improved by 35%, frontend errors dropped by 40%, SLO compliance rose by 25%, and manual coordination was reduced by 50%. In other words, they moved from reactive fixing to proactive, autonomous alignment, with traceable audit logs for every action.
Let's balance the discussion with both benefits and challenges. The strategic benefits: continuous synchronization between backend and frontend; reduced downtime through auto rollback; explainability, since agent decisions are logged and auditable; and enterprise scalability across frameworks and teams. What are the challenges? Training overhead, because reinforcement learning needs quality telemetry; trust building for LLM-generated code, which requires sandbox validation; and complex governance, since we must define policies that balance freedom and safety. This is where the human element remains essential. AI augments teams, it does not replace them.
Looking ahead, this convergence of microservices, micro frontends, and agentic AI is just the beginning. Three areas stand out. Explainable reinforcement learning: ensuring that the reason behind every agent action is transparent. Knowledge graphs: mapping dependencies across services and UIs to predict drift before it occurs. Federated learning: allowing different teams to share adaptation strategies without exposing sensitive data. Our goal is not just to automate, but to create systems that learn and govern themselves responsibly.
To wrap up, the agentic control plane proves that autonomous coordination is not science fiction; it's already improving enterprise reliability today. By embedding AI agents that perceive, reason, and act under governance, we enable continuous co-evolution between microservices and micro frontends. This is the next step in the DevOps journey: from automation to autonomy.
Thank you for your attention and for being part of the future
of cloud native engineering.