Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and thank you for tuning into Conf42 Kube Native 2025.
My name is Raku, and I'm a principal Salesforce engineer and platform architect, where I lead the design of large-scale, AI-powered enterprise systems.
Over the last decade, I've worked across organizations like Intel Trucks and Zenefits, building event-driven, modular, and highly scalable architectures.
What brings me here today is a growing concern I've witnessed across industries: AI systems that are powerful, but fundamentally untrusted.
In this session, we are going to explore a practical framework I call Kube Native Trust, an approach that blends architectural design, Kubernetes-native tooling, and transparency-first thinking to make AI systems not only powerful, but inspectable, explainable, and ultimately reliable.
Let's dive in.
Here's what we'll cover in this session.
We'll start by exploring the trust challenge: why even accurate AI recommendations often fail in enterprise settings.
Then we'll introduce five architectural pillars that make AI systems more transparent and trustworthy, covering everything from confidence scoring to audit tools.
Next, I'll walk through a practical implementation blueprint using Kubernetes-native tools like Istio, GitOps, and OpenTelemetry to operationalize this framework.
We'll also look at real-world use cases.
And finally, I'll leave you with actionable takeaways you can use to start building trust into your own AI workflows.
The trust challenge.
AI has incredible potential, but in enterprise environments, we often see a pattern: users ignore its recommendations.
Why?
Because they can't see how it works.
They can't scrutinize the logic, trace the data, or evaluate certainty.
So despite millions invested, these tools end up sidelined.
The real challenge isn't prediction; it's transparency.
If stakeholders can't validate what they are seeing, trust breaks down.
And without trust, even the most advanced models end up sidelined.
Why enterprise AI is sidelined.
Let's break down exactly why AI fails to gain traction in the enterprise.
Lack of transparency: most models act like black boxes, with no insight into how their decisions are made.
Missing context: AI often doesn't adapt to business realities like regulatory limits, budget thresholds, or workflow quirks.
Insufficient observability: we can't trace inputs through to predictions, which means you can't audit or debug them.
And finally, the big one: a trust gap between teams.
Engineers build the model, data scientists optimize it, but business users don't understand or trust it.
Fixing these gaps starts with the architecture.
Trust is a first class architectural concern.
Five foundational design pillars for transparent AI.
Pillar one: contextualized confidence scoring.
Not every prediction is equally reliable, and users know that.
So instead of a flat confidence score, give them multi-dimensional insight: how confident is the model for this specific context?
How does this prediction compare to similar past scenarios?
And how should the user interpret the score?
By mapping the confidence to the user's mental model, you make the AI's output actionable and trustworthy.
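To make this concrete, here's a minimal sketch of what a contextualized confidence payload could look like. The class, field names, blending formula, and thresholds are all illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass

@dataclass
class ContextualConfidence:
    raw_score: float            # the model's own probability for this prediction
    context_score: float        # how closely this input resembles data the model has seen
    historical_accuracy: float  # accuracy on similar past scenarios

    def user_facing(self) -> dict:
        """Blend the dimensions into something a user can actually interpret."""
        blended = (self.raw_score * self.context_score) ** 0.5  # geometric mean
        label = ("high" if blended >= 0.8
                 else "moderate" if blended >= 0.6
                 else "low: review manually")
        return {
            "confidence": round(blended, 2),
            "similar_cases_accuracy": round(self.historical_accuracy, 2),
            "interpretation": label,
        }

# A strong raw score with weak context similarity drags the blended score down.
print(ContextualConfidence(0.92, 0.55, 0.71).user_facing())
```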
Pillar two: source traceability.
Imagine you're handed a recommendation and you want to know where it came from.
With traceability, you can answer: what data was used, how was it processed, what features did the model rely on, and were any business rules applied?
This is like turning on version history for every decision your AI makes.
It's essential for auditability, governance, and user confidence.
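As a rough sketch, traceability can start with attaching a lineage record to every prediction. The field names and version strings below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class PredictionLineage:
    model_version: str        # which model produced the output
    dataset_version: str      # which data snapshot it was trained on
    features_used: list[str]  # features the model relied on
    business_rules: list[str] = field(default_factory=list)  # post-model overrides

lineage = PredictionLineage(
    model_version="churn-model:2.3.1",
    dataset_version="customers-2025-01-snapshot",
    features_used=["tenure_months", "support_tickets", "last_login_days"],
    business_rules=["suppress_if_contract_renewal_pending"],
)
# Attach `lineage` to each recommendation so a reviewer can replay the decision.
```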
Pillar three: adaptive thresholds.
Most AI systems use static thresholds: only act if confidence is greater than 70%.
But that's rigid and naive.
What you really want is adaptive thresholds that respond to use-case criticality, operational constraints, and risk tolerance.
A 60% confident prediction might be acceptable for marketing, but not for healthcare.
Adaptability is the key.
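Here's a tiny sketch of that idea. The domains, numbers, and the `risk_multiplier` knob are made up for illustration; real values would come out of a policy or risk review:

```python
# Hypothetical per-domain confidence thresholds.
THRESHOLDS = {
    "marketing": 0.60,   # low stakes: act on moderate confidence
    "operations": 0.75,
    "healthcare": 0.95,  # high stakes: demand near-certainty
}

def should_act(domain: str, confidence: float, risk_multiplier: float = 1.0) -> bool:
    """Act only when confidence clears a bar tuned to the use case.

    `risk_multiplier` lets operational context tighten the bar further,
    e.g. 1.05 during an incident or an audit window.
    """
    base = THRESHOLDS.get(domain, 0.80)  # conservative default for unknown domains
    return confidence >= min(base * risk_multiplier, 1.0)

print(should_act("marketing", 0.62))   # True: fine for marketing
print(should_act("healthcare", 0.62))  # False: not fine for healthcare
```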
Pillar four: progressive UI disclosure.
Not every user needs the full model internals, but some do.
That's why progressive disclosure works.
Level one: a simple summary plus the confidence score.
Level two: key drivers and alternatives.
Level three: full lineage, model parameters, and feature importance.
It's like an expandable explanation panel: let the user choose how deep they want to go without overwhelming them.
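One way to sketch those levels in code is a single explanation function that reveals more fields as the requested level increases. The field names here are illustrative:

```python
def explain(prediction: dict, level: int = 1) -> dict:
    """Return a progressively deeper view of one prediction."""
    view = {  # level 1: summary plus confidence
        "summary": prediction["summary"],
        "confidence": prediction["confidence"],
    }
    if level >= 2:  # level 2: key drivers and alternatives
        view["key_drivers"] = prediction["key_drivers"]
        view["alternatives"] = prediction["alternatives"]
    if level >= 3:  # level 3: full lineage, parameters, feature importance
        view["lineage"] = prediction["lineage"]
        view["model_parameters"] = prediction["model_parameters"]
        view["feature_importance"] = prediction["feature_importance"]
    return view
```

The UI then maps an expand click to a higher level instead of rendering everything up front.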
Pillar five: end-to-end audit trails.
Finally, you need verifiable logs of what was predicted, why, and when.
This means immutable logs, OpenTelemetry integration, human feedback capture, and outcome validation.
Most importantly, compliance-ready reports.
This isn't just a nice-to-have; for many industries, it's a requirement.
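As one illustrative take on "immutable," here's a hash-chained append-only log: each entry includes the hash of the previous one, so tampering with history is detectable. This is a sketch, not a production audit system:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log where every entry hashes the one before it."""

    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def record(self, prediction_id: str, decision: str, confidence: float):
        entry = {
            "ts": time.time(),
            "prediction_id": prediction_id,
            "decision": decision,
            "confidence": confidence,
            "prev_hash": self._last_hash,  # chains this entry to the last one
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

trail = AuditTrail()
trail.record("pred-001", "escalate_lead", 0.83)
```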
Now let's talk about the implementation blueprint.
Let's move from theory to practice: how do we build these trust features?
Kubernetes gives us the right foundation.
It is scalable, it is observable, and it integrates with modern CI/CD pipelines.
We'll use that as our base.
Kubernetes implementation.
Here's how each pillar maps to Kubernetes tools.
A service mesh like Istio or Linkerd captures model traffic and latency and traces requests.
GitOps tools like Argo or Flux give you version control for models, so you can trace which version did what.
OpenTelemetry provides unified tracing from infrastructure to business outcomes.
And OPA (Open Policy Agent) enforces governance, adapts thresholds, and secures the pipelines.
Together, these tools operationalize trust.
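To show one slice of this, here's a minimal OpenTelemetry sketch that wraps each prediction in a span and attaches trust metadata as attributes. It assumes the `opentelemetry-sdk` Python package is installed, and the attribute names are our own convention, not an OpenTelemetry standard:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console for the demo; production would ship them to a collector.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("trust.demo")

def predict(features: dict) -> dict:
    with tracer.start_as_current_span("model.predict") as span:
        result = {"label": "churn", "confidence": 0.83}  # stand-in for a real model call
        span.set_attribute("model.version", "churn-model:2.3.1")
        span.set_attribute("prediction.confidence", result["confidence"])
        return result

predict({"tenure_months": 4})
```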
Now, the reference architecture brings it all together.
It's a modular system where confidence scoring is computed in microservices, traceability is ensured via versioned pipelines, and the thresholds are policy-driven.
The UI exposes layered explanations, and audit trails log everything from prediction to outcome.
This is trust by design.
Let's move on to real-world application examples.
Let's make this tangible.
Customer segmentation: explanations include why a user falls into a segment, with visual metrics.
Lead routing: sales reps can now trace why a lead was prioritized, which improves adoption.
Patient risk scoring: clinical UIs expose risk factors, thresholds, and scoring logic.
Infrastructure optimization: resource scaling decisions come with cost, context, and past patterns.
These systems aren't just smart; they can be trusted.
So, the key trust takeaways here.
If you remember one thing from this session, it's this: trust is architecture.
Build it in; don't bolt it on.
Trust requires design: confidence, traceability, auditability.
Use Kubernetes tools to implement it.
Progressive disclosure empowers users.
Audit trails ensure accountability.
Feedback loops enable continuous improvement.
Let's get started with the trust blueprint.
If you're ready to bring this to your team, here's how to start.
Audit your transparency gaps.
Map mental models to explanation needs.
Add confidence scoring to one model.
Start tracing with OpenTelemetry.
Add feedback loops into the UI.
Start small.
Build trust.
Scale smart.
Thank you.