Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone.
I am Vi Nading, a principal software engineer, and today I want to talk about
data governance in AI-driven banking.
When you think about automation in finance, think back to 2008,
the global financial crisis.
We didn't have generative AI then, but we had algorithms making decisions
with opaque logic and incomplete data.
Fast forward to today, AI is exponentially more powerful.
It's approving loans, flagging fraud, and even advising investments in real time.
The difference: if we don't govern that AI, we risk repeating 2008,
but at machine speed. Remember that.
So yes.
Today's talk is about the least exciting word in AI: governance.
But ironically, it's the one thing that keeps AI from
becoming tomorrow's headline.
So let's dive in.
In the last decade, banking has quietly become one of the most
robotic industries on earth.
In 2016, for instance, JP Morgan introduced its COIN platform, a contract-analysis
AI that completed in seconds what used to take lawyers 360,000 hours a year.
Around the same time, fraud detection models began processing billions
of credit card transactions daily. AI isn't the future of banking.
It's already embedded in its nervous system.
Think of the last time you got a transaction alert before
you even checked your balance.
That's AI-driven robotics in action: scanning, predicting, reacting, all driven
by data pipelines we rarely see.
But there's a catch.
80% of financial institutions say data governance is the
biggest barrier to scaling AI.
The average deployment delay is six to nine months,
purely due to compliance reviews.
When GDPR took effect in 2018, several major banks had to pause their projects
for months just to trace how customer data was flowing into machine learning
pipelines. One European bank discovered 30 different versions of the same
data set being used across models,
none with consistent consent tracking.
Remember, consistent consent tracking is the key.
That's the governance gap.
AI moves at milliseconds, but our controls move at human speed.
Without governance, AI innovation turns into AI improvisation, and in banking,
that's expensive and simply not tolerable.
What do we do about it?
Legacy systems fall short in several ways.
They were designed for business intelligence, not for AI.
They're slow, batch-oriented, and blind to model lineage.
Remember the term model lineage.
The first generation of governance tools emerged in the 1990s.
They were built for static warehouses, monthly reports, and Excel exports.
Then came AI pipelines: dynamic, constantly evolving, auto-retraining models.
Those old systems simply can't track what happens inside a neural network trained
across 50 data sources, for instance.
When the LIBOR scandal broke in 2012, regulators demanded proof
of data lineage in risk models.
Most banks couldn't produce it.
They couldn't, because their systems had no unified trace
from data origin to decision output. Why was this decision taken?
What input data was used?
That's the kind of gap modern AI governance must never repeat.
So here we are, introducing the unified framework.
To address this, we want to build a unified data governance framework
engineered specifically for AI-driven, robotics-heavy banking systems.
It embeds governance directly into the AI lifecycle, from feature engineering
to model deployment and monitoring.
It has four pillars. First, metadata management: the central catalog for
every single data asset and model.
Then comes lineage tracking.
This is super important:
every transformation, every dependency, visible end to end.
Then comes quality monitoring:
automated drift, anomaly, bias, and performance checks,
all performed at rapid speed.
Finally, cloud-native oversight.
Sub-200-millisecond query responses, tracking 5,000 to 10,000 model
updates daily, because fraud moves fast and AI has to stay on top of it.
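To make the first two pillars a little more concrete, here is a minimal sketch of what a catalog-and-lineage layer could look like. It is illustrative only: the class names (Asset, GovernanceCatalog), fields, and asset identifiers are hypothetical assumptions, not a reference to any specific product mentioned in this talk.

```python
# Minimal sketch of a governance catalog with end-to-end lineage.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Asset:
    """A registered data set, feature set, or model version."""
    asset_id: str
    kind: str                                      # "dataset", "feature", "model"
    parents: list = field(default_factory=list)    # upstream asset_ids
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class GovernanceCatalog:
    """Central registry: every asset and every dependency is recorded."""
    def __init__(self):
        self._assets = {}

    def register(self, asset: Asset):
        self._assets[asset.asset_id] = asset

    def lineage(self, asset_id: str) -> list:
        """Walk upstream dependencies end to end (the lineage pillar)."""
        seen, stack, chain = set(), [asset_id], []
        while stack:
            current = stack.pop()
            if current in seen or current not in self._assets:
                continue
            seen.add(current)
            chain.append(current)
            stack.extend(self._assets[current].parents)
        return chain

# Usage: register a raw feed, a derived feature set, and a model trained on it.
catalog = GovernanceCatalog()
catalog.register(Asset("txn_feed_v1", "dataset"))
catalog.register(Asset("fraud_features_v3", "feature", parents=["txn_feed_v1"]))
catalog.register(Asset("fraud_model_2025_04", "model", parents=["fraud_features_v3"]))
print(catalog.lineage("fraud_model_2025_04"))
# ['fraud_model_2025_04', 'fraud_features_v3', 'txn_feed_v1']
```

The point of the sketch is simply that lineage becomes a query over recorded metadata rather than a forensic exercise.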
For instance, if you followed the UK's open banking rollout in 2018, regulators
required every participating bank
to log data movement and consent lineage. That same philosophy, transparency
through traceability, is what this framework enforces.
But for AI systems, it's less like a gatekeeper and more like an air traffic
controller, ensuring thousands of AI models take off safely without colliding.
Then comes security.
Security is the other half of governance.
The 2020 SolarWinds breach reminded us that software supply chains
can become invisible entry points.
For AI, the equivalent risk is data lineage hijacking, when upstream
data is tampered with and silently corrupts the models and their outputs.
So what do we do?
We want role-based access control for tens of thousands of users.
We want a full audit trail for every access event. We need integration
with enterprise identity systems.
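As a rough illustration of those three requirements, here is a hedged sketch of role-based access checks with an audit trail. The role names, permissions, and the in-memory log are assumptions made for the example; a real deployment would take roles from the enterprise identity provider and write events to an append-only, tamper-evident store.

```python
# Sketch only: role-based access control plus an audit trail for every event.
from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "model_developer": {"read_features", "read_lineage"},
    "compliance_officer": {"read_lineage", "read_audit_log"},
    "data_steward": {"read_features", "write_metadata", "read_lineage"},
}

audit_log = []  # placeholder; in practice an append-only, tamper-evident store

def check_access(user: str, role: str, action: str) -> bool:
    """Allow or deny the action, and record the event either way."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,      # in production, supplied by the identity system
        "action": action,
        "allowed": allowed,
    })
    return allowed

# Usage: a developer reads lineage (allowed), then tries to edit metadata (denied).
check_access("a.nair", "model_developer", "read_lineage")
check_access("a.nair", "model_developer", "write_metadata")
print(audit_log[-1]["allowed"])   # False; the denial itself is also logged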
Let's take examples.
The framework we build has to support GDPR, CCPA,
Basel III, and BCBS 239.
BCBS 239, for instance, mandates that risk data be accurate, complete, and timely.
That's nearly impossible without automated lineage and validation systems.
When the European Central Bank audited AI credit
models in 2022, the number one failure point was missing lineage
between feature data and regulatory reports.
So governance isn't paperwork; it's survival in banking.
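To show what an automated check of that accuracy, completeness, and timeliness mandate might look like in practice, here is a small illustrative validator. The field names, the batch shape, and the one-day freshness threshold are assumptions for the example, not requirements taken from BCBS 239 itself.

```python
# Illustrative only: three automated checks loosely matching the themes of
# completeness, accuracy, and timeliness. Thresholds and fields are made up.
from datetime import datetime, timedelta, timezone

def validate_risk_batch(records: list, max_age: timedelta) -> dict:
    """Return a pass/fail report for a batch of risk-data records."""
    now = datetime.now(timezone.utc)
    required = {"account_id", "exposure", "as_of"}

    complete = all(required.issubset(rec) for rec in records)
    accurate = all(isinstance(rec.get("exposure"), (int, float))
                   and rec["exposure"] >= 0 for rec in records)
    timely = all(now - rec["as_of"] <= max_age
                 for rec in records if "as_of" in rec)

    return {"complete": complete, "accurate": accurate, "timely": timely,
            "pass": complete and accurate and timely}

# Usage: one record is fresh and well formed, the other is three days stale.
batch = [
    {"account_id": "A1", "exposure": 1_250_000.0,
     "as_of": datetime.now(timezone.utc)},
    {"account_id": "A2", "exposure": 80_000.0,
     "as_of": datetime.now(timezone.utc) - timedelta(days=3)},
]
print(validate_risk_batch(batch, max_age=timedelta(days=1)))
# {'complete': True, 'accurate': True, 'timely': False, 'pass': False}
```

Checks like this, wired into the pipeline rather than run by hand, are what make the mandate achievable at AI speed.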
Let's jump into a real world case.
For instance, a tier one global bank adopted this unified
governance framework in 2025.
Within months, they achieved
30% faster AI deployment with no manual compliance bottlenecks,
40% stronger audit readiness as traceability became automatic,
and 100% compliance across GDPR, Basel III, and BCBS standards.
And here is an interesting anecdote.
During an internal audit, the regulator asked them to
trace a fraud detection model back to its 2019 training data.
That would once have taken weeks or even months.
In 2025, the answer came up in 0.3 seconds through a single governance query.
That's not just efficiency, that's cultural transformation, and
that's what we are headed towards.
How do we scale this responsibly?
How do we ensure robotics-ready governance?
There are four pillars here too.
Foundation: build on cloud-native infrastructure
with embedded governance.
Integration: link your AI lifecycle tools directly to governance pipelines.
Automation: continuously monitor for bias, drift, and compliance anomalies.
Finally, scale: expand innovation safely across all AI workloads.
An interesting anecdote comes to mind from 2012, when Netflix
moved its entire recommender system to the cloud.
They built observability into the migration from day one.
Banking needs to take the same approach.
Governance first, not governance later.
Here's what I want to leave you with.
Governance is an enabler.
It doesn't slow innovation.
It unlocks it.
Again, it doesn't slow innovation.
It unlocks it.
If we don't follow governance, we end up slowing our business.
Robotics demands real-time systems.
Governance must match AI's velocity; without that, it won't work.
Compliance builds trust.
And trust, that's the cornerstone of AI adoption.
Every financial revolution, from ATMs in the 1970s to blockchain in the 2010s,
faced skepticism until governance caught up.
Today's AI revolution is no different, especially for banking.
We are not just building models.
We are building systems that society has to trust with its money.
So the next time you see an AI approving a mortgage, flagging a transaction, or
detecting fraud, remember: behind every intelligent decision should stand an
equally intelligent governance system.
Thank you.