Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
I'm Nikita ti.
I'm a manager of Data and AI Solutions.
It's an honor to be here at Conf42 to share my perspective on a topic that
is shaping the future of healthcare: MLOps at scale, designing governed
AI pipelines for healthcare impact.
And I'd like to open with this statement from the deck.
In healthcare, AI doesn't just need to be powerful.
It needs to be trusted.
That single line captures the heart of today's discussion.
Let me give you an example.
Think about a model that predicts ICU admissions with 95% accuracy on paper.
It's extremely powerful, but if clinicians and regulators can't trust
how it reached those predictions, it may actually never be used.
Governance is what makes that model usable and impactful.
Why does MLOps matter in healthcare? AI is transforming healthcare.
It's enabling everything from advanced diagnosis to improved
operational efficiency.
But if we look at the challenges highlighted here, we see that
innovation often stalls because of fragmented pipelines, data
silos, and compliance risks.
Without a structured approach, trust is difficult to establish,
and scaling AI across healthcare systems becomes nearly impossible.
How can we solve this?
Governed, scalable, and secure MLOps pipelines address these issues by
unifying workflows and embedding compliance into every stage.
This enables faster innovation, safer integration into clinical
practice, and more impactful outcomes.
The result is better patient care and more efficient operations.
An example would be a hospital that builds a pneumonia detection
model from chest x-rays.
It works extremely well, but fails to expand across the
system due to inconsistent data formats and compliance concerns.
A governed pipeline standardizes and secures the data flow, allowing
that model to scale safely.
So we see that a governed pipeline is what solves for scalability.
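To make that standardization layer concrete, here is a minimal Python sketch. The site names, field mappings, and the simplified record schema are hypothetical assumptions for illustration; a real pipeline would map far richer DICOM and EMR metadata.

```python
from dataclasses import dataclass

@dataclass
class XrayRecord:
    patient_id: str    # de-identified identifier, never a raw MRN
    acquired_at: str   # ISO 8601 timestamp
    view: str          # normalized to "PA" or "AP"

# Hypothetical per-site field mappings: each hospital exports different names.
SITE_FIELD_MAPS = {
    "site_a": {"patient_id": "pid", "acquired_at": "scan_time", "view": "projection"},
    "site_b": {"patient_id": "PatientID", "acquired_at": "AcquisitionDate", "view": "ViewPosition"},
}

def normalize(raw: dict, site: str) -> XrayRecord:
    """Map one site's raw export onto the shared schema."""
    fields = SITE_FIELD_MAPS[site]
    return XrayRecord(
        patient_id=raw[fields["patient_id"]],
        acquired_at=raw[fields["acquired_at"]],
        view=raw[fields["view"]].strip().upper(),
    )

print(normalize({"pid": "anon-042", "scan_time": "2024-05-01T09:30:00Z",
                 "projection": "pa "}, site="site_a"))
```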
Here
we have four major challenges we must overcome to scale AI
responsibly in healthcare.
First is data governance.
Second is model trust.
Third one is integration, and fourth one is operational scalability.
What does data governance mean?
It's ensuring that we are compliant with HIPAA in the US and GDPR in Europe,
and protecting PHI and PII at every stage of the pipeline.
If lab results are ingested without proper de-identification, a single breach could
compromise thousands of patient records.
So governance is extremely important.
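As an illustration of de-identification at ingestion, here is a minimal sketch. The field names, salting scheme, and redaction regex are assumptions for illustration only, not a complete HIPAA Safe Harbor implementation.

```python
import hashlib
import re

SALT = "rotate-me-per-deployment"  # hypothetical; manage via a secrets store

def pseudonymize_id(mrn: str) -> str:
    """Replace a medical record number with a salted one-way hash."""
    return hashlib.sha256((SALT + mrn).encode()).hexdigest()[:16]

def deidentify_lab_result(record: dict) -> dict:
    """Strip direct identifiers from a lab record before ingestion."""
    clean = dict(record)
    clean["patient_id"] = pseudonymize_id(clean.pop("mrn"))
    clean.pop("patient_name", None)  # drop direct identifiers entirely
    # Redact phone numbers that leak into free-text comments.
    clean["comments"] = re.sub(r"\b\d{3}[-.]?\d{3}[-.]?\d{4}\b",
                               "[REDACTED]", clean.get("comments", ""))
    return clean

print(deidentify_lab_result({
    "mrn": "123456", "patient_name": "Jane Doe",
    "test": "creatinine", "value": 1.8,
    "comments": "call 555-123-4567 with results",
}))
```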
How do we build model trust?
We build it by addressing bias, drift, and explainability.
To build confidence in AI decisions, clinicians need to understand
and trust why a model is making a recommendation.
The third challenge is integrating these tools seamlessly into
clinical workflows: if a tool disrupts care delivery or adds extra
burden, it won't be adopted by doctors or clinicians.
Let's say there's an AI triage tool that requires the doctor to log
into a separate portal.
It often fails.
The same tool integrated directly into the EHR would more likely
succeed, because there are no extra clicks and no extra logins.
And how do we scale?
Expanding from small pilots to enterprise-wide AI deployments
remains one of the toughest barriers.
Example, a readmission risk model may be piloted in one hospital, but
scaling across 50 requires standards, pipelines, automation and monitoring.
So taking all this together, these challenges highlight that AI success is
not just about developing advanced models.
It's about building the governance, trust, and infrastructure
required for adoption at scale.
How do we define MLOps for healthcare?
MLOps in healthcare combines the best of DevOps practices with the AI/ML
lifecycle while embedding strict governance to ensure compliance,
security, and trust.
The MLOps lifecycle consists of data ingestion, pre-processing,
training, deployment, and monitoring.
The key difference in healthcare is that compliance and security must
be embedded at every stage, so it's not an afterthought: at every
step of the lifecycle, we want to embed compliance and security.
What does this mean?
It means that all data flowing through the pipelines is traceable and auditable.
Models are built on high-quality, bias-free data sets, and accuracy,
accountability, and compliance are maintained end-to-end.
I work for a dialysis company, so let me give you an example:
a kidney disease progression model that ingests EMR data,
processes it with PHI masking, trains with bias checks, deploys
via secure APIs, and monitors drift quarterly.
This is MLOps in action.
It's not just powerful, it's trusted.
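A minimal sketch of that lifecycle, with every helper a hypothetical stub, shows how the governance steps become first-class pipeline stages rather than afterthoughts.

```python
def deidentify(record: dict) -> dict:
    """PHI masking stub: drop direct identifiers before anything else."""
    return {k: v for k, v in record.items() if k not in {"mrn", "name"}}

def train_model(records: list[dict]):
    return object()  # stub: fit the actual progression model here

def passes_bias_checks(model, records: list[dict]) -> bool:
    return True      # stub: e.g. compare error rates across subgroups

def deploy(model) -> None:
    pass             # stub: publish behind an authenticated, encrypted API

def schedule_drift_check(model, every_days: int) -> None:
    pass             # stub: register a recurring monitoring job

def run_pipeline(raw_emr_records: list[dict]) -> None:
    records = [deidentify(r) for r in raw_emr_records]  # masking first
    model = train_model(records)
    if not passes_bias_checks(model, records):
        raise RuntimeError("Deployment blocked: bias check failed")
    deploy(model)
    schedule_drift_check(model, every_days=90)          # quarterly cadence

run_pipeline([{"mrn": "42", "name": "J. Doe", "egfr": 55}])
```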
There are four pillars of governed AI pipelines that support
responsible AI delivery: data lineage and quality; model governance;
security and compliance; and scalability, automation, and monitoring.
What is data lineage and quality?
It's ensuring all data is traceable and accurate, so we are
building on a trustworthy foundation.
And what is model governance?
We want AI models to undergo validation, explainability checks, and
ethical reviews to safeguard responsible use.
For security and compliance, we want sensitive patient data to be
protected through encryption and monitoring that align with HIPAA and GDPR.
How do we scale, automate, and monitor?
CI/CD pipelines accelerate deployment, and continuous monitoring with
drift detection ensures AI remains effective; I'll show a small
drift-check sketch in a moment.
So together, these pillars provide the reliability and transparency that
clinicians, regulators, and patients need to embrace AI in healthcare.
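Here is that drift-check sketch: a minimal example using SciPy's two-sample Kolmogorov-Smirnov test on a single feature. The feature, alert threshold, and synthetic data are assumptions for illustration; production systems typically monitor many features and model outputs together.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values: np.ndarray,
                        live_values: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Return True if the live distribution has drifted from training."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.2, 5_000)  # e.g. creatinine at training time
live = rng.normal(1.3, 0.2, 5_000)      # shifted live distribution
if check_feature_drift(baseline, live):
    print("Drift detected: trigger review / retraining workflow")
```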
Let me walk you through the workflow step by step.
So this is the architecture of a governed MLOps pipeline.
We have ingestion first: in the first step of the workflow, data is
collected from various sources, from EMRs, IoT devices, imaging
systems, and claims platforms.
This creates a rich foundation for robust models.
In the governance layer, we have data catalogs, access controls, and
audit trails to enforce compliance and ensure trust.
And in the ML workflow, we train the models, validate them, and
check for explainability before they're put into production.
And for deployment, models are deployed securely through APIs and
containerized environments, with hybrid or multi-cloud setups
providing flexibility.
For monitoring, we continuously watch for drift, anomalies, and
performance issues in real time.
Human-in-the-loop validation ensures the AI remains clinically
relevant and trustworthy.
For example, IoT devices from dialysis machines stream patient
vitals into governed pipelines.
If sudden anomalies like a drop in blood pressure appear, clinicians are
alerted before the patient deteriorates.
This is MLOps saving lives.
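A minimal sketch of the kind of rule that sits in that monitoring layer might look like this; the window size, drop threshold, and alerting hook are hypothetical placeholders for a real clinical alerting system.

```python
from collections import deque

WINDOW = 5       # compare against the last 5 readings
DROP_MMHG = 20   # alert on a systolic drop of 20 mmHg or more

def alert_clinician(value: float) -> None:
    print(f"ALERT: sudden systolic drop to {value} mmHg")  # stub: page care team

def watch_systolic(readings) -> None:
    """Stream systolic readings and alert on a sudden drop vs. recent mean."""
    recent = deque(maxlen=WINDOW)
    for value in readings:
        if recent and (sum(recent) / len(recent)) - value >= DROP_MMHG:
            alert_clinician(value)
        recent.append(value)

watch_systolic([118, 120, 119, 117, 121, 96])  # last reading triggers alert
```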
Okay: generative AI in healthcare MLOps.
This shows how generative AI is adding new capabilities to healthcare
MLOps: automating data curation, enhancing explainability by producing
clinician-friendly summaries, accelerating documentation and coding
workflows, and adding responsible guardrails such as bias mitigation
and clinical validation.
This means that Gen AI can bridge the gap between complex
models and human decision making.
Instead of outputting only a numeric risk score, a Gen AI layer can
explain: the patient is at elevated risk due to abnormal creatinine
levels, a history of hypertension, and two prior admissions in the
last six months.
So it gives us the reasoning behind it, not just the score.
So it's explaining why the patient is at risk.
But while the potential is immense, so are the risks: without
governance, Gen AI could hallucinate or amplify bias.
That is why governance remains critical here as well.
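One way to keep such an explanation layer from hallucinating is to generate text only from the risk factors the model actually used. Here is a minimal, template-based sketch of that guardrail; the factor names, templates, and risk score are illustrative assumptions, not the system described in the talk.

```python
# Whitelisted templates: the narrative can only mention computed factors.
TEMPLATES = {
    "creatinine_high": "abnormal creatinine levels",
    "hypertension_history": "a history of hypertension",
    "recent_admissions": "{n} prior admissions in the last six months",
}

def explain(risk_score: float, factors: dict) -> str:
    """Build a clinician-facing explanation grounded in model factors."""
    reasons = [TEMPLATES[name].format(n=value)
               for name, value in factors.items()
               if name in TEMPLATES and value]
    return (f"Elevated risk ({risk_score:.0%}) due to "
            + ", ".join(reasons) + ".")

print(explain(0.82, {"creatinine_high": True,
                     "hypertension_history": True,
                     "recent_admissions": 2}))
```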
What is the impact across healthcare?
When governed MLOps pipelines are in place, the impact across
healthcare is substantial.
We see improved patient outcomes.
We see reduced operational costs, strong compliance, accelerated innovation cycles.
How do we see improved patient outcomes?
AI powered insights enable earlier interventions and better treatment
decisions resulting in healthier patients and fewer readmissions.
For example, a sepsis model deployed in the ICU flagged risk six
hours before symptoms appeared, enabling life-saving early treatment.
And how do we reduce operational costs?
Automation cuts administrative overhead and optimizes workflows.
Automating claims adjudication reduced manual review workloads by 40% in
some health systems.
And for compliance, governed pipelines ensure alignment with
regulations such as HIPAA and GDPR, minimizing exposure and
building patient trust.
An example would be a full audit trail that showed regulators
exactly how an oncology AI system made its predictions, avoiding
penalties.
And for accelerated innovation cycles, pipelines enable faster
movement from pilot to enterprise deployment, driving quicker
adoption and continuous improvement.
So in short, governed MLOps delivers value on multiple fronts:
clinical, operational, regulatory, and strategic.
To conclude, MLOps at scale is essential for ensuring that
healthcare organizations can adopt AI in a sustainable and impactful way.
Without structured pipelines, projects will remain stuck as pilots
and fail to reach enterprise scale.
Governance plays a central role by embedding trust, compliance, and
safety at every stage of the AI lifecycle.
And while generative AI can accelerate development and insights, it
must be integrated responsibly.
So, as this slide summarizes: success is technical scalability plus
clinical adoption plus compliance-first design.
So governance is not the brake on innovation; it's the steering wheel
that guides AI safely and effectively into healthcare.
Just as seat belts and traffic lights don't slow cars down but make
driving safe, governance ensures AI can operate at full speed without
crashing trust.
Thank you for your time and attention.
I hope this session has given you a clear understanding of why governance
in MLOps is not just a regulatory requirement, but the foundation for
building trust in healthcare AI.
Thank you.