Transcript
This transcript was autogenerated. To make changes, submit a PR.
Good morning, good afternoon, everyone.
I'm Ash Lakshmi, currently working at DT Data US, with over 22 years of experience in quality engineering, specializing in test architecture, health plan systems, and enterprise integration.
Throughout my career I have played a pivotal role in ensuring seamless implementation of enterprise applications, accelerating healthcare system transformations, and enhancing compliance through advanced test automation strategies.
I'm here to talk about how observability can transform quality assurance in health insurance platforms.
With over two decades in IT and quality engineering, I have seen firsthand how traditional QA falls short in today's fast-moving, highly regulated healthcare environments.
That's why this talk is about evolving from testing to observing, and how that shift unlocks massive value for providers, for members, and for payers on a health insurance platform.
The industry is one where trust, compliance, and experience are not negotiable, so we need monitoring strategies that are not just reactive, but proactive, insightful, and actionable.
Okay.
Now, why does observability matter?
Let's start with why.
The healthcare sector today loses an average of 43% in revenue during system outages.
Consider the difference between having and not having observability tools: the mean time to resolve is 68 minutes without observability, but with observability it is just 12 minutes.
And then there is the uptime expectation, system availability of 99.99 percent.
This is not just a target; it is a necessity for member experience, trust, and regulatory compliance.
In other words, observability is not a luxury.
It's a survival tool.
Okay, let's think of an example in health insurance: a parent is trying to update their insurance coverage for their child's surgery.
They log into the member portal, and the portal crashes and doesn't provide the information they need.
It was down.
What is the experience of that member?
Emotionally, the family is going through a surgery, whereas from an IT support standpoint, as a payer, we are not able to provide the service they needed.
It's a failed promise.
So that is what observability is about: how we prevent these moments, improve the user experience and the member experience, and provide a smooth journey.
That is what it's all about.
Okay.
Now, what is the observability trinity?
It is not just about logging.
It is metrics, logs, and traces.
How is this categorized? Let's start with metrics: the quantifiable insights.
Think about dashboards.
Think about claim processing time by complexity, prior authorization approval rates, or member portal latency during open enrollment or a peak enrollment period.
And then logs.
They tell the story behind system events, which is vital for audits, compliance, and debugging.
When a member accesses their data, the log shows what PHI data has been accessed.
Logs also narrate detailed errors; when a provider lookup fails, they show why it failed.
For HIPAA compliance, anything on the audit trail will show up there, so logs offer context as well as compliance coverage.
And then traces: they connect the dots, showing the end-to-end journey.
What does that mean?
A claim will go through multiple systems.
In an API architecture with 20-plus microservices, it passes through each of them: checking eligibility, checking provider validity, determining the pricing configuration, which benefits have been applied, whether it needs any authorization, and the status of the claim.
It's an end-to-end journey that goes through various services, and at each step the trace tracks the processing details.
In fact, traces are vital in a distributed architecture, especially with APIs, third-party systems, and legacy bridges.
Together, the three empower teams to monitor, diagnose, and optimize complex workflows.
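To make the trinity concrete, here is a minimal sketch, assuming a hypothetical claims service instrumented with the OpenTelemetry Python API plus the standard logging module; the service name, metric name, and span names are illustrative, not from the talk.

# Emitting all three signals from a hypothetical claims service.
import logging
import time

from opentelemetry import metrics, trace

tracer = trace.get_tracer("claims-service")
meter = metrics.get_meter("claims-service")
logger = logging.getLogger("claims-service")

# Metric: a quantifiable insight, e.g. processing time by claim complexity.
claim_duration = meter.create_histogram(
    "claim.process.duration",
    unit="ms",
    description="Claim processing time, tagged by complexity",
)

def process_claim(claim_id: str, complexity: str) -> None:
    # Trace: one span per step of the end-to-end claim journey.
    with tracer.start_as_current_span("process-claim") as span:
        span.set_attribute("claim.id", claim_id)
        start = time.monotonic()

        with tracer.start_as_current_span("check-eligibility"):
            pass  # call the eligibility microservice here

        with tracer.start_as_current_span("apply-benefits"):
            pass  # pricing configuration, benefit application, auth checks

        elapsed_ms = (time.monotonic() - start) * 1000
        claim_duration.record(elapsed_ms, {"claim.complexity": complexity})

        # Log: the story behind the event, useful for audits and debugging.
        logger.info("claim %s adjudicated in %.1f ms", claim_id, elapsed_ms)

The same pattern scales out: each microservice in the claims journey opens its own spans, and the shared trace ID ties them together end to end.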
Now coming to the regulatory benefits.
Observability is not just a compliance enabler anymore; it is effectively a mandatory regulatory requirement.
Think especially about HIPAA audit readiness: tracking who accessed PHI and when, being able to trace every step of a claim's journey, or even automatically preserving access evidence during regulatory investigations.
It's not only about fixing things fast, but also about proving you are doing the right things.
QA teams often overlook this.
Observability isn't just technical.
It's legal armor.
Okay.
Then, coming to the metrics they measure: HIPAA audit readiness, claims processing verification, SLA documentation, evidence preservation.
All of those are part of the compliance benefit that is being monitored.
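As a hedged illustration of "tracking who accessed PHI and when," here is a small sketch of a structured audit-log entry; the log_phi_access helper and its field names are hypothetical, and a real deployment would ship these records to tamper-evident storage for investigations.

# A minimal HIPAA-style audit trail for PHI access.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("phi.audit")

def log_phi_access(actor_id: str, member_id: str, resource: str, action: str) -> None:
    """Record who accessed which member's PHI, when, and what they did."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor_id,    # user or service that accessed the data
        "member": member_id,  # whose PHI was touched
        "resource": resource, # e.g. "claims/CLM-1234"
        "action": action,     # e.g. "read", "update"
    }
    # Emit as structured JSON so auditors can query it during investigations.
    audit_logger.info(json.dumps(event))

# Example: a claims examiner viewing a member's claim record.
log_phi_access("examiner-042", "MBR-99107", "claims/CLM-1234", "read")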
Now, let's think about some of the metrics in real claims processing.
Auto-adjudication rate: obviously the system needs to handle the volume of claims we receive, so what is our adjudication success rate?
Then first-pass resolution percentage, average processing duration, and error rates by claim type.
It's all about the claims processing data, which needs to be measured from a metrics standpoint.
Next, member experience: portal availability (I shared the example of the member portal needing to be available ahead of a surgery; it is important), authentication success rate, and provider search accuracy: is the right provider being shown, and the right services being offered?
And then session duration metrics: on average, how long a member is in the portal and able to use it.
And last, system performance: database query and API response times, infrastructure resource utilization, and batch processing completion rates.
This is where domain-specific observability makes a difference.
You can't improve what you can't measure, and these KPIs link directly to cost, compliance, and customer satisfaction.
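As a toy illustration of two of the KPIs just named, this sketch computes auto-adjudication rate and first-pass resolution from claim outcome records; the ClaimOutcome shape and its fields are assumptions made up for the example.

# Computing two claims KPIs from per-claim outcome records.
from dataclasses import dataclass

@dataclass
class ClaimOutcome:
    claim_id: str
    auto_adjudicated: bool  # resolved with no manual intervention
    touches: int            # number of processing passes it took

def auto_adjudication_rate(claims: list[ClaimOutcome]) -> float:
    return 100.0 * sum(c.auto_adjudicated for c in claims) / len(claims)

def first_pass_resolution(claims: list[ClaimOutcome]) -> float:
    return 100.0 * sum(c.touches == 1 for c in claims) / len(claims)

claims = [
    ClaimOutcome("CLM-1", True, 1),
    ClaimOutcome("CLM-2", False, 3),
    ClaimOutcome("CLM-3", True, 1),
]
print(f"auto-adjudication: {auto_adjudication_rate(claims):.0f}%")    # 67%
print(f"first-pass resolution: {first_pass_resolution(claims):.0f}%")  # 67%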
Now the question comes up: how do you implement it?
Here is how to make it happen.
Assess your gaps, then instrument your critical systems, setting up metrics, logs, and traces.
Centralize everything into a single platform: use a unified observability platform to correlate events.
Visualize it: create dashboards tailored by role, whether compliance, operations, or IT.
And automate: AI-driven anomaly detection and predictive alerts, through to RCA (root cause analysis) and automated remediation, which we can do from an automation standpoint.
This roadmap shifts QA from a reactive call center to a proactive reliability enabler.
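For the automate step, here is a minimal sketch, assuming a 500 ms p95 latency budget for the member portal, of a threshold check that could feed predictive alerting; the SLO value and the notify() hook are illustrative.

# A threshold alert on member-portal latency against an assumed SLO.
from statistics import quantiles

P95_SLO_MS = 500.0  # assumed latency budget for the member portal

def notify(message: str) -> None:
    print(f"ALERT: {message}")  # stand-in for a pager or chat integration

def check_latency_slo(samples_ms: list[float]) -> None:
    p95 = quantiles(samples_ms, n=20)[-1]  # 95th percentile of the window
    if p95 > P95_SLO_MS:
        notify(f"portal p95 latency {p95:.0f} ms breaches {P95_SLO_MS:.0f} ms SLO")

check_latency_slo([120, 180, 240, 900, 1100, 150, 200, 210, 190, 950])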
Okay, now let's talk about where observability is heading.
Industry-wide, there are tools such as Datadog, Dynatrace, and Splunk; multiple tools are available in the market that provide specialized capabilities for monitoring regulated healthcare systems.
Now, from an AI standpoint, where is observability heading?
One direction is anomaly detection: identifying fraud or slowdowns in claim workflows before users notice.
Then predictive alerting: forecasting system stress before it crashes, whether member access, provider access, or anything else.
Then root cause analysis: cutting days of debugging down to minutes.
And self-healing: automatically restarting services or rerouting requests so that members, providers, and any processing get a smooth experience.
AI doesn't replace teams; it empowers them to move from firefighting to foresight.
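As one hedged sketch of the anomaly-detection idea, the following rolling z-score detector flags claim-workflow slowdowns before users would notice; the window size and three-sigma threshold are illustrative choices, not a prescribed configuration.

# Rolling z-score detector for claim-workflow slowdowns.
from collections import deque
from statistics import mean, stdev

class SlowdownDetector:
    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, duration_ms: float) -> bool:
        """Return True if this observation looks anomalous."""
        anomalous = False
        if len(self.history) >= 10:  # need a baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (duration_ms - mu) / sigma > self.threshold:
                anomalous = True  # candidate for a predictive alert and RCA
        self.history.append(duration_ms)
        return anomalous

detector = SlowdownDetector()
for ms in [200, 210, 195, 205, 198, 202, 207, 199, 204, 201, 1500]:
    if detector.observe(ms):
        print(f"claim workflow slowdown detected: {ms} ms")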
Okay.
Let me take a case study of Anthem Blue Cross Blue Shield.
The challenge they faced: frequent portal outages, claim processing delays averaging 12 days, and outages affecting 15 million users.
They implemented distributed tracing across 200-plus services and built real-time dashboards over the claims pipelines.
What was the result?
Mean time to resolution dropped from 45 minutes to 5 minutes, and claim processing time came down by 62 percent.
The lesson: start with high-impact business areas, and trust cross-functional teams to improve observability adoption.
Okay, now the challenges, right?
This is the main thing: we know what the problem is, we know what the solution is, but what challenges do we have to address?
One is security concerns: balancing observability with privacy, because PHI must be protected even while observing it.
Then cost management: storing logs and traces adds up, and you need retention policies for how long to keep data for monitoring and analysis.
Then legacy systems: you need minimal disruption, and many of those legacy systems may not be built for observability.
And do the teams have the skills you may need?
We need to provide training so that QA and ops teams build out all those critical capabilities.
So pace yourself: build incrementally and use pilot services to prove value.
Okay, so what is the next step from an action standpoint?
Here is what I recommend.
Assess: start with one critical service, like claims, eligibility, or the member portal.
Do a pilot of it: instrument it, track metrics, add logs, map traces.
Then scale: based on the pilot's learnings, roll out observability across the platform.
And then optimize: create dashboards, align with compliance, and adjust based on what you learn.
And don't forget: QA, ops, security, and product must partner in this journey.
Let observability become our invisible advantage.
So to conclude: observability isn't just a toolset, it's a mindset shift.
It redefines QA.
It's how we shift from defect detection to experience protection, from compliance stress to audit confidence, from reactive to predictive.
If you would like to explore further or need guidance, I have provided an email; feel free to reach out.
I'm happy to help.
Thank you.
Thanks.