Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and thank you for joining my presentation on building
MLOps for autonomous mortgage processing.
I want to start by asking you to imagine a scenario: you have found your
dream home, you have made an offer, and now you're waiting for mortgage approval.
In the traditional world, a mortgage application goes through some 47
manual steps, and it takes anywhere from 30 to 60 days.
And all the while, workflows, either manual or digital, bounce between people
and across different departments.
But imagine if that entire process could happen in three days, and
also with better accuracy and compliance than the old manual system.
Let me quickly introduce myself. I have spent the last several years
building exactly these kinds of systems in the mortgage industry.
Today, I want to walk you all through how MLOps principles can transform one of the
most paper-heavy and regulation-intensive industries into something that actually
works for customers in the 21st century.
And just to provide a sense of what's at stake, the mortgage industry processes
about $2 trillion in loans annually, and that's in the US alone.
When you apply modern MLOps at that scale, with all the regulatory complexity
and legacy system challenges, you get some truly fascinating
technical problems to solve.
More importantly, you get the opportunity to make a real difference
in people's lives because, to be honest, buying a home shouldn't
feel like navigating a bureaucratic maze from the 1980s.
Now, to give you some context on where I'm coming from, I've been working at the
intersection of AI and financial services, and what I find most interesting is how
MLOps principles had to evolve when we applied them to heavily regulated
industries like mortgage lending.
Now, that $8 million in savings I mentioned on my slide isn't just a number.
It actually represents thousands of families who got their applications
approved faster, with fewer errors and less frustration.
The generative AI chatbot we built now handles about 60% of all customer service
inquiries, which means human agents can now focus on the really complex
cases that need genuine expertise.
Now, why is this domain special?
Here's what makes mortgage MLOps so fascinating from a technical perspective:
traditional MLOps assumes you can move fast and break things.
In mortgage lending, breaking things means someone might not get the home
they've always dreamed about, or worse, you might violate fair lending
practices and face multimillion-dollar fines.
So we have to get creative on how to apply these principles safely.
I'd like to start by going over the industry's challenges in adopting MLOps.
Let me start by giving you a sense of just how complex traditional
mortgage processing really is.
Like I mentioned previously, there are 47 distinct steps from
initial application to closing.
And that's for a very straightforward purchase loan.
There are other loan types that can be even more complex,
with more distinct steps, and I also want to quickly walk
you through a real-world scenario.
Now, someone applies for a mortgage through an online
portal on Friday evening.
That application is going to sit in a queue until Monday morning when
a human processor will pick it up.
Then they need to verify the applicant's employment,
which means calling HR departments that might not respond until Tuesday.
They have to order an appraisal, which gets scheduled for the following week.
Meanwhile, the credit report needs manual review.
Bank statements get analyzed line by line, and then there are dozens
of regulatory compliance checks.
Now, each step involves different people, often different systems that
don't talk to each other, which is typical in any industry, and there
are countless opportunities for applications to get stuck in queues
and also to fall through the cracks.
Here lies the technical opportunity.
From an MLOps perspective, you are dealing with incredibly diverse data types:
structured data from credit bureaus, unstructured data from documents like tax
returns and bank statements, and real-time feeds from property valuation services.
Now, all of this has to integrate with mainframe systems that
were built back in the 1970s or eighties.
But when you apply modern MLOps principles correctly to this problem,
you can reduce the 30-to-60-day processing timeline to just three
to five days while also improving accuracy and regulatory compliance.
That is the transformation we are going to explore here today.
Now I want to discuss the unique technical challenges and regulatory hurdles
in building AI-enabled MLOps for the mortgage industry.
Scaling is the first reality check.
When I talk about scaling in mortgage, I'm talking about processing a hundred
thousand plus applications daily.
This is especially true during peak volume periods like the spring home-buying season.
But it is not just about high volume.
It is a high volume of incredibly complex, high-stakes decisions.
Each application might involve dozens of ML models working together: fraud
detection, income verification, property valuation, risk assessment,
and even compliance checking.
The second hurdle is the regulatory complexity.
Now, here's something that makes mortgage MLOps fundamentally different
from almost any other ML domain:
explainability is not just a nice-to-have feature, it is a legal requirement.
If we deny someone a loan, we have to be able to provide specific, understandable
reasons that regulators can audit.
That means every model decision needs to be traceable back through the entire
data pipeline with complete lineage.
I remember one regulatory audit where an examiner asked
about a specific loan decision from 18 months ago.
In traditional systems, that would mean days of detective work going
through log files and database records.
With the proper MLOps architecture, I could pull up the complete decision
tree in seconds: which models were involved, what data was used, which
business rules fired, and even the exact versions that were active at
the moment the rule engine ran.
Now, there's also the legacy integration reality that we all know about,
which is probably the most challenging aspect of mortgage MLOps.
We have critical loan data living in mainframe systems from the 1980s.
There are no APIs, no JSON, no microservices, just batch files
and data formats that literally predate the World Wide Web.
You end up building entire AI systems whose primary job is just to translate
between modern MLOps pipelines and these legacy systems.
It is like building universal translators for computer systems that
speak completely different languages.
Traditional MLOps principles had to evolve significantly when we
applied them to mortgage processing.
Let me walk you through how each core area had to be reimagined.
The first aspect is data governance.
Data governance in mortgage AI goes way beyond typical ML governance.
Every piece of data that influences a loan decision must be tracked
with complete lineage, not just for debugging, but also for legal compliance.
We need to know what data was used, when it was collected,
how it was processed, who had access to it, and how all of it
contributed to the final decisions.
We developed something I call compliance-first feature engineering,
where every feature pipeline automatically includes bias detection,
PII anonymization, and audit trail generation.
The feature engineering code itself generates documentation that compliance
officers can actually understand and use in regulatory submissions.
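To make that idea concrete, here is a minimal sketch of what a compliance-first feature pipeline could look like. The class, field, and version names are invented for illustration and are not the production implementation described in the talk.

```python
# Minimal sketch of a compliance-first feature pipeline (illustrative names,
# not the real system). Every feature computation also emits an audit record
# and works only on a PII-masked view of its inputs.
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

PII_FIELDS = {"ssn", "name", "phone", "email"}

def mask_pii(record: dict) -> dict:
    """Replace PII values with truncated hashes so downstream steps never see raw PII."""
    return {k: (hashlib.sha256(str(v).encode()).hexdigest()[:12] if k in PII_FIELDS else v)
            for k, v in record.items()}

@dataclass
class ComplianceFeaturePipeline:
    audit_log: list = field(default_factory=list)

    def compute_features(self, application: dict) -> dict:
        clean = mask_pii(application)
        # Engineered features only -- raw documents never leave this step.
        features = {
            "dti_ratio": clean["monthly_debt"] / max(clean["monthly_income"], 1),
            "ltv_ratio": clean["loan_amount"] / max(clean["property_value"], 1),
        }
        # Audit trail: what was computed, when, and by which pipeline version.
        self.audit_log.append({
            "application_id": clean["application_id"],
            "features": list(features),
            "pipeline_version": "feat-pipeline-1.4.2",   # hypothetical version tag
            "computed_at": datetime.now(timezone.utc).isoformat(),
        })
        return features

pipeline = ComplianceFeaturePipeline()
feats = pipeline.compute_features({
    "application_id": "A-1001", "ssn": "123-45-6789", "name": "Jane Doe",
    "phone": "555-0100", "email": "jane@example.com",
    "monthly_debt": 2200, "monthly_income": 8000,
    "loan_amount": 400000, "property_value": 500000,
})
print(feats)
print(json.dumps(pipeline.audit_log[-1], indent=2))
```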
The second MLOps aspect is how model deployment practices had to evolve.
Model deployment looks completely different too:
we cannot just optimize for accuracy.
We have to simultaneously optimize for fairness, explainability,
and regulatory compliance.
The training pipelines include automated fairness validation across different
demographic groups, bias detection, and explainability report generation.
We developed something we call a regulatory-aware cross-validation architecture,
where our validation sets are specifically constructed to test model performance
across different demographic groups, geographic regions, and market conditions
to ensure we are not unknowingly creating discriminatory outcomes.
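As a rough illustration of that regulatory-aware validation idea, the sketch below evaluates a candidate model separately on demographic-group and region slices instead of reporting one aggregate score. The record layout, slice keys, and the 5% accuracy-gap threshold are assumptions for the example.

```python
# Hypothetical "regulatory-aware" validation: score the model per slice and
# flag slices that underperform, instead of trusting one aggregate metric.
from collections import defaultdict

def evaluate_by_slice(records, predict_fn, slice_keys=("demographic_group", "region")):
    """records: list of dicts with 'features', a binary 'label', and slice attributes."""
    slices = defaultdict(lambda: {"correct": 0, "total": 0, "approved": 0})
    for rec in records:
        key = tuple(rec[k] for k in slice_keys)
        pred = predict_fn(rec["features"])
        stats = slices[key]
        stats["total"] += 1
        stats["correct"] += int(pred == rec["label"])
        stats["approved"] += int(pred == 1)
    return {key: {"accuracy": s["correct"] / s["total"],
                  "approval_rate": s["approved"] / s["total"],
                  "n": s["total"]}
            for key, s in slices.items()}

def flag_disparities(report, max_accuracy_gap=0.05):
    accs = [v["accuracy"] for v in report.values()]
    gap = max(accs) - min(accs)
    return {"gap": gap, "violation": gap > max_accuracy_gap}

# Tiny usage example with a stand-in model.
demo_records = [
    {"features": {"score": 700}, "label": 1, "demographic_group": "A", "region": "west"},
    {"features": {"score": 640}, "label": 0, "demographic_group": "B", "region": "west"},
    {"features": {"score": 720}, "label": 1, "demographic_group": "B", "region": "east"},
]
report = evaluate_by_slice(demo_records, predict_fn=lambda f: int(f["score"] >= 660))
print(report, flag_disparities(report))
```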
The other aspect is that deployment becomes complex, especially
because mortgage applications can take 30 to 60 days to process.
If we deploy a problematic model, we might not discover the issues
until weeks later, when loans start performing poorly in the real world.
That's why we developed extended shadow mode testing, where new
models run in parallel with production models for complete loan life cycles.
Even before we consider switching traffic to these new models,
we are essentially running dual processing pipelines to validate
behavior over long timeframes and across end-to-end applications.
Next, I want to discuss the key components and challenges in building
AI-driven mortgage data pipelines.
I want to start with the complexities of data ingestion.
Let me paint a picture of what data ingestion looks like.
In mortgage AI, you are dealing with PDFs that might be scanned at weird
angles, and bank statements with hundreds of different formats depending on the bank.
Pay stubs come from different companies that all format them very
differently, and this complexity I'm talking about is just
on the document processing side.
Then you have real-time data feeds from credit bureaus, property valuation
services, and employment verification systems, each with their own API quirks,
rate limits, and occasional outages.
During peak periods, we might be making about 50,000 to 100,000
API calls per hour to these external systems, and each service has different
reliability characteristics and failure modes.
Now these are not even our biggest challenges yet.
Our document processing pipeline is probably one of the most
complex parts of the entire system.
We use an ensemble OCR approach; think of it as a combination
of different OCR models, because no single OCR model can handle all
the different document types effectively.
Then we have named entity recognition models that are specifically
trained for financial documents.
They understand the difference between gross income and net income,
can identify debt obligations versus assets, and recognize
different types of employment documentation as well.
But this is where it really gets interesting from an MLOps perspective:
we have to maintain separate, discrete model versions for different documents.
A pay stub from 2020 may look different from one issued in 2024,
and the models need to be able to handle both,
because people submit historical documents as part of their
applications, especially to prove their creditworthiness.
Then every single piece of data flowing through these pipelines has
to be handled with extreme care.
We implemented what we call a privacy-by-design pipeline
architecture, where PII gets automatically identified and masked,
sensitive data is encrypted both at rest and in transit, and every
access is logged for audit purposes.
The feature store becomes very critical here because it allows us
to pre-compute these features while maintaining strict privacy boundaries.
The credit scoring model never sees raw bank statements at all.
It only gets carefully engineered financial behavior features,
which preserves privacy while still enabling accurate decisions.
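Here is a toy illustration of that privacy boundary, with invented feature names: raw bank-statement rows are reduced to aggregate behavior features, and only those aggregates would ever be handed to the scoring model.

```python
# Sketch of the "privacy boundary" idea: raw bank-statement transactions are
# reduced to aggregate behaviour features inside the feature store, and only
# those aggregates are exposed to the credit-scoring model.
from statistics import mean

def derive_behavior_features(transactions):
    """transactions: list of {'amount': float, 'type': 'credit'|'debit'} rows
    parsed from a statement. Output contains no account or merchant detail."""
    credits = [t["amount"] for t in transactions if t["type"] == "credit"]
    debits = [t["amount"] for t in transactions if t["type"] == "debit"]
    return {
        "avg_deposit": mean(credits) if credits else 0.0,
        "avg_spend": mean(debits) if debits else 0.0,
        "overdraft_events": sum(1 for t in transactions if t.get("overdraft")),
        "deposit_to_spend_ratio": (sum(credits) / sum(debits)) if debits else float("inf"),
    }

# The scoring model is only ever handed the derived dict, never the raw rows.
features = derive_behavior_features([
    {"amount": 5200, "type": "credit"},
    {"amount": 1800, "type": "debit"},
    {"amount": 300, "type": "debit", "overdraft": True},
])
print(features)
```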
Now there's also the storage reality that we all have to face.
We maintain multiple storage tiers optimized for different use cases:
hot storage for real-time decision making with sub-50-millisecond
retrieval requirements, warm storage for model training and experimentation,
and cold storage for long-term regulatory retention.
The challenge here is to maintain consistency across all these tiers
while still meeting the performance requirements.
Training data for mortgage AI presents, like I said, some unique
and fascinating challenges.
You need historical loan performance data to understand what makes a good
loan, but you also need to account for changing market conditions,
evolving regulations, and demographic fairness requirements, because
these tend to change, if not frequently, then at least occasionally.
The data itself tells stories about economic cycles,
regulatory changes, and social evolution.
Now we generate synthetic data strategically for such edge
cases and bias mitigation.
Let's say, for example, we might not have enough historical data for a
certain demographic group or in specific geographic regions.
So we use generative models to create realistic synthetic applications
that help ensure our models perform fairly across all populations.
This is not just about filling data gaps.
Think of it as proactively testing model behavior in scenarios
the models haven't yet seen and might not handle correctly.
Also, rather than building one massive model that does everything end to end,
you have to build specialized models for different aspects of the loan decision:
one model for income verification, one for fraud detection,
one for property valuation, one for compliance checking, and so on.
There are so many different workflows in end-to-end mortgage
application processing.
Each model is optimized for its specific task, and they communicate
through our event-driven architecture.
This approach gives much better explainability, because we can
trace decisions back to specific models and their particular features.
It also makes debugging and iterative improvement much more manageable than
trying to tune one giant model and breaking things along the way.
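A small sketch of that event-driven decomposition, with made-up event names and trivial stand-in models: each specialized model subscribes to the events it cares about and publishes its result as a new event, and every event lands in an audit trail.

```python
# Minimal event-bus sketch: specialized models subscribe to the events they
# care about and publish their own results, so adding a model never requires
# touching the others. Event names and models are invented for illustration.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)
        self.audit_trail = []

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        self.audit_trail.append((event_type, payload))   # every event is auditable
        for handler in self.handlers[event_type]:
            handler(payload)

bus = EventBus()

def income_verification_model(payload):
    verified = payload["stated_income"] <= payload["documented_income"] * 1.05
    bus.publish("income_verified", {**payload, "income_ok": verified})

def fraud_detection_model(payload):
    bus.publish("fraud_checked", {**payload, "fraud_score": 0.02})

bus.subscribe("application_received", income_verification_model)
bus.subscribe("application_received", fraud_detection_model)
bus.publish("application_received",
            {"id": "A-1", "stated_income": 96000, "documented_income": 95000})
print(bus.audit_trail)
```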
Now, our validation process also goes way beyond traditional ML validation.
We run business KPI simulations before any model goes to production.
We simulate how the model would have performed during historical market
events, like the 2008 financial crash or the early pandemic period of 2020.
We also test fairness across protected demographics with
statistical significance testing.
One of our most important validation steps is what we call a regulatory compliance
simulation, where we run the model against historical data and verify that its
decisions would not have violated any fair lending regulations and would
not have created any discriminatory impact.
Now, deployment in the mortgage industry requires a much more conservative
and sophisticated approach than a typical ML application.
We've developed something called a three-phase deployment strategy
that minimizes risk while also maintaining business continuity
and regulatory compliance.
Now the first phase is shadow mode.
Shadow mode in this context isn't just running models in parallel;
it's about running complete loan processing workflows in parallel.
New models process real applications, but their decisions
do not affect customers.
Human underwriters see both the production decision and the
shadow model decision, and they provide detailed feedback on the quality
and reasoning of the shadow model's recommendations.
This gives us incredibly rich validation data.
Beyond technical accuracy metrics, we can see where the model's
decisions would have led to better business outcomes,
faster processing times, and improved customer satisfaction.
The second phase is canary release sophistication.
Our canary releases are much more sophisticated than typical A/B tests.
We don't randomly assign applications to different models.
We carefully select applications based on their profiles: demographic
characteristics, loan types, and geographic regions, so that we can ensure
we are testing across all the relevant scenarios.
We also monitor KPIs in real time during canary releases.
If loan approval rates shift outside expected statistical bounds,
if processing times increase beyond thresholds, or if customer satisfaction
scores drop, we have an automated system that can roll these models
back to previous versions within minutes.
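The guardrail logic might look roughly like this sketch; the metric names and threshold bands are illustrative examples, not the real operating limits.

```python
# Hypothetical canary guardrail: compare live KPIs for the canary cohort
# against expected bands and trigger an automated rollback on any breach.
GUARDRAILS = {
    "approval_rate": (0.55, 0.70),        # expected statistical band (example)
    "p95_processing_minutes": (0, 45),
    "customer_satisfaction": (4.2, 5.0),
}

def check_canary(metrics: dict) -> list:
    breaches = []
    for name, (low, high) in GUARDRAILS.items():
        value = metrics.get(name)
        if value is None or not (low <= value <= high):
            breaches.append((name, value))
    return breaches

def canary_step(metrics, rollback_fn):
    breaches = check_canary(metrics)
    if breaches:
        rollback_fn(reason=breaches)      # automated rollback to the previous model version
        return "rolled_back"
    return "healthy"

status = canary_step(
    {"approval_rate": 0.48, "p95_processing_minutes": 38, "customer_satisfaction": 4.5},
    rollback_fn=lambda reason: print("rolling back:", reason),
)
print(status)
```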
Now the third phase is full deployment monitoring.
Even after full deployment, we maintain comprehensive parallel monitoring systems.
Each model decision is logged with complete context and we
track both technical metrics and long-term business outcomes.
We can correlate model confidence scores with actual loan performance,
though this happens over months or years, and we use that feedback to
continuously improve our systems and validate decision-making quality.
Now, CI/CD in the mortgage industry is significantly more complex than typical
software deployment, because you are not just deploying code; you're
deploying decision-making systems that affect real people's financial lives
and that also need to comply with federal regulations.
Our model registry is like a combination of Docker Hub and a compliance database.
Each model version includes not just the trained weights and inference
code, but also comprehensive fairness testing results,
regulatory compliance documents, performance benchmarks
across different market conditions, and detailed explainability reports.
Before any model can be promoted to production, it has to pass automated
tests for technical performance, bias detection, regulatory compliance,
and business impact simulation.
We also maintain detailed metadata about which models are approved for
which loan types, geographic regions, and market conditions.
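As an illustration of that kind of registry entry and promotion gate, here is a hedged sketch; the field names and required checks mirror the description above but are otherwise invented.

```python
# Sketch of a registry entry that bundles compliance artifacts with the model,
# plus a promotion gate that refuses anything with missing or failing checks.
from dataclasses import dataclass, field

@dataclass
class ModelRegistryEntry:
    name: str
    version: str
    artifact_uri: str
    fairness_report: dict = field(default_factory=dict)     # per-group metrics
    compliance_docs: list = field(default_factory=list)     # regulatory submissions
    benchmarks: dict = field(default_factory=dict)           # stress-scenario results
    approved_loan_types: list = field(default_factory=list)
    approved_regions: list = field(default_factory=list)

REQUIRED_CHECKS = ("technical_performance", "bias_detection",
                   "regulatory_compliance", "business_impact_simulation")

def can_promote(entry: ModelRegistryEntry, check_results: dict) -> bool:
    missing = [c for c in REQUIRED_CHECKS if not check_results.get(c)]
    if missing:
        print(f"blocked: {entry.name}:{entry.version} failed or missing {missing}")
        return False
    # Promotion also requires the compliance artifacts to be attached.
    return bool(entry.fairness_report and entry.compliance_docs)
```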
When it comes to comprehensive testing, our test suites include
standard unit tests for data processing logic and integration tests
for the model serving infrastructure.
However, they also include mortgage-specific tests that
don't exist in other domains.
We test model behavior during simulated market stress scenarios, validate fair
lending compliance across demographic groups, and also verify integration with
all the external services we depend on.
One of our most valuable tests is what we call the end-to-end
loan simulation, where we run thousands of synthetic loan applications
through the complete pipeline and validate that all the models work
together correctly, produce explainable decisions, and meet
regulatory requirements.
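A sketch of what such an end-to-end simulation test could look like, assuming hypothetical `pipeline.process` and `compliance_checker.validate` entry points: synthetic applications are generated, and every decision is required to carry an outcome, an explanation, and a passing compliance check.

```python
# Sketch of an end-to-end loan simulation test. The pipeline and checker
# objects are placeholders for the real entry points, not actual APIs.
import random

def make_synthetic_application(i: int) -> dict:
    return {
        "id": f"SYN-{i}",
        "credit_score": random.randint(580, 800),
        "dti_ratio": round(random.uniform(0.1, 0.55), 2),
        "loan_amount": random.randrange(100_000, 800_000, 5_000),
        "demographic_group": random.choice(["A", "B", "C"]),
    }

def test_end_to_end_loan_simulation(pipeline, compliance_checker, n=1000):
    for i in range(n):
        app = make_synthetic_application(i)
        decision = pipeline.process(app)                 # full multi-model workflow
        assert decision["outcome"] in {"approve", "deny", "manual_review"}
        assert decision["explanation"], f"missing explanation for {app['id']}"
        assert compliance_checker.validate(app, decision), f"compliance failure for {app['id']}"
```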
Now, we also use external configuration management extensively because mortgage
regulations change frequently and we need to adjust business rules
without redeploying all the models.
Our models pull their decision thresholds, feature weights,
compliance rules, and business logic from external configuration
systems and rule engines at runtime.
This capability became crucial during the pandemic,
when lending criteria needed to change almost weekly in response to
economic conditions and regulatory guidance.
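A minimal sketch of pulling decision rules from an external configuration service at runtime, with a cached copy and safe defaults for outages; the endpoint URL and rule names are hypothetical.

```python
# Illustrative runtime-configuration lookup: business rules live outside the
# model artifact so they can change without a redeploy; a cached copy and safe
# defaults cover config-service outages.
import json
import time
import urllib.request

DEFAULTS = {"min_credit_score": 620, "max_dti": 0.43, "manual_review_band": 0.05}
CONFIG_URL = "https://config.internal.example/lending-rules"   # hypothetical endpoint
TTL_SECONDS = 300
_cache = {"value": DEFAULTS, "fetched_at": 0.0}

def get_lending_rules() -> dict:
    if time.time() - _cache["fetched_at"] < TTL_SECONDS:
        return _cache["value"]
    try:
        with urllib.request.urlopen(CONFIG_URL, timeout=2) as resp:
            _cache["value"] = json.load(resp)
            _cache["fetched_at"] = time.time()
    except Exception:
        # On failure keep serving the last known-good rules instead of blocking decisions.
        pass
    return _cache["value"]

rules = get_lending_rules()
print("decision threshold in effect:", rules["min_credit_score"])
```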
The next slide is about orchestration and workflow automation in mortgage MLOps.
Mortgage workflow orchestration is incredibly complex, because you're
coordinating dozens of different systems, external services, and human
reviewers across multiple workflows.
These can typically take weeks, or even months, to complete,
like I said.
Unlike typical ML pipelines that might run for minutes or hours,
these workflows have to maintain state and coordinate
activities over month-long periods.
Managing state across long-running workflows is critical.
You have to use event sourcing extensively so you can reconstruct
the complete history of any loan application at any point in time.
This is not just useful for debugging; it is also essential for
regulatory audits, where we might need to explain decisions made
many months ago using data and models that we have since updated.
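Here is a tiny event-sourcing sketch of that replay idea, using an in-memory event log with invented event types: the application's state as of any timestamp is rebuilt by replaying events up to that point.

```python
# Minimal event-sourcing sketch: every change to a loan application is an
# immutable event, and past state is reconstructed by replaying events.
from datetime import datetime

EVENT_LOG = [
    {"app_id": "A-1", "ts": "2024-01-05T10:00:00", "type": "application_received", "data": {"amount": 400000}},
    {"app_id": "A-1", "ts": "2024-01-08T09:30:00", "type": "credit_score_updated", "data": {"score": 712}},
    {"app_id": "A-1", "ts": "2024-02-02T14:00:00", "type": "credit_score_updated", "data": {"score": 698}},
]

def replay(app_id: str, as_of: str) -> dict:
    """Rebuild the application state as it looked at `as_of` (ISO timestamp)."""
    cutoff = datetime.fromisoformat(as_of)
    state = {}
    for event in EVENT_LOG:
        if event["app_id"] != app_id or datetime.fromisoformat(event["ts"]) > cutoff:
            continue
        state.update(event["data"])
        state["last_event"] = event["type"]
    return state

# What did this file look like when a decision was made in mid-January?
print(replay("A-1", "2024-01-15T00:00:00"))   # credit score 712, not the later 698
```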
We also use event-driven architecture extensively to manage this complexity.
When a credit score gets updated during loan processing,
that event triggers cascading events that cause risk models to recalculate,
pricing models to adjust interest rates, and compliance systems
to re-verify regulatory requirements, all automatically, all in parallel,
and all with proper audit trails.
The beauty of this approach is that adding new models or
changing business logic does not require modifying existing systems.
Everything communicates through well-defined events,
which makes these systems much more maintainable and adaptable.
We also integrate with dozens of external services: credit bureaus,
property valuation services, employment verification systems, title companies,
insurance providers, and so on.
Each has different API patterns, different rate limits,
and different reliability characteristics and failure modes.
We built a comprehensive service mesh that handles circuit breaking,
retries, failovers, and backpressure management automatically.
During peak periods, we might hit rate limits with these
external services, so the orchestration system automatically queues
requests, manages backpressure, and optimizes the order of operations
to ensure we don't lose application state or violate
service level agreements.
The final aspect here is intelligent automation triggers.
We maintain sophisticated automated triggers for model retraining
based on performance degradation, data drift detection,
or significant changes in market conditions.
These triggers also include business logic:
we don't retrain models during the busiest processing periods or right before
important regulatory reporting deadlines,
when system stability is crucial.
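As a sketch of combining drift signals with business blackout logic, something like the following could gate retraining; the dates, PSI threshold, and metric names are assumptions for illustration.

```python
# Illustrative retraining trigger: fire only when drift or degradation is
# detected AND we are outside business blackout windows (peak season,
# regulatory reporting deadlines). Dates and thresholds are examples.
from datetime import date

BLACKOUT_WINDOWS = [
    (date(2024, 3, 15), date(2024, 6, 15)),   # spring home-buying peak
    (date(2024, 12, 20), date(2025, 1, 10)),  # year-end regulatory reporting
]

def in_blackout(today: date) -> bool:
    return any(start <= today <= end for start, end in BLACKOUT_WINDOWS)

def should_retrain(metrics: dict, today: date) -> bool:
    drift = metrics.get("psi", 0.0) > 0.2                 # population stability index
    degraded = metrics.get("auc_drop", 0.0) > 0.03
    market_shift = metrics.get("rate_change_bps", 0) > 100
    return (drift or degraded or market_shift) and not in_blackout(today)

print(should_retrain({"psi": 0.27, "auc_drop": 0.01}, date(2024, 7, 2)))   # True
print(should_retrain({"psi": 0.27, "auc_drop": 0.01}, date(2024, 4, 2)))   # False: peak season
```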
Legacy integration is probably the most challenging and underestimated aspect of
mortgage MLOps: integrating with legacy systems that are literally older than
most of the people training or building these models.
We have critical loan data in mainframe systems from the 1980s.
These systems process batch files once daily, or at whatever
frequency is required.
They use data formats that predate JSON by decades and have
zero modern API capabilities, but they contain decades of loan performance
data that's essential for training risk models accurately.
We ended up building what we call legacy translation agents:
specialized systems whose entire job is to speak to the mainframes.
They translate between the two worlds, converting mainframe batch data into
modern structures our ML pipelines can use, and converting the output of
our ML models back into the archaic formats the mainframe understands,
so both can work with each other.
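Here is a toy version of that translation layer: a fixed-width record layout (invented for the example) is used to serialize modern dict-shaped output for the mainframe and to parse mainframe records back into dicts.

```python
# Toy "legacy translation" layer: modern dict-shaped output is serialized to a
# fixed-width batch record the mainframe understands, and mainframe records are
# parsed back into dicts. The layout is invented for illustration.
LAYOUT = [            # (field, start, length)
    ("loan_id",    0, 10),
    ("decision",  10,  8),
    ("rate_bps",  18,  5),
    ("timestamp", 23, 14),
]
RECORD_LENGTH = 37

def to_mainframe_record(payload: dict) -> str:
    record = [" "] * RECORD_LENGTH
    for name, start, length in LAYOUT:
        value = str(payload.get(name, ""))[:length].ljust(length)
        record[start:start + length] = value
    return "".join(record)

def from_mainframe_record(line: str) -> dict:
    return {name: line[start:start + length].strip() for name, start, length in LAYOUT}

rec = to_mainframe_record({"loan_id": "L000123456", "decision": "APPROVE",
                           "rate_bps": "675", "timestamp": "20240105093000"})
print(repr(rec))
print(from_mainframe_record(rec))
```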
Legacy systems often have data quality issues that human
processors have learned to work around over decades:
missing fields, inconsistent formatting, business logic embedded in
user interface screens rather than in databases, and undocumented
transformations that happened years ago.
Our ML systems had to learn to handle all these quirks automatically.
We had to develop sophisticated data quality pipelines that can
detect and often automatically fix these issues.
When they can't fix a problem automatically, they route the
application to human reviewers with a very detailed explanation of
exactly what needs attention and why.
On the same note, we cannot just turn off legacy systems overnight.
They're running critical business operations that
generate billions in revenue.
So we developed and put into practice parallel processing validation,
where both the legacy systems and the new ML systems process
the same applications simultaneously.
We can compare outcomes, build confidence gradually, and shift
processing from legacy to ML as we prove these systems work correctly.
This approach not only gives us incredible insight into where the legacy systems
were making suboptimal decisions, but it also helps us understand the
business value of the ML transformation.
From an MLOps perspective, legacy integration requires building
sophisticated data abstraction layers: modern APIs that
provide consistent interfaces regardless of whether data is coming
from a cutting-edge microservice or a 40-year-old mainframe system.
Now, this abstraction allows the ML models to focus on making good decisions
rather than dealing with all the complexity of data source integrations.
Compliance in mortgage isn't an optional factor or
something that you can add later.
It is the foundational architecture that everything you build sits on.
Every model decision has to be explainable in terms that human underwriters,
customers, and especially federal regulators can understand and validate.
One has to build automated explainability generation directly into
every model in the pipeline.
When a model makes a decision, it automatically generates explanations
that include which features were most important, how those features were
calculated, what data sources were used in those calculations,
and how the decision aligns with regulatory requirements.
These explanations are not just for internal debugging,
like I previously mentioned.
They're used in the actual adverse action notices that get sent to loan
applicants when applications are denied, so they have to be accurate,
understandable, and legally compliant.
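A hedged sketch of how model explanations could be mapped to adverse-action reasons: the top negative feature contributions are translated into pre-approved plain-language reason codes. The mapping text and contribution values are illustrative, not actual regulatory language.

```python
# Sketch: turn a model explanation (signed feature contributions) into
# plain-language adverse-action reasons. Codes and values are illustrative.
REASON_CODES = {
    "dti_ratio": "Debt-to-income ratio is too high relative to program guidelines.",
    "credit_utilization": "Credit utilization on revolving accounts is too high.",
    "recent_delinquency": "Recent delinquency reported on credit history.",
    "employment_tenure": "Length of employment could not be sufficiently verified.",
}

def adverse_action_reasons(feature_contributions: dict, top_n: int = 4) -> list:
    """feature_contributions: feature -> signed contribution toward denial."""
    negatives = {f: c for f, c in feature_contributions.items() if c < 0}
    ranked = sorted(negatives, key=negatives.get)             # most negative first
    return [REASON_CODES.get(f, f"Unfavorable factor: {f}") for f in ranked[:top_n]]

print(adverse_action_reasons({
    "dti_ratio": -0.31, "credit_utilization": -0.18,
    "employment_tenure": -0.05, "loan_to_value": 0.12,
}))
```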
A continuous bias monitoring system should also be running across all demographic
groups and protected characteristics.
These models are trained to optimize for both accuracy and fairness
simultaneously using specialized loss functions and constraint optimization.
If we detect bias emerging in certain model decisions, we can trace it
back to specific features, training data issues, or model architecture
problems and fix them systematically.
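As one simple example of this kind of continuous screen, the sketch below computes per-group approval rates and flags groups whose rate falls below 80% of the most-favored group's rate, a common disparate-impact style check; the groups and threshold are examples, not legal guidance.

```python
# Simple bias-monitoring sketch: approval rate per demographic group and the
# ratio of each group's rate to the most-favored group's rate.
def disparate_impact_screen(decisions, threshold=0.8):
    """decisions: list of {'group': str, 'approved': bool} records."""
    by_group = {}
    for d in decisions:
        stats = by_group.setdefault(d["group"], {"approved": 0, "total": 0})
        stats["total"] += 1
        stats["approved"] += int(d["approved"])
    rates = {g: s["approved"] / s["total"] for g, s in by_group.items()}
    reference = max(rates.values())
    flags = {g: round(r / reference, 3)
             for g, r in rates.items() if reference and r / reference < threshold}
    return rates, flags   # a non-empty `flags` dict would page the fairness on-call

rates, flags = disparate_impact_screen([
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
])
print(rates, flags)
```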
This is not only about being ethical, although that is obviously important.
It is also about avoiding multimillion-dollar regulatory fines and,
in turn, maintaining our license to operate in this heavily
regulated industry.
The next aspect is security. Security in mortgage MLOps means protecting
some of the most sensitive personal information that exists:
the customer's income, their assets, their credit history,
their employment details, and so on.
End-to-end encryption, zero-trust networking, and
principle-of-least-privilege access controls need to be used throughout
the entire system architecture from day one.
Every model interaction is logged for audit purposes, but the
logs themselves are encrypted and access controlled.
We can prove to regulators that we track who accessed what data and when,
and we can also demonstrate that sensitive information was never
exposed inappropriately and was shared only where required
and with the right stakeholders.
We also automated as much compliance reporting as possible, because
manual compliance processes don't scale with the volume of decisions
these systems make.
Our systems generate regulatory reports automatically,
track fair lending metrics in real time or near real time, and
alert us immediately if any metric drifts outside the acceptable ranges
defined by our internal regulatory guidance.
Next, I want to talk about how we can implement efficient strategies
to meet performance demands. Performance requirements in the mortgage
industry are driven by both customer expectations and business economics.
Customers expect instant prequalification decisions when they're shopping for
homes, but comprehensive underwriting needs to balance speed and
accuracy, because the financial stakes are so high.
The use of model quantization and pruning helps reduce latency
without sacrificing accuracy.
Implementing intelligent caching for frequently accessed
features and common application patterns is also crucial, along with
predictive prefetching, where we anticipate what data and features will
be needed based on the current application's characteristics.
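A tiny sketch of the caching-plus-prefetching idea, with an invented in-memory feature store: an LRU cache sits in front of feature lookups, and a prefetch plan warms the cache with the feature groups a given loan type is likely to need.

```python
# Caching and prefetching sketch: an LRU cache in front of the feature store,
# warmed by a per-loan-type prefetch plan. Store contents and plan are invented.
from functools import lru_cache

FEATURE_STORE = {("A-1", "credit_features"): {"score": 712},
                 ("A-1", "property_features"): {"ltv": 0.8}}

@lru_cache(maxsize=50_000)
def get_features(application_id: str, feature_group: str):
    # In production this would be a feature-store lookup with a ~50 ms budget.
    return tuple(sorted(FEATURE_STORE.get((application_id, feature_group), {}).items()))

PREFETCH_PLAN = {
    "purchase": ["credit_features", "property_features", "income_features"],
    "refinance": ["credit_features", "payment_history_features"],
}

def prefetch(application_id: str, loan_type: str):
    for group in PREFETCH_PLAN.get(loan_type, []):
        get_features(application_id, group)     # warms the cache before the decision step

prefetch("A-1", "purchase")
print(get_features.cache_info())
```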
Now, during peak periods, like the spring home-buying season I
mentioned previously, we might process three times the normal volume.
Our auto-scaling policies monitor queue depth, processing time,
resource utilization, and external service response times to
automatically spin up additional capacity, scaling up or down
as and when required.
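The scaling decision could be sketched roughly like this, combining queue depth, latency, utilization, and external-service latency; all thresholds are example values rather than real policy settings.

```python
# Illustrative auto-scaling policy: combine queue depth, latency, utilization,
# and external-service latency into a scale-up / scale-down decision.
def scaling_decision(metrics: dict, current_replicas: int,
                     min_replicas: int = 4, max_replicas: int = 200) -> int:
    pressure = (
        metrics["queue_depth"] > 500
        or metrics["p95_latency_ms"] > 400
        or metrics["cpu_utilization"] > 0.75
        or metrics["external_p95_latency_ms"] > 2000
    )
    idle = metrics["queue_depth"] < 50 and metrics["cpu_utilization"] < 0.30
    if pressure:
        return min(max_replicas, int(current_replicas * 1.5) + 1)
    if idle:
        return max(min_replicas, current_replicas - 2)
    return current_replicas

print(scaling_decision({"queue_depth": 1200, "p95_latency_ms": 350,
                        "cpu_utilization": 0.68, "external_p95_latency_ms": 900},
                       current_replicas=10))   # scales up to 16
```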
We also optimize cost by using different infrastructure
tiers strategically: real-time decision models run on high-performance
instances with guaranteed resources, while batch processing models use
spot instances and preemptible VMs.
We also use predictive scaling based on historical seasonal patterns to
minimize infrastructure waste while ensuring performance during demand spikes.
We monitor everything from multiple perspectives: model latency
percentiles, resource utilization across the entire stack, external service
response times, queue depths, and most importantly, business KPIs like loan
approval rates and customer satisfaction.
Our dashboards show technical metrics alongside business impact metrics,
so we can understand the business consequences of any
performance issue immediately.
ML systems can fail in ways that don't show up immediately in
traditional application monitoring. Model degradation due to
very subtle market changes might not become apparent until weeks or
sometimes even months later, when loan performance data comes in.
A bias issue might only surface when you analyze approval patterns
across demographic groups over time. We use statistical drift detection
to monitor input data distributions, but you also need to monitor business
outcome changes continuously. If loan approval rates suddenly shift,
if average processing times increase, if customer satisfaction scores drop,
or if agent override rates spike, you need to investigate
immediately, even if your technical metrics all look normal.
Incident response includes automated technical rollback capabilities
for infrastructure issues, but it also involves human expertise loops
for business anomalies.
Now, when technical problems occur, we have automated systems
that can restore service quickly.
But when business metric anomalies occur, we have to immediately
bring in domain experts because they are the ones who understand
mortgage markets and regulations, and also customer behavior patterns.
Now every incident becomes a structured learning opportunity.
You have to conduct thorough post-mortems that include both technical
root cause analysis and business impact assessments.
These learnings feed back into your monitoring alert thresholds
and should eventually flow into your deployment procedures
and your model validation procedures.
Finally, I want to talk about future directions and key takeaways
from today's discussion.
LLMs are already transforming document processing in the mortgage industry.
Instead of training dozens of specialized models for each document type,
the next step is using LLMs that can understand and extract relevant
information from any financial document with minimal additional training.
This dramatically reduces the complexity of the document processing pipeline.
Federated learning is becoming an interesting concept in the mortgage
industry, because it could enable collaboration between institutions
while maintaining strict data privacy.
Imagine training a better fraud detection model using insights from
across the industry, or even from your competitors, without any of
these institutions sharing sensitive customer data.
Let me leave you with a couple of essential lessons from building
MLOps systems for the mortgage industry that apply broadly,
not just to mortgage but to any heavily regulated industry.
The first one is that regulatory compliance isn't a constraint you
should work around. From the get-go, it should be an architectural
requirement that you design into every component.
Trying to add compliance later is exponentially more expensive,
and sometimes simply impossible.
The second one is that legacy system integration will always
be more complex than you expect, so plan for it.
Allocate significant budget, time, and resources, and build
abstraction layers that can evolve as you gradually
modernize your legacy systems.
The third aspect is that business outcome monitoring is just
as important as technical monitoring.
Your technical metrics can look perfect while your business outcomes
are quietly degrading in ways that, like I said, will not be apparent
until weeks or even months later.
Finally, change management and stakeholder buy-in are absolutely crucial.
The most sophisticated ML system in the world will fail if the people who
need to use it don't understand it, trust it, or see the value in it.
There has to be a mixture of both top-down and bottom-up approaches.
When you get MLOps right in the mortgage industry, the transformation
is dramatic and measurable.
Now, loan processing times will drop from weeks to days.
Accuracy will improve significantly.
Your operational costs will decrease substantially, and regulatory compliance
becomes automated and bulletproof.
Most importantly, the entire home buying process becomes something
customers actually enjoy rather than endure for weeks or months.
The mortgage industry has been essentially unchanged for decades, but
MLOps gives us the tools and techniques to transform it completely.
The organizations that master this transformation first will dominate
the next generation of financial services.
The technology exists, the techniques are proven, and the
business case is also compelling.
Now, it is just a matter of execution.
Now, I also want to thank you all for your time today.
I am excited to continue having these conversations and to help
more organizations successfully apply MLOps principles and
transform their industries.
Now, let's align, adapt, and accelerate for a breakthrough transformation.
Thank you.