Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
Welcome to Conf42.
My name is AU and I am a lead Guidewire developer at Marque Technology Solutions.
It's an honor to be here.
I'm going to talk about building high-performance financial APIs with Rust, and about claims management at scale with high performance.
Rust is a great fit here: it lets developers generate OpenAPI documents and clients as well, for both inbound and outbound services. This is not just about a programming language; claims management also involves the administration and management of the financials handled annually.
So, the agenda. First, the impact on financial services, the financial services imperative: understanding the unique challenges of building mission-critical financial APIs and why legacy approaches fall short.
When it comes to legacy in-house projects, there are a lot of complexities in fitting them into the digital world. To address this, Rust is introduced for high-performance microservices and APIs so that we can achieve that fit.
Next, the key advantages for financial applications. Exploring memory safety: when you're creating objects, there is no manual allocation or garbage collection to worry about, because ownership and borrowing keep memory handling safe in both directions. Then zero-cost abstractions, which we use everywhere, and fearless concurrency in a financial context.
Earlier we had concurrency exceptions with multiple parallel threads; every time, it would deadlock. With fearless concurrency in a financial context, you won't get that type of complexity. Then real performance metrics and case studies with Rust, examining concrete outcomes from Rust-powered claims management systems processing $3.5 billion annually.
Yeah.
When it comes to implementation patterns and integration strategies: practical approaches for building and running Rust financial APIs in production environments. And on security, compliance, and future directions: Rust gives a clear path to address finance-specific security requirements, so we get the security as well as the future directions.
Yes, we discussed the imperative from the performance perspective. As we saw on the previous slide, Rust gives essentially zero complexity when it comes to thread safety and the speed requirement. In today's world everyone follows agile methodologies, and Rust supports that: any product we pick up comes with libraries that provide a skeleton, and we use that as the base for customization.
In the case of the speed requirement, modern customers expect instant claim decisions and immediate fund access while companies scale their operations, especially the claims payment process. A payment goes to pending approval, from there to awaiting submission, from there to issuance, and then to clearance, and that process takes at least three to ten business days. With Rust, the speed requirement means we receive sub-second responses; of course the clearance and so on still happens through batches, so we'll probably get it done within a day or shorter.
When it comes to the security imperative, Rust systems handling billions in transactions require bulletproof protection against increasingly sophisticated attacks that target memory vulnerabilities. Especially in the insurance domain, security reporting goes through ISO: for example, if you make any change to a claim, add some coverage, or modify something in the exposure fields or in the check process, the change immediately goes to ISO, and from ISO there are services such as LexisNexis and others, so we receive a fraudulent-claim response back. Security-wise, that means good performance.
Then the reliability mandate: financial systems must hit 99.99% uptime. Sometimes we say one hundred percent, but in the real world it's somewhere in the high 99s, while processing millions of daily transactions with perfect data consistency across distributed systems. If something fails, the entire world faces problems; we have seen that recently, with emergencies and flights delayed. With Rust in the financial sector, that type of thing has not happened.
Scale challenges: modern claims systems must handle, as I said, 26 million plus daily requests while maintaining sub-100-millisecond response times across global infrastructure. Legacy systems implemented with technologies like Java, .NET, and Python struggle with a lot of issues, and sometimes they can't meet these demands without significant over-provisioning, creating cost inefficiencies and reliability gaps that directly impact business outcomes. Financial institutions need a fundamentally different approach to API development. So why Rust for financial services?
As I said, Rust's unique combination of performance and safety guarantees makes it exceptionally well suited for mission-critical financial applications. It eliminates entire classes of memory errors without a performance cost, preventing buffer overflows and use-after-free vulnerabilities. In object-oriented, garbage-collected languages, whenever we perform a small operation within a method, control enters the method and objects are created; once the business logic is fulfilled, control leaves the method and all the variables used inside it are garbage collected, freeing memory for later use. Then the next statement executes, new objects are created, and the same process repeats.
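As a minimal sketch of the difference in Rust, memory is released deterministically when the owner goes out of scope, with no collector pausing the application. The struct and values here are illustrative, not from any real system.

```rust
// Minimal sketch: ownership frees memory deterministically at the end of scope,
// with no garbage collector pausing the application. Names are illustrative.
struct ClaimRecord {
    id: u64,
    amount_cents: i64,
}

fn process_claim(id: u64) -> i64 {
    // `record` is created here and owned by this function.
    let record = ClaimRecord { id, amount_cents: 125_000 };
    println!("processing claim {}", record.id);
    record.amount_cents
    // `record` is dropped (freed) right here, at a point known at compile
    // time -- no GC cycle, no unpredictable latency spike.
}

fn main() {
    println!("approved amount: {}", process_claim(42));
}
```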
And then zero-cost abstraction. As I said, in insurance we have claims, policy, billing, and contact manager. There are a couple of products, one of the leading ones being Guidewire, and certainly there are other competitors. They provide the abstractions: everything is already defined, and we customize the code they provide with proper typing, so we won't get more issues. That's what we mean by zero cost: even when you're customizing, it finally compiles down to the same types, so we don't get issues like compatibility problems at that point.
With these abstractions, I can build high-level business logic without a performance penalty, so we get good performance from the abstractions. As I said, there are no deadlocks, no performance issues, and no hanging of the application: we get sub-second responses for each and every request, even with 26 million requests every day, without concurrency issues.
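As a rough illustration of zero-cost abstraction, a high-level iterator chain like the one below compiles down to the same code as a hand-written loop; the claim amounts and threshold are made-up values.

```rust
// Minimal sketch: a high-level iterator chain compiles down to the same code
// as a hand-written loop, so the abstraction costs nothing at runtime.
// Amounts and threshold are illustrative values.
fn total_large_claims(amounts_cents: &[i64], threshold: i64) -> i64 {
    amounts_cents
        .iter()
        .filter(|&&a| a > threshold) // keep only claims above the threshold
        .sum()                       // add them up
}

fn main() {
    let amounts = [10_000, 250_000, 75_000, 400_000];
    println!("total above threshold: {}", total_large_claims(&amounts, 100_000));
}
```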
Type-driven development encodes complex business rules in the type system, making invalid states unrepresentable and catching errors before deployment. Most of the day-one issues are handled at compile time: the states where you would otherwise expect application errors are made invalid, and the code is forced to handle each request correctly. This combination of safety and performance has led to adoption by major financial institutions for mission-critical API systems.
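Here is a minimal sketch of what "invalid states are unrepresentable" can look like; the claim states and field names are illustrative, loosely following the lifecycle mentioned earlier, not an actual system's model.

```rust
// Minimal sketch: encode the claim lifecycle as an enum so that invalid
// states (e.g. paying a claim that was never approved) cannot be represented.
// The state and field names are illustrative.
#[allow(dead_code)]
enum Claim {
    PendingApproval { id: u64 },
    AwaitingSubmission { id: u64 },
    Approved { id: u64, amount_cents: i64 },
    Paid { id: u64, amount_cents: i64, check_number: u64 },
}

// Payment can only be issued from the Approved state; every other state
// falls through to an error, and the compiler checks the match is exhaustive.
fn issue_payment(claim: Claim) -> Result<Claim, String> {
    match claim {
        Claim::Approved { id, amount_cents } => Ok(Claim::Paid {
            id,
            amount_cents,
            check_number: 1001, // illustrative check number
        }),
        _ => Err("payment is only allowed for approved claims".to_string()),
    }
}

fn main() {
    let approved = Claim::Approved { id: 7, amount_cents: 50_000 };
    match issue_payment(approved) {
        Ok(_) => println!("payment issued"),
        Err(e) => println!("rejected: {e}"),
    }
}
```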
So what is the measurable impact on performance metrics? Looking at processing time, for example when a claim is rejected through the LexisNexis checks, with Rust there is almost an 85% reduction in that time, so we get the performance: sub-second responses everywhere. The gateway handles 26 million daily requests with consistent 90-millisecond response times across global operations. Availability was around 99.74% earlier; now it is up to 99.99%, and most of the time we are achieving that, representing roughly a 14.9x reduction in downtime.
Financially, that matters too: now that virtual machines have come into the picture, significantly lower CPU and memory requirements have produced 40% infrastructure cost savings while improving performance. These metrics come from aggregated data across multiple major insurance carriers and financial institutions that have implemented Rust-based claims processing systems over the past three years. The consistent pattern of improvement demonstrates that Rust's performance translates directly to business value in a financial context.
So what were the challenges? As I said, fraudulent claims, fraudulent requests, are something we can handle well with Rust. A major insurance carrier needed to analyze 140 data points per claim across 810,000 daily fraud signals while maintaining a sub-100-millisecond response time to enable real-time claim decisions. Once a claim is created, it goes through checks at every checkpoint; there are 140 checkpoints, and all 140 of them are processed within 100 milliseconds.
So what is the Rust solution? It is implemented as asynchronous fraud detection. Asynchronous fraud detection means that whenever you are processing the transaction, irrespective of the services involved, it goes and completes that transaction; meanwhile, whatever service is called gets updated asynchronously. Rust's async/await with the Tokio runtime processes multiple fraud signals concurrently while maintaining strict memory bounds, so we should not leak memory. For that we use zero-copy deserialization with reduced parsing, most of it native, plus custom memory pools to keep allocation predictable.
What were the results of this solution? 99.96% of fraud checks completed in under 80 milliseconds. Why are we missing the other 0.04%? Every day the hackers come with different techniques and technologies; it's a challenge for the major insurance carriers, and every time they have to come up with new strategies against the hackers' ideas and techniques. The false positive rate was reduced by 31% through more sophisticated algorithms enabled by the performance headroom; every day the data analytics teams apply different logic to get more throughput, which gives better performance. The system handles traffic spikes of several times normal load without degradation, with a 70% straight-through processing rate for eligible claims. Straight-through processing rules play a major role in claims applications.
When it comes to memory management, everywhere we have memory-related errors and unpredictable latency spikes, and to overcome these we traditionally have to tune garbage collection and the runtime to manage the latency, and always use loose coupling to get safe abstractions. Memory safety has a real impact on financial systems: in traditional garbage-collected languages like Java, financial systems experience unpredictable latency spikes during collection cycles, and C++ systems risk memory corruption that can lead to catastrophic failures. Rust eliminates both problems through compile-time enforcement of memory correctness without runtime overhead; all of this is caught at compile time itself. The predictable performance characteristics are shown in the diagram: Rust allowed us to eliminate the p99.9 latency spikes that were slowing transactions during peak periods, even for bundle commit transactions under load.
So why do we call it fearless concurrency? As I said, in the older legacy applications we sometimes had to wait more than ten minutes to complete a transaction; some people would retry multiple times to complete it, and if it did not succeed, they would come back the next day and try again. With Rust we are seeing sub-second responses for thousands of concurrent requests while keeping data consistent across shared resources; the traditional approaches force unacceptable tradeoffs here.
So what are the problems with the traditional approaches? Coarse-grained locking is safe but creates bottlenecks; fine-grained locking gives performance but risks deadlocks; lock-free algorithms can be used for performance but are extremely hard to get right. How do we achieve a solution? We use type-level thread safety, so the compiler prevents data races at compile time, removing the bottleneck of the traditional approaches. Data is only sent between threads when it is required and the types allow it, through Rust's Send and Sync traits, so we always get compile-time thread-safety guarantees.
That means concurrent threads won't interfere, and we can achieve the expected results within the expected span of time. Where there is no dependency, we go for async/await: there is no need to wait for one call to finish, and once we get the response it automatically gets updated in the system. Through that we get high-performance multitasking as well, with safe communication between processing stages. With these approaches, all the bottlenecks and deadlocks of the traditional approaches are resolved by Rust in production financial systems, with concurrent processing throughput improvements of roughly 3x.
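As a minimal sketch of safe communication between processing stages, the pipeline below uses standard-library threads and a channel; ownership of each claim ID moves through the channel, so the compiler rules out data races. The stage names and values are illustrative.

```rust
// Minimal sketch: message passing between processing stages with std threads.
// Ownership of each value moves through the channel, so the compiler rules
// out data races; no locks are needed for this pipeline shape.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Stage 1: produce claim IDs (illustrative values).
    let producer = thread::spawn(move || {
        for claim_id in 1..=5u64 {
            tx.send(claim_id).expect("receiver dropped");
        }
        // tx is dropped here, which closes the channel cleanly.
    });

    // Stage 2: validate each claim as it arrives.
    let validator = thread::spawn(move || {
        for claim_id in rx {
            println!("validated claim {claim_id}");
        }
    });

    producer.join().unwrap();
    validator.join().unwrap();
}
```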
The Rust ecosystem for financial APIs is a set of libraries and frameworks that is especially beneficial for building financial services applications. For high performance, Tokio provides a great asynchronous runtime, enabling thousands of concurrent connections with minimal overhead; it is used by 92% of financial Rust applications to handle high-throughput API workloads. And using the API frameworks, we can easily keep costs down and also handle fraud checks.
There are battle-tested web frameworks with comprehensive middleware ecosystems that offer performance far exceeding traditional offerings. For metrics, at each and every step we have metrics and monitoring, so we can easily trace exactly where the performance goes and, through tracing, achieve the expected performance.
For comprehensive instrumentation, there are libraries for production systems; OpenTelemetry integration, for example, enables seamless monitoring of distributed financial transactions. For database access, each and every table or view we read uses proper indexing; earlier we were getting responses of more than ten seconds, and with indexed queries we are now getting sub-second responses.
Then serialization: for security purposes, everything we send over the network is serialized into strongly typed structures, which means the data being shared cannot be silently tampered with or misread; serialization is part of the security story. On top of that we need memory-safe cryptographic libraries providing high-performance TLS implementations, critical for securing financial data in transit with minimal overhead.
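As a minimal sketch, assuming the serde and serde_json crates, this shows strongly typed serialization of a payload before it goes over the wire; the struct and field names are illustrative, and TLS (for example via a crate such as rustls) would protect the bytes in transit.

```rust
// Minimal sketch, assuming the serde and serde_json crates: strongly typed
// serialization of a payload before it is sent over the network.
// Field names are illustrative.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct ClaimUpdate {
    claim_id: u64,
    coverage_code: String,
    amount_cents: i64,
}

fn main() -> Result<(), serde_json::Error> {
    let update = ClaimUpdate {
        claim_id: 42,
        coverage_code: "BI".to_string(),
        amount_cents: 125_000,
    };
    // Serialize to JSON for transport; TLS would protect it in transit.
    let wire = serde_json::to_string(&update)?;
    // Deserialize back into the typed struct; malformed input becomes an Err,
    // never silently corrupted data.
    let parsed: ClaimUpdate = serde_json::from_str(&wire)?;
    println!("{wire} -> {parsed:?}");
    Ok(())
}
```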
Now, implementation patterns for error handling: error handling in financial systems has unique challenges. As we discussed earlier, most errors, 99.9% of them, are handled by the compiler itself; the remaining fraction we eliminate with type-driven programming in Rust.
So what is the impact on production systems? Financial institutions using Rust error-handling patterns report 99.92% system availability, a 94% reduction in unhandled exceptions, improved error recovery measured in minutes, and enhanced compliance through comprehensive error auditing.
So what are the integration strategies? Adopting Rust incrementally. Most financial institutions can't rewrite entire systems in advance; in the market we already see ready-made components, so we can directly integrate highly reliable software as per our requirements. These proven integration patterns enable incremental adoption and performance.
First, critical microservices: nowadays each and every program uses microservices for better performance. Identify high-impact components in existing systems and replace them with targeted Rust microservices. Common candidates include fraud detection, payment processing, and real-time processing. This approach delivered a 70% processing-time improvement at a major carrier without disrupting core systems.
Second, the API gateway transformation: implement a Rust gateway handling rate limiting and protocol translation in front of the existing systems. This pattern reduced latency by 60% for global payments while adding minimal risk to the existing systems.
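As a rough sketch of the gateway's rate-limiting idea (not the actual gateway from the talk), the example below, assuming the tokio crate, caps in-flight requests with a semaphore so bursts queue up instead of overwhelming the downstream core systems; the limit and handler are illustrative.

```rust
// Minimal sketch, assuming the tokio crate: cap the number of in-flight
// requests at the gateway with a semaphore, so bursts queue instead of
// overwhelming downstream core systems. The limit and handler are illustrative.
use std::sync::Arc;
use tokio::sync::Semaphore;

async fn forward_to_backend(request_id: u64) -> String {
    // a real gateway would proxy to the legacy system here
    format!("response for request {request_id}")
}

#[tokio::main]
async fn main() {
    let limit = Arc::new(Semaphore::new(100)); // at most 100 concurrent requests

    let mut handles = Vec::new();
    for request_id in 0..10u64 {
        let limit = Arc::clone(&limit);
        handles.push(tokio::spawn(async move {
            // acquire_owned ties the permit's lifetime to this task
            let _permit = limit.acquire_owned().await.expect("semaphore closed");
            forward_to_backend(request_id).await
        }));
    }

    for handle in handles {
        println!("{}", handle.await.expect("task panicked"));
    }
}
```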
The gateway became the foundation for further Rust adoption. Third, the foreign function interface: expose Rust functionality to existing Java applications through foreign function interface bindings. This approach enables replacement of performance-critical components while retaining compatibility with existing systems. One bank increased transaction throughput 3.2x using this approach with their Java core banking system.
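A minimal sketch of the Rust side of such an FFI boundary: a function exported over the C ABI that a Java application could call through JNI or JNA. The function name and formula are illustrative; the crate would be built as a cdylib, and only the Rust half is shown.

```rust
// Minimal sketch: expose a performance-critical calculation over the C ABI so
// an existing Java system can call it through JNI or JNA. The function name
// and formula are illustrative; compile this crate as a `cdylib`.
#[no_mangle]
pub extern "C" fn score_claim(amount_cents: i64, prior_claims: u32) -> f64 {
    // Pure computation keeps the FFI boundary simple: plain numbers in,
    // plain number out, no ownership crossing the boundary.
    let base = amount_cents as f64 / 1_000_000.0;
    base * (1.0 + prior_claims as f64 * 0.1)
}
```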
If you look at the older loan systems, once they started using a system and a requirement changed midway, the entire process had to start again from the beginning. To work around that problem, with incremental adoption we pick up the piece that is needed when it is needed and complete it within an iterative model. Rust adoption follows that same model, so we achieve the expected results in time and the product reaches the market with its business value intact.
Now, coming to security: Rust has security advantages that directly address financial industry requirements. First, memory safety guarantees: as we have already seen, when you're processing or storing data, memory safety is achieved one hundred percent without dispute, enforced at compile time. Second, foreign function interface boundaries: when interfacing with legacy systems, Rust provides safe abstractions over unsafe code, containing potential vulnerabilities within well-defined boundaries. This pattern has proven crucial for secure incremental adoption, and it also makes multi-tenant data isolation requirements easier to satisfy.
Financial institutions face unique security challenges around compliance: we have standards like PCI DSS, multi-tenant data isolation requirements, all of these regulations, and sophisticated attacks targeting financial data.
So it comes down to the right product at the right time for the market: only a product with business value delivers measurable business impact, and Rust does, through its unique combination of performance and safety guarantees: 85% faster processing, 99.99% uptime, and 40% infrastructure cost reduction, with incremental adoption being practical, so financial institutions can adopt it through the gateway and FFI patterns without risky system rewrites. Ecosystem maturity provides production-ready libraries for all critical financial API needs, from asynchronous runtimes to secure database access. As I said, in the market most of these libraries, 99% of them, provide the skeleton, and we have just customized them in our financial systems.
So what are the next steps with Rust? Identify high-impact performance bottlenecks in the existing system as the initial Rust targets. Start with small, bounded microservices to build team expertise so that we can achieve the performance gains, and leverage Rust's memory safety advantages for security.