Conf42 Observability 2025 - Online

- premiere 5PM GMT

From Blind Spots to Insights: How Observable Identity Systems Transform Security and Performance

Abstract

Discover how observable identity systems are revolutionizing security! Learn how leading insurers achieve 87% faster threat detection, 74% quicker incident response, and near-perfect fraud prevention through instrumented biometrics and AI analytics.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi everyone, this is Sheika. I work for Guidewire Software. I'm a project manager there. I've been with Guidewire. I've been working in the property and casualty industry for the last 10 years, and today I'm here to talk about how observable systems have. These cloud-based identity systems today let you see exactly what's happening to your identity checks. And they do this by using smart checking of the biometrics. Maybe it's fingerprinting, facial recognition or voice recognition. There's also AI involved that analyzes the process to identify any issues and improvements. And then we have blockchain technology to ensure everything is transparent and it's secure. The observable systems help the team identify issues much more quickly, and that reduces the time. Time taken to identify a problem by 87%. That means the systems have less downtime. We have quicker resolution and response, and we have better user experience, which adds to system reliability and resilience. So observability observable identity systems give realtime insight into the entire verification process at the code. These systems use instrumentation, meaning they capture important data points as users go through the verification journey. This could include biometrics, system response time, or error patterns. And all this data is then fed into the monitoring and alerting tools that help teams detect issues like a failed verification or as suspicious login. And as they, and we can record it, we can get to know about this in real time rather than waiting for hours and getting to know it about hours or days later. And we don't stop there. There's also a technical layer involved. These systems also translate that raw data into meaningful business insights to help insurers make smart decisions around security, process improvements and customer experience. For example, let's say insurers can use this information to spot security risk early on. It helps us to modify, oh. Optimize. It helps an insurance company to optimize the customer journey by removing bottlenecks. It also justifies investments in better tools or infrastructure based on hard data. In short, we can definitely say that the observability turns what used to be a black box into a clear process, driven in a data driven process that improved customer experience. Modern. These this modern observability has taken fraud detection to a really next level. Unlike traditional monitoring, these systems can now detect even sophisticated swooping attacks like fake IDs or deep fakes with 99.99% accuracy. And they do it in near. In a real time. In real time. This is a huge leap forward giving businesses, including insurance companies, a much stronger protection against fraud and reducing the time it takes to respond to threats. Moving on to operational improvements, to observability observable. I systems don't just improve security, but they also offer a lot of. Operational benefits by giving systems, giving teams deep visibility into system performance to help spot and fix issues before it affects customers. And this includes capabilities like transaction tracing, which helps identify failures. Early on and helps to understand why the slowdowns are happening. It also helps us, helps in capacity planning to anticipate and prepare for usage spikes, and also supports continuous improvement by turning system data into actionable insights. And in short, they help operations run smoother, faster, and more in a more proactive fashion. 
Distributed tracing lets you track a single request as it moves through the multiple microservices involved in the verification process. This helps you understand how each part of the system is performing and where delays or failures occur.

Let's go through an identity verification example. Imagine a user is going through an online identity verification process. Here's how distributed tracing works across the different microservices involved. First there's the document upload service: the user uploads the ID document, and tracing captures how long the upload took, whether the files were successfully received, and, if there were errors, the associated logs. Next is the document validation service: the document is scanned for authenticity, expiration, and tampering, and at this step the trace shows whether validation passed or failed and how much processing time it took. Then there's the biometric matching service: the user takes a selfie for a face match, and tracing logs the image capture, the comparison time, and the confidence score of the match. The fourth service is the data verification service: the personal information is verified against third-party sources, for example government databases or credit bureaus, and tracing records each external call and how long it took, helping spot third-party delays. Last but not least is the risk assessment engine: all the results are fed into a risk engine that calculates a trust score, and here tracing shows how the data flowed in, how the rules were applied, and whether any step triggered a risk flag.

Now you might be wondering why this matters. With distributed tracing we can pinpoint slow services; if biometric matching is consistently lagging, tracing helps us identify that. It can also spot failures early on; if the document validation service is crashing intermittently, tracing helps us see that too. And it helps improve performance and deliver a smoother customer experience by resolving issues before users are impacted.

Moving on to advanced correlation techniques for identity events. Advanced correlation techniques help connect related events across different identity verification steps, turning individual data points into a clear, actionable story. Let's take a real-world example. Say a user is going through verification: the document scan is successful, the biometric match is slightly off, and the database check returns inconsistent data. Alone, each event might seem fine, or just slightly unusual, but when correlated together they could indicate a spoofing attempt or identity fraud. So how does it work? A correlation engine connects events from across the system: document scanning, biometric checks, backend validations, and so on. Using machine learning, it builds normal behavioral patterns, for example time taken, device type, and match scores. When patterns deviate even slightly, the system flags it early for review. Why does it matter? It helps detect complex fraud patterns that would be invisible if each component were reviewed in isolation, it reduces false positives by understanding context, and it enables proactive action, stopping problems before they escalate or affect users.
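The speaker doesn't say which tracing stack is in use; assuming OpenTelemetry for Python (with the `opentelemetry-api` and `opentelemetry-sdk` packages installed), a sketch of the five services above as one parent span with child spans might look like this. Span names and attributes are illustrative, and in a real deployment each child span would live in its own microservice rather than one function.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Export spans to stdout for the demo; production would use a collector/backend.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("identity-verification")

def verify_identity(user_id: str) -> None:
    # One parent span for the whole journey; one child span per service call.
    with tracer.start_as_current_span("identity_verification") as root:
        root.set_attribute("user.id", user_id)

        with tracer.start_as_current_span("document_upload") as span:
            span.set_attribute("upload.success", True)

        with tracer.start_as_current_span("document_validation") as span:
            span.set_attribute("validation.passed", True)

        with tracer.start_as_current_span("biometric_match") as span:
            span.set_attribute("match.confidence", 0.97)

        with tracer.start_as_current_span("data_verification") as span:
            span.set_attribute("external.sources", 2)  # e.g. government DB, credit bureau

        with tracer.start_as_current_span("risk_assessment") as span:
            span.set_attribute("risk.score", 0.12)

verify_identity("demo-user")
```

Each child span carries its own duration and attributes, which is what lets you see at a glance whether, say, biometric matching or a third-party data source is the slow link in the chain.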
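As a toy illustration of the correlation idea (real engines learn these patterns with machine learning; this sketch just hard-codes a couple of thresholds, all of which are invented), here is how two individually acceptable signals can combine into a review flag:

```python
from typing import NamedTuple

class JourneySignals(NamedTuple):
    doc_validation_passed: bool
    biometric_confidence: float   # 0..1, e.g. from the face-match service
    data_mismatch_fields: int     # fields that disagreed with third-party sources

def correlated_risk(signals: JourneySignals) -> str:
    suspicion = 0
    if signals.biometric_confidence < 0.90:   # slightly low, but not a failure
        suspicion += 1
    if signals.data_mismatch_fields >= 1:     # minor inconsistency
        suspicion += 1
    if not signals.doc_validation_passed:
        suspicion += 2
    # Each signal alone might pass; two or more weak signals together
    # is what the correlation layer escalates for review.
    return "review" if suspicion >= 2 else "pass"

print(correlated_risk(JourneySignals(True, 0.86, 1)))  # -> "review"
```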
There's also behavior-based anomaly detection. Behavior-based anomaly detection is about spotting unusual user behavior that could signal fraud or security threats, even when the credentials are correct. How does it work? First we establish behavioral baselines: the system learns what normal looks like for each user or group, such as typical login times, usual devices and locations, and common patterns in document uploads and biometric processes. Then we apply machine learning tools. These tools monitor user activity in real time and compare it to the established baseline, looking for subtle deviations: faster-than-normal form completion, repeated failed biometric attempts, logins from unusual locations or devices. After that, we calculate risk scores. Each action is scored based on how risky it seems; the more the behavior deviates from the norm, the higher the risk score. Then we trigger adaptive authentication based on the risk score. If the risk goes above a certain threshold, the system can ask for additional verification like an OTP, flag the session for manual review, or block the action entirely if it looks too risky. Why does it matter? Because behavior-based detection goes beyond static rules or password checks. It helps catch sophisticated attacks by focusing on how users behave, not just what credentials they use.

But with all this behavioral information we capture, we also need to take care of privacy. Implementing observable identity systems means walking a fine line between comprehensive monitoring and user privacy protection. To ensure strong security and smooth operations, systems need to capture detailed telemetry like user activities, system performance, and authentication behavior. However, this data often includes sensitive user information, so we need to take privacy considerations into account: limit what's collected to only what's necessary, and anonymize or encrypt personal data. We need a balanced approach. The goal is to find the right balance, enabling visibility for fraud detection and troubleshooting without overstepping privacy boundaries. That includes being transparent about what data is collected, providing controls and opt-outs where applicable, and designing systems with privacy by default. A successful identity observability strategy should deliver security and insight while also earning user trust through responsible data practices.

Now let's talk about the roadmap for implementing these verification systems. Implementing observable identity systems is a phased journey, not a one-time setup; it evolves across five key stages. First is the assessment stage. We start by evaluating the current capabilities: what telemetry is already being collected, where the blind spots in identity verification are, and what tools and gaps exist in monitoring. The idea here is to establish a clear baseline and identify quick wins. Next is instrumentation. We add instrumentation to capture detailed data across identity workflows, for example document upload, biometric match, third-party verification, and risk scoring, making sure all steps are traceable with real-time data points. Then we do the integration. We introduce automated alerts and adaptive responses: high-risk behavior triggers step-up authentication, and system errors notify the relevant teams instantly. The goal is to reduce manual intervention and improve response times.
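Going back to the behavior-based detection steps above, a stripped-down sketch of baseline comparison, risk scoring, and adaptive authentication might look like the following. The weights, thresholds, and field names are invented for illustration; a production system would learn baselines and scores from data rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Baseline:
    usual_devices: set
    usual_countries: set
    typical_login_hours: range      # e.g. range(7, 23)
    avg_failed_biometrics: float

def risk_score(baseline: Baseline, device: str, country: str,
               login_hour: int, failed_biometrics: int) -> float:
    # Each deviation from the learned baseline adds to the score.
    score = 0.0
    if device not in baseline.usual_devices:
        score += 0.3
    if country not in baseline.usual_countries:
        score += 0.3
    if login_hour not in baseline.typical_login_hours:
        score += 0.2
    if failed_biometrics > baseline.avg_failed_biometrics + 2:
        score += 0.4
    return min(score, 1.0)

def next_action(score: float) -> str:
    # Adaptive authentication: the riskier the behaviour, the stronger the challenge.
    if score >= 0.8:
        return "block"
    if score >= 0.5:
        return "step_up_otp"        # ask for an OTP or route to manual review
    return "allow"

baseline = Baseline({"iphone-14"}, {"US"}, range(7, 23), 0.2)
score = risk_score(baseline, device="unknown-android", country="US",
                   login_hour=3, failed_biometrics=1)
print(score, next_action(score))    # -> 0.5 step_up_otp
```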
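And on the privacy point, one common pattern (not something the talk prescribes) is to pseudonymize identifiers and drop sensitive payloads before telemetry leaves the service, so events stay correlatable without exposing personal data. A minimal sketch, assuming a keyed hash and an allow-list of fields:

```python
import hashlib
import hmac

TELEMETRY_KEY = b"rotate-me-regularly"   # placeholder secret, not a real key

def pseudonymise(user_id: str) -> str:
    # Keyed hash: stable enough to correlate events for one user,
    # but not reversible to the original identifier by the analytics team.
    return hmac.new(TELEMETRY_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def scrub(event: dict) -> dict:
    allowed = {"step", "outcome", "duration_ms", "device_type"}  # collect only what's needed
    cleaned = {k: v for k, v in event.items() if k in allowed}
    cleaned["subject"] = pseudonymise(event["user_id"])
    return cleaned

raw = {"user_id": "jane.doe@example.com", "step": "biometric_match",
       "outcome": "success", "duration_ms": 412.0,
       "selfie_bytes": b"...", "device_type": "mobile"}
print(scrub(raw))   # no email address and no image bytes in the exported record
```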
Then comes the refinement step. We need to continuously improve, and we can do that by tuning the machine learning models for behavior-based detection, updating baselines as user behavior evolves, and expanding visibility to new components or channels. The goal here is to optimize accuracy, reduce false positives, and support scalability. The full implementation can take anywhere between six and twelve months, but with an incremental approach we can start seeing benefits within two to three months.

Now for the key takeaways and next steps. The first is the measurable benefits: an 87% reduction in mean time to detection shows the real power of observable systems, along with enhanced fraud detection accuracy through advanced techniques like behavioral analysis and event correlation, and operational gains through transaction tracing, capacity planning, and continuous improvement that transforms raw telemetry into business intelligence for better security and process decisions. We also need to strike the right privacy balance: effective observability requires a careful balance between monitoring and data protection, with systems designed around privacy by default and respecting user data while ensuring security; anonymization, transparency, and regulatory compliance are non-negotiable. We need a holistic approach that combines instrumentation, monitoring, machine-learning-based analytics, and adaptive authentication across the identity lifecycle, and that supports distributed tracing across microservices. And we need to treat this as a strategic journey: implementation is phased, beginning with assessment and moving through instrumentation, integration, automation, and refinement. The full rollout takes six to twelve months, but we can see the benefits within two to three months if we use the incremental approach.

That's all I had for today. Thanks everyone for listening to this presentation.
...

Shikha Gurjar

Technical Project Manager, Essential Services @ Guidewire Software



