Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi, good morning everyone.
I'm Gpro Oli from G2 Solutions, and it's a pleasure to join you today at Conf42.
Let's talk about something most financial organizations fear, but rarely see coming: systemic risk, hiding in plain sight.
Our topic is observing the invisible: cloud native observability for real-time financial risk detection.
In this session, I'll walk you through how we turned bankruptcy monitoring, once a slow, reactive, black box process, into a transparent cloud native pipeline that delivers real-time insights, measurable ROI, audit-ready compliance, and risk reduction.
System architecture.
We began by reimagining our entire architecture.
We ingest data from multiple sources: legal filings, credit scores, transactional databases, and third party feeds. But raw ingestion isn't enough.
We built a serverless ingestion layer using cloud native, event-driven functions that normalize and process the data as it arrives.
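To make that concrete, here is a minimal sketch of what one of those event-driven ingestion functions could look like, assuming an AWS Lambda-style handler in Python; the event shape, field names, and normalization rules are illustrative, not our production code.

```python
import json
from datetime import datetime, timezone

# Sketch of a serverless, event-driven ingestion function (AWS Lambda-style).
# The event shape, field names, and normalization rules are hypothetical.
def handle_filing_event(event, context):
    record = json.loads(event["body"])

    # Normalize the raw document into a common internal schema as it arrives.
    normalized = {
        "source": record.get("source", "unknown"),  # e.g. court feed, credit bureau
        "entity_name": record.get("debtor_name", "").strip().upper(),
        "filing_type": record.get("type", "unknown"),
        "filed_at": record.get("filed_at"),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

    # Flag basic quality problems at the point of entry (missing fields, etc.).
    normalized["quality_flags"] = [
        field for field in ("entity_name", "filing_type", "filed_at")
        if not normalized[field]
    ]

    return {"statusCode": 200, "body": json.dumps(normalized)}
```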
From there, Apache Spark takes over.
It performs complex entity resolution, matching names, identifiers, and even patterns across the data sets to generate risk scores.
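As a rough illustration of that stage, here is a PySpark sketch that joins normalized filings against a customer master and derives a toy risk score; the paths, columns, and weights are assumptions for the example only.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative sketch of the Spark matching and scoring stage.
# Paths, column names, and weights are assumptions, not the production logic.
spark = SparkSession.builder.appName("entity-resolution-sketch").getOrCreate()

filings = spark.read.parquet("s3://example-bucket/normalized_filings/")
customers = spark.read.parquet("s3://example-bucket/customer_master/")

# Resolve entities by identifier, falling back to a normalized-name match.
matched = filings.join(
    customers,
    (filings.tax_id == customers.tax_id)
    | (F.upper(F.trim(filings.entity_name)) == F.upper(F.trim(customers.legal_name))),
    how="inner",
)

# A toy risk score combining filing type with the customer's credit score.
scored = matched.withColumn(
    "risk_score",
    F.when(F.col("filing_type") == "chapter_7", 0.9)
     .when(F.col("filing_type") == "chapter_11", 0.7)
     .otherwise(0.3)
    * (1.0 - F.col("credit_score") / 850.0),
)

scored.write.mode("overwrite").parquet("s3://example-bucket/risk_scores/")
```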
Apache Airflow orchestrates this entire pipeline managing
dependencies and execution schedules.
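A simplified DAG for this kind of orchestration might look like the following, assuming a recent Airflow 2.x release; the task callables and schedule are placeholders, not our production DAG.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task callables; in the real pipeline these trigger the
# serverless ingestion, the Spark job, and the alert publisher.
def run_ingestion(**_): ...
def run_entity_resolution(**_): ...
def publish_risk_alerts(**_): ...

with DAG(
    dag_id="bankruptcy_risk_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_filings", python_callable=run_ingestion)
    resolve = PythonOperator(task_id="resolve_entities", python_callable=run_entity_resolution)
    alert = PythonOperator(task_id="publish_alerts", python_callable=publish_risk_alerts)

    # Dependencies and execution order, which is exactly what Airflow manages.
    ingest >> resolve >> alert
```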
What sets the system apart is embedded observability.
Unified logging, distributed tracing, and real-time metrics are foundational.
We monitor not just for failures, but for risk signal quality, pipeline health, and regulatory thresholds in real time.
The visibility crisis.
Before this transformation, we faced what I call a visibility crisis.
Legacy systems were siloed and manual.
When things broke, and they did, we couldn't trace the root cause.
Risk signals took hours or days to surface.
Compliance teams spent weeks stitching together audit trails that were never meant to be reconstructed, and every blind spot was an opportunity for financial loss.
We weren't just lacking observability.
We were operating in darkness.
That crisis forced a mindset shift.
Observability couldn't be a bolt-on.
It had to be built in.
Our observability approach.
So we built observability in from day one: full stack visibility from ingestion to orchestration to alerting, and distributed tracing using OpenTelemetry to track every transaction from entry point to risk score.
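Here is a minimal sketch of that idea with the OpenTelemetry Python API; the span names, attributes, and the two helper stubs are illustrative placeholders, not our production instrumentation.

```python
from opentelemetry import trace

# Sketch of tracing one filing from entry point to risk score.
# Span names, attributes, and the helper stubs are illustrative placeholders.
tracer = trace.get_tracer("risk-pipeline")

def resolve_entity(filing):
    return {"entity_id": filing.get("entity_name"), "credit_score": 600}

def compute_risk_score(entity):
    return 0.9 if entity["credit_score"] < 650 else 0.3

def score_filing(filing):
    with tracer.start_as_current_span("score_filing") as span:
        span.set_attribute("filing.case_type", filing.get("filing_type", "unknown"))

        with tracer.start_as_current_span("entity_resolution"):
            entity = resolve_entity(filing)

        with tracer.start_as_current_span("risk_scoring"):
            score = compute_risk_score(entity)

        span.set_attribute("risk.score", score)
        return score
```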
Enriched logging: not just technical data, but logs enriched with business context like case type, risk level, and client.
Integrated metrics: we tied system KPIs like latency and throughput to business outcomes like detection accuracy and compliance completeness.
This helps bridge engineering and compliance; both now speak a shared language.
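A small sketch of what business-context-enriched logging can look like in Python, using the standard logging module; the JSON formatter and field names are an assumption for illustration, not our exact schema.

```python
import json
import logging

# Sketch of business-context-enriched, structured logging.
# The JSON layout and fields (case_type, risk_level, client_id) are illustrative.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "message": record.getMessage(),
            "level": record.levelname,
            "case_type": getattr(record, "case_type", None),
            "risk_level": getattr(record, "risk_level", None),
            "client_id": getattr(record, "client_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("risk-pipeline")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A technical event carrying the business context compliance actually needs.
logger.info(
    "risk score published",
    extra={"case_type": "chapter_11", "risk_level": "high", "client_id": "ACME-001"},
)
```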
Observable serverless architecture.
Let's dig into the architectural observability layers.
Ingestion layer: we capture quality metrics for every document, format issues, missing fields, and delays, all in real time.
Spark workflows: every transformation is traced, enabling us to monitor matching precision, detect bottlenecks, and measure scoring confidence.
Airflow pipelines: each step logs validation status, error categories, retry rates, and regulatory checkpoints.
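As one possible shape for those ingestion-layer quality metrics, here is a sketch using the Prometheus Python client; the metric names and label values are illustrative assumptions.

```python
from prometheus_client import Counter, Histogram

# Sketch of ingestion-layer quality metrics; names and labels are illustrative.
DOCUMENT_ISSUES = Counter(
    "ingest_document_issues_total",
    "Documents with quality problems, by issue type",
    ["issue_type"],  # e.g. format_error, missing_field, late_arrival
)
INGEST_DELAY = Histogram(
    "ingest_delay_seconds",
    "Delay between filing time and ingestion time",
)

def record_quality(document, delay_seconds):
    # Count each quality flag attached by the ingestion function.
    for flag in document.get("quality_flags", []):
        DOCUMENT_ISSUES.labels(issue_type=flag).inc()
    INGEST_DELAY.observe(delay_seconds)
```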
This isn't just observability for uptime, it's observability for trust.
Knowing that the system works, and understanding why it works.
Next come our transformation results.
Here is what we achieved.
We reduced average processing time by 78%, from two hours 40 minutes to 31 minutes, and improved detection accuracy by 92%.
Risk alerts now reach decision makers in under 45 seconds, and throughput is up 350% with 99.97% uptime.
These aren't vanity metrics; each one translates into less risk exposure, faster interventions, and lower compliance costs.
Error reduction through visibility.
One of our biggest wins: error reduction.
False positives dropped by 87%.
False negatives, the most dangerous ones, fell by 94%.
Timing issues and data mismatches are now detected and resolved early in the flow.
This has slashed wasted analyst hours and prevented missed bankruptcies, both of which directly protect our clients and reduce operational burden.
Building observable components.
We didn't get here by chance.
We established a unified logging strategy: structured logs across services, enriched with client and risk metadata.
We deployed end-to-end tracing with OpenTelemetry.
Now every match and transformation is traceable.
We implemented multidimensional metrics, not just system health, but KPIs like documentation completeness, detection latency, and match confidence.
Now engineering and compliance teams share dashboards, share alerts, and solve problems together.
Here comes the compliance transformation.
Compliance has seen a radical shift.
Documentation completeness is 97%.
Audit prep time is down by 71%.
Evidence gathering is 89% faster.
Audits that took weeks now take hours.
Every risk signal has a traceable path.
Every regulatory inquiry has structured, reliable evidence behind it.
This level of visibility doesn't just reduce risk.
It builds credibility with regulators and stakeholders alike.
Dashboard design principles.
Dashboards are more than just charts.
You know that, right?
Executives get KPIs and financial impact.
Engineers get deep dives into task failures and latencies.
Compliance teams get traceability and alert logs.
We apply business context integration: every technical metric is paired with a business indicator.
And we use historical correlation tools, identifying patterns, emerging threats, and opportunities for prevention before issues recur.
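To show the kind of historical correlation we mean, here is a small Python sketch with pandas that pairs a technical metric with business indicators over time; the file layout and column names are assumptions for the example.

```python
import pandas as pd

# Sketch of pairing a technical metric with business indicators over time.
# The file layout and column names are assumptions for this example.
history = pd.read_csv("pipeline_history.csv", parse_dates=["day"])
# Expected columns: day, p95_latency_seconds, detection_accuracy, false_negatives

# How strongly does pipeline latency track detection outcomes?
print(history[["p95_latency_seconds", "detection_accuracy", "false_negatives"]].corr())

# Flag days where latency drifts well above its recent baseline,
# before accuracy starts to degrade.
baseline = history["p95_latency_seconds"].rolling(30, min_periods=7).median()
drift = history[history["p95_latency_seconds"] > 1.5 * baseline]
print(drift[["day", "p95_latency_seconds"]])
```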
Our implementation roadmap.
Our rollout followed a clear roadmap.
First, a telemetry gap assessment, mapping what was missing across the tech and business layers.
Second, instrumentation: adding logging, tracing, and metrics across serverless, Spark, and Airflow.
Then dashboarding: designing role specific views with stakeholder feedback.
Next, enablement: training teams, creating playbooks, and embedding observability into our culture.
Each step delivered immediate value while building toward a fully observable, resilient pipeline.
Key takeaways and looking ahead.
To wrap up, here are four principles we believe in.
Design for business context: metrics should matter to your mission.
Build it in, not on: observability should be part of your architecture.
Enable end-to-end visibility: across all layers, from ingestion to alert.
Quantify the value: show how it improves risk detection and compliance speed.
Looking ahead, we are exploring AI augmented observability, using machine
learning to detect hidden risk signals and optimize compliance at scale.
Thank you for your time.
I'm happy to take your questions or connect afterwards to discuss how observability can reshape your financial risk systems, just like it did ours.
Thank you so much.