Conf42 Incident Management 2025 - Online

- Premiere: 5PM GMT

Cloud Modernization Meets Compliance: Resilience and Speed in Financial Services

Abstract

Discover how top financial firms turn regulatory complexity into a competitive edge. Learn how embedded compliance in cloud architectures boosts resilience, slashes latency, and accelerates innovation, turning incidents into opportunities for growth and differentiation.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi, this is Nirup. Thanks for being here. This talk is about one practical idea: make compliance a built-in feature of how we design and run systems. In financial services the scale is huge, with trillions of dollars moving every day. When a system pauses even for a moment, real people feel it and the cost climbs fast. So our aim is simple: systems that rarely fail, and when they do, recover quickly and leave a clean trail of evidence as they go. That way teams move faster with confidence, customers feel safe, and the proof we need is created as part of normal work.

If that's the goal, let's ground it in the world we all live in: the financial services challenge. Here's the reality. Millions of transactions touch real balances and real trust. People expect services to be there whenever they need them. Threats move fast and patience is short, and when something goes wrong the cost shows up in both money and reputation. That's why incident management cannot sit on the sidelines; it sits where reliability, regulation, and brand meet. An incident is not only technical; it can also be business and regulatory, and our methods must respect all three at once. So how do we make the safe path also the easy path? That's where modernization helps.

Why cloud modernization now? Modernization is not only "go faster"; it is "set better defaults." When the safe path is also the fast path, teams stop negotiating and start building. We use repeatable patterns for networks, identity, storage, and compute. We put clear rules in the build and release steps, so risky changes never reach customers. We keep unchangeable logs, so the system can tell its own story later. The result is fewer surprises and calmer operations.

Quick proof from the field, then we'll keep moving. We moved a fragile service into a landing zone with strong guardrails: same team, same code. The next incident did not feel like a fire drill gone wrong; it felt like a drill we had already practiced. We knew what changed, when it changed, and the safe path back. That's the power of good defaults.

Now let's talk about compliance as a built-in advantage. Think of compliance like seat belts designed into a performance car: the car still moves fast, but safety is part of the experience from the start. For us, that means approved reference designs, reusable building blocks, and evidence that is created automatically during day-to-day work. Reviews get shorter, rework drops, and releases become more predictable. Customers notice the stability, regulators notice the clarity, and leaders see steady delivery. This isn't extra paperwork; it's the system proving itself while we build it.

If compliance is a feature, what rules are we encoding? Let's map the landscape. Rules are a map of what society cares about. Some focus on accurate reporting and fair markets, some on privacy and proper use of data, some on banking safety and open payments. Newer ones focus on operational resilience and responsible use of learning systems like AI. We do not need to memorize every chapter; we turn each rule into code and steps: label and protect sensitive data from the start, plan failovers and practice them, and keep records a person outside the team can understand in minutes.
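Since "unchangeable logs" and outsider-readable records come up several times in this talk, a small sketch may help make the idea concrete. Below is a minimal, hypothetical append-only audit log in Python, where every entry commits to the hash of the previous one so that quietly rewriting history breaks the chain; the field names and hashing scheme are illustrative assumptions, not the speaker's actual tooling.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only log: each entry embeds the hash of the previous entry,
    so silently editing past records breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": time.time(), "event": event, "prev": prev_hash}
        # Hash a canonical JSON encoding of the entry body.
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any edit to a past entry is detected."""
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "event", "prev")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True


log = AuditLog()
log.append({"action": "deploy", "service": "payments", "version": "v42"})
log.append({"action": "failover", "service": "payments", "to": "region-b"})
assert log.verify()  # evidence produced as part of normal work
```

In production this role is usually played by write-once (WORM) storage or a managed ledger service rather than an in-memory list; the sketch only shows the shape of the idea.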
With that map in place, let's look at the "how" in our architecture: hybrid and multi-cloud strategies. Architecture is where promises become real. We manage policy and visibility across the company while keeping sensitive customer data in the region where it belongs; that respects data-residency rules without losing control. We design for a small blast radius: clear boundaries between services, sensible timeouts and retries, and fast ways to switch off risky features. We practice failovers until they become boring. The most critical systems run live in more than one place; for the others, we keep a warm backup that can take over quickly.

One picture to lock it in, then we'll dive into the examples. Think of a highway. Each service has its own lane, with dividers and a shoulder. If a tire blows in one lane, the whole road doesn't shut down. Those dividers and shoulders are your boundaries, timeouts, retries, and feature switches.

Now, three short stories that show this in action. First, regulatory reporting. Reporting moved from batch runs to real-time streams, so instead of finding delays tomorrow, we saw them in minutes. Each step wrote an unchangeable record, so the audit story was already there. Reports that took days ran in minutes; audit prep that took weeks took days. When a delay happened, we could point to the exact stage that had slowed down, show what was affected, and prove recovery once it finished. Accuracy went up, cost went down, and trust improved.

The next case study is fraud detection. We wanted to stop bad transactions without bothering good customers. The turning point was managing the whole model lifecycle, not just the score. The data the model uses stayed fresh, any change needed the right approval, and if a new version misbehaved we could return to the last good version with a single decision, and explain what changed and why. False alarms dropped, losses dropped, and investigators chased better signals.

The last example is speed and discipline in trading. Surveillance moved from after-the-fact to real time, and limits were enforced as orders flowed. Unusual patterns were flagged as they appeared, and the records of these checks traveled side by side with the trades. In a market surge, the system did not ask for a meeting; it followed a safe playbook. Incidents were shorter, reviews were clearer, and confidence from the business and from regulators improved.

Following these three case studies, what makes these wins repeatable? The foundation of all of this is infrastructure as code. Write down how environments must look and let the platform build them from that description. New environments arrive aligned to standards: identities set correctly, networks segmented the right way, storage locked down as intended. Changes happen through a clear review instead of one-off tweaks, and if something goes wrong, returning to a known good state is quick, because the platform knows what good looks like. The rules sit next to the configuration and the application, so "what changed and why" is an easy question to answer. Teams move faster and auditors see the exact history.

Now, let's put safety rules into the daily flow. Instead of a quarterly checklist, we place rules directly into the build and release steps and into the platform. If a setting would expose data or open a risky path, it does not pass. The system explains which rule was violated and how, and shows the approved pattern.
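As a rough sketch of what such a pipeline gate can look like, here is a minimal Python check over a hypothetical declarative config; the rule names, config fields, and pattern URL are invented for illustration and are not the speaker's actual tooling.

```python
# Minimal policy-as-code gate: run it in the build/release pipeline,
# block the change on any violation, and point to the approved pattern.
APPROVED_PATTERN = "https://example.internal/patterns/secure-storage"  # hypothetical

RULES = [
    # (rule id, predicate that must hold, plain-language explanation)
    ("storage.no-public-access",
     lambda cfg: not cfg.get("public_read", False),
     "Storage must not be publicly readable."),
    ("storage.encryption-at-rest",
     lambda cfg: cfg.get("encryption") == "kms",
     "Data at rest must be encrypted with managed keys."),
    ("logging.immutable-audit-trail",
     lambda cfg: cfg.get("object_lock", False),
     "Audit logs must be write-once (object lock enabled)."),
]


def check(config: dict) -> list[str]:
    """Return an explanation for every violated rule; an empty list means pass."""
    return [
        f"{rule_id}: {why} Approved pattern: {APPROVED_PATTERN}"
        for rule_id, holds, why in RULES
        if not holds(config)
    ]


if __name__ == "__main__":
    proposed = {"public_read": True, "encryption": "none", "object_lock": False}
    problems = check(proposed)
    for p in problems:
        print("BLOCKED:", p)
    raise SystemExit(1 if problems else 0)  # non-zero exit stops the release
```

In practice, organizations often express gates like this with a policy engine such as Open Policy Agent or a cloud provider's native policy service; the sketch only shows the shape: declarative rules, a clear explanation of what failed, and a fast fail before the change reaches customers.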
This turns long review meetings into fast, clear feedback while the work is still in progress; most issues are fixed in minutes. The platform also creates a detailed, timestamped record of what fired, who changed what, and how it was fixed. Records like these are gold for audits and for learning after an incident.

So let's turn this into daily habits. Bring security in earlier in development, when fixes are cheap. Classify data and apply privacy controls by default, so sensitive information is handled correctly without friction. Keep logs in a way that cannot be quietly rewritten. Rehearse resilience like a fire drill, so failover and emergency moves feel familiar, not scary. The product feels solid to customers and the platform feels safe to operate. Launches are smoother, incidents are fewer, and time spent proving compliance shifts toward improving things for customers.

Even so, things still break, which brings us to incident management and compliance. The key is steady, well-practiced motion. Confirm the signal and decide how serious it is. Start a clean log right away. Pause risky change paths. Use pre-approved moves, like switching off a feature, limiting traffic, or failing over to a safe location. Communicate in plain language to the right people. While you work, let the system collect the facts: current settings, recent changes, who had access, key measurements, the trace of who knew what and when, et cetera. And don't call it fixed until the service holds steady for a meaningful window. This protects customers and creates the evidence to explain what happened and why the fix worked.

What does this actually buy us? Let's talk about outcomes, the business impact. When safety and compliance are part of the platform, new environments come online quickly and predictably. Recovery is faster and less dramatic. Conversations with regulators are clearer, because your evidence is produced as part of normal work. Customers notice reliability, leaders see momentum, and every incident becomes a chance to improve the platform itself: after the fix works, promote it to the shared template, so the next team never meets the same problem. Over time, speed, reliability, and audit readiness rise together instead of fighting each other.

So what should you do, say, next week? Start with one visible win. Pick a single painful control that burns time on friction. Automate it end to end. Place it inside a golden path that teams can use right away. Make that path the fastest way to production, not just the most compliant one. Give the path an owner and a short public list of improvements. Each month, measure a few signals: how often you keep your service promise, how long it takes to recover, how many changes struggle, and how many hours you save in audit work. Share the trend so people can see progress, and repeat with the next high-value control. It's really that simple.
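Those few signals map loosely onto familiar delivery metrics: availability against a service promise, mean time to recover, and change failure rate. A minimal sketch, assuming hypothetical incident and change records (not the speaker's actual data model), might look like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Incident:
    started: datetime
    resolved: datetime


@dataclass
class Change:
    deployed: datetime
    caused_incident: bool


def availability(incidents: list[Incident], window: timedelta) -> float:
    """Share of the window spent outside incidents: how often the promise held."""
    down = sum((i.resolved - i.started for i in incidents), timedelta())
    return 1.0 - down / window


def mean_time_to_recover(incidents: list[Incident]) -> timedelta:
    """Average time from detection to steady recovery."""
    total = sum((i.resolved - i.started for i in incidents), timedelta())
    return total / max(len(incidents), 1)


def change_failure_rate(changes: list[Change]) -> float:
    """Fraction of changes that struggled badly enough to cause an incident."""
    return sum(c.caused_incident for c in changes) / max(len(changes), 1)
```

The fourth signal, hours saved in audit work, is usually tracked directly against a baseline. The point is not the exact formulas but having a small, shared set of numbers whose trend everyone can see.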
Okay, that brings us to the end of the presentation. Remember, reliability and compliance are not enemies. When we build safety into the path to production and practice our response, incidents become rarer, shorter, and safer. The story you tell regulators becomes the same story you tell engineers, because both come straight from the system. Every time you improve the shared template, you give a gift to the next team and to the next customer. Let's make trust our advantage and ...

Nirup Baer

Program Manager @ Wells Fargo

Nirup Baer's LinkedIn account


