Conf42 Kube Native 2025 - Online

- premiere 5PM GMT

Securing AI-Driven Finance: Navigating Risks in Cloud-Native Modernization

Abstract

Discover how AI is transforming finance and why cloud-native adoption brings hidden risks. Learn real-world attack vectors, cutting-edge defenses like XAI and anomaly detection, and a practical roadmap to secure AI-driven systems without slowing innovation.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello everyone. I hope you are enjoying the conference so far. Let me introduce myself. I am Manisha Sengupta, and I have spent around 18 years in the world of technology, of which more than a decade has been at the intersection of finance and technology modernization, helping organizations modernize their systems while still keeping trust and risk as their top priority. Today I'll be covering my topic, securing AI-driven finance.

Before I start, let me ask a question. Have you ever heard about something like this? A popular smart home app uses AI in the cloud to detect unusual activity, for example notifying you if someone enters your house. To make the system scalable, the AI model runs on a cloud platform and communicates over public APIs. One day, a user unknowingly exposes their API key by syncing with a third-party app. And guess what? A hacker discovers it and starts sending queries to this API, learning patterns about how and when the home is empty. Suddenly the very technology that is meant to protect the user becomes a tool for intrusion, all because the cloud-native AI wasn't secured end to end. When I thought about this, it hit me: the real threat isn't always the big, obvious hack. It's the quiet patterns that we often overlook.

As financial institutions, we are experiencing a real revolution. AI systems now power fraud detection, credit scoring, trading, customer engagement, and more. At the same time, cloud-native architectures like containers and microservices are replacing our traditional systems. This creates opportunities, no doubt, but it also raises risks: financial losses, reputational damage, regulatory penalties, and whatnot.

The evolution of AI in finance has come a long way. Fraud detection now relies on sophisticated machine learning models analyzing millions of transactions per second, and enhanced learning capabilities are powering personalized services at rapid speed. But guess what lags behind? Security. And that leaves us with blind spots.

Cloud-native technologies such as Kubernetes, microservices, event-driven architectures, and serverless computing allow enormous flexibility and scalability, no doubt: flexible deployment, loosely coupled development, near real-time processing, and so on. But at the same time, they also create risks: limited visibility into model behavior, poor testing of adversarial scenarios, weak governance, and too often a focus on speed over security. These threats are clear and growing every day.

In fact, one of the most dangerous threats comes from adversarial attacks, which trick models into making bad decisions. This involves carefully crafted inputs designed to fool the AI model into making mistakes while still looking legitimate to a human. Imagine a fraud detection system where an attacker could craft transactions that the AI believes are safe while in reality they are fraudulent. And as methods evolve, attackers don't even need insider knowledge anymore: black-box attacks can now succeed even with limited information.
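To make the adversarial-attack idea above concrete, here is a minimal, hypothetical sketch in Python. The toy data, the scikit-learn logistic regression "fraud model", and the feature names are all made up for illustration; it simply shows how the smallest straight-line change that crosses a model's decision boundary can turn a clearly fraudulent transaction into one the model scores as safe.

```python
# Hypothetical sketch only: toy data and a toy model, not a real fraud system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy features: [amount_zscore, tx_count_last_hour, new_merchant_flag]
X_legit = rng.normal(loc=[0.0, 2.0, 0.1], scale=0.5, size=(500, 3))
X_fraud = rng.normal(loc=[3.0, 8.0, 0.9], scale=0.5, size=(500, 3))
X = np.vstack([X_legit, X_fraud])
y = np.array([0] * 500 + [1] * 500)

model = LogisticRegression(max_iter=1000).fit(X, y)

# A clearly fraudulent transaction, correctly flagged by the toy model.
tx = np.array([[2.8, 7.5, 1.0]])
print("original fraud score:", model.predict_proba(tx)[0, 1])

# Adversarial step: move the transaction just past the decision boundary
# along the weight direction -- the smallest change that flips the verdict.
w = model.coef_.ravel()
b = model.intercept_[0]
margin = (tx @ w + b).item() / np.linalg.norm(w)  # signed distance to boundary
adv_tx = tx - (margin + 0.1) * (w / np.linalg.norm(w))
print("perturbed fraud score:", model.predict_proba(adv_tx)[0, 1])
```

The point of the sketch is not the specific numbers but the mechanism: a perturbation that is small relative to the feature space is enough to change the model's decision while the transaction still looks plausible.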
Data poisoning, which corrupts training data sets, is another problem, subtle but devastating. During training, attackers slip malicious data into the data sets and corrupt the learning process from the very start. For instance, a poisoned credit scoring model might unfairly discriminate against a certain group of people while still looking very effective. Overall, such attacks can persist unnoticed for a really long period of time, quietly undermining both fairness and compliance.

Model extraction or inversion, which steals algorithms and sensitive data, is also increasing. Attackers can reverse engineer AI models by analyzing their outputs. What do they do? They extract sensitive data such as customer spending habits, our proprietary trading strategies, et cetera. This is more than a breach of security; I say this is a breach of privacy and trust. And with AI models increasingly offered via APIs, the attack surface expands dramatically.

Let's not forget cloud vulnerabilities themselves, like container escapes, misconfigurations, et cetera. Kubernetes has become the backbone of cloud-native finance, but it is complex, and complexity invites risk. Misconfigured role-based access control can give attackers excess privileges. Container images from public repositories may already include vulnerabilities. Weak network policies can allow lateral movement between containers. When everything is interconnected, one small misstep can open the entire cluster to attack. Containers, Kubernetes, and ephemeral infrastructure definitely bring flexibility, but they also bring complexity: overly permissive access, misconfigured roles, and the difficulty of monitoring short-lived containers make it very hard to secure AI workloads. This means auditability and rapid response are major challenges.

Here is a real-world fraud detection attack scenario. Imagine this: attackers compromise a developer's account, exploit weak network policies, corrupt transaction data pipelines, and launch real transactions that successfully bypass fraud detection. All of this could happen without raising any alarm if security isn't embedded throughout the system. Or take trading systems: adversaries study algorithm outputs so they can reverse engineer the logic, manipulate the market, and front-run the institution's own trades. In distributed cloud-native systems, such manipulation can look very much like normal activity, making detection even harder.

Then comes the human factor. Humans are another important part of this equation; technology isn't the only risk. Data scientists focus on accuracy, not security, so common oversights happen, like using unvalidated public data sets, insufficient data sanitization, et cetera. Developers may overlook secure practices in shared environments, and overly broad permissions or vulnerabilities in open source libraries can come into the picture. DevOps teams may cut corners under pressure, choosing performance over security; rapid development and testing cycles can lead to cutting those corners. Lastly, a critical skill gap is present: there is a global shortage right now of professionals who are skilled in both AI and cybersecurity.

So how do we build resilience? There are four pillars that we have to abide by. Zero trust, yes, zero trust across the whole AI pipeline is essential. Strong model governance with versioning and security checks must be in place. Robust data security can ensure integrity, privacy, and lineage tracking. And infrastructure security with defense-in-depth principles tailored for AI can help.
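As one small, hypothetical illustration of what zero trust and model governance can look like in practice, here is a Python sketch of a pre-deployment check: a serving process refuses to load any model artifact whose hash does not match a signed-off registry entry. The registry file format, field names, and file paths are assumptions made up for this example, not an established standard or library.

```python
# Hypothetical sketch: verify a model artifact against a signed-off registry
# before serving it. Registry format and field names are illustrative only.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the artifact in chunks so large model files are handled safely."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(artifact: Path, registry_file: Path) -> bool:
    """Refuse to serve any model that is not registered, whose bytes differ
    from what was reviewed, or that lacks a recorded security sign-off."""
    registry = json.loads(registry_file.read_text())
    entry = registry.get(artifact.name)
    if entry is None:
        print(f"{artifact.name}: not registered, refusing to load")
        return False
    if entry["sha256"] != sha256_of(artifact):
        print(f"{artifact.name}: hash mismatch, possible tampering")
        return False
    if not entry.get("security_review_passed", False):
        print(f"{artifact.name}: no security sign-off on record")
        return False
    return True

# Usage (illustrative, hypothetical paths): load only if the check passes.
# if verify_model(Path("fraud_model_v12.bin"), Path("model_registry.json")):
#     model = load_model("fraud_model_v12.bin")  # hypothetical loader
```

The design choice here is the "deny by default" posture: anything not explicitly registered and reviewed is treated as untrusted, which is the essence of zero trust applied to the model pipeline.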
In short, we need to establish a multilayer defense system. No single security measure is enough. We need defense in depth: application security, data security, AI-specific controls, infrastructure security, everything. By layering defenses, we can protect our AI-driven finance evolution.

As AI takes a central role in security, it's not enough for our models to be accurate; they must also be explainable. Implementing explainable AI, or XAI, allows us to understand why a system flags a threat, helping build trust, speed up investigations, and reduce false positives. In security, clarity isn't just a feature, it's a necessity, and on that front XAI is a powerful ally for us. It also helps us detect anomalies, expose hidden biases, and meet regulatory requirements. However, we need to be careful here as well: XAI must be implemented cautiously, balancing transparency, performance, and security.

We need to think about multi-layered defense for anomaly detection as well. Security must work across layers: the infrastructure, model behavior, and data layers. By correlating anomalies, say network spikes, with model drift and data shifts, we can distinguish benign glitches from genuine coordinated attacks.
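Here is a minimal, hypothetical sketch of that cross-layer correlation idea in Python: three signals, a network request rate, the model's flagged-fraud rate, and an input-drift score, are each converted to a z-score against recent history, and only a deviation in two or more layers at once is escalated as a possible coordinated attack. The signal names, thresholds, and escalation rule are illustrative assumptions, not a vetted detector.

```python
# Hypothetical sketch: correlate anomalies across infrastructure, model
# behavior, and data layers before escalating. Thresholds are illustrative.
import numpy as np

def zscore(current, history):
    """How far the current value sits from its recent history, in std devs."""
    history = np.asarray(history, dtype=float)
    std = history.std() or 1.0
    return abs(current - history.mean()) / std

def correlate_layers(net_history, net_now,
                     rate_history, rate_now,
                     drift_history, drift_now,
                     threshold=3.0):
    signals = {
        "network_requests": zscore(net_now, net_history),
        "model_fraud_rate": zscore(rate_now, rate_history),
        "input_drift": zscore(drift_now, drift_history),
    }
    anomalous = [name for name, z in signals.items() if z > threshold]
    if len(anomalous) >= 2:
        return "possible coordinated attack", anomalous
    if anomalous:
        return "isolated glitch, keep monitoring", anomalous
    return "normal", anomalous

# Example: requests spike, the flagged-fraud rate collapses, and the input
# distribution shifts at the same time -- deviations in multiple layers
# together are what trigger escalation.
verdict, layers = correlate_layers(
    net_history=[1000, 1100, 980, 1050], net_now=5200,
    rate_history=[0.031, 0.029, 0.030, 0.032], rate_now=0.004,
    drift_history=[0.02, 0.03, 0.02, 0.02], drift_now=0.41,
)
print(verdict, layers)
```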
Beyond technology, governance is a must. Risk management should be in place for both technology and business risks. Define clear accountability and ownership for development, deployment, monitoring, and maintenance cycles. Continuous audit and compliance should be in place as safeguards, including specific provisions for AI-related security events. Ethical safeguard procedures should be established as well, like bias testing, fairness evaluations, et cetera. These frameworks keep institutions secure, compliant, and trusted.

To move forward, organizations must assess current systems and their risks, prioritize critical systems and high-impact vulnerabilities, and choose technologies that are compatible with cloud-native needs. The human factor is important: develop skills and possibly create new teams focused on AI and security together. And we must also prepare for future challenges such as quantum threats, evolving regulations, and an expanding AI attack surface.

With this, I will conclude my talk. I hope you enjoyed this session, and I hope you enjoy the rest of the conference. Thank you. You can reach out to me on LinkedIn; look up Manisha Sengupta. I am based in New Jersey. Thank you again for listening to me.

Manisha Sengupta

Assistant Vice President Systems, Investor Services Technology @ Brown Brothers Harriman

Manisha Sengupta's LinkedIn account


