Conf42 Kube Native 2025 - Online

- premiere 5PM GMT

Building Resilient AI: A Framework for Enterprise Security and Governance

Abstract

Discover how to turn AI security and governance from a compliance headache into a competitive advantage. Learn practical strategies to safeguard data, ensure trust, and build resilient AI systems that accelerate innovation without compromising on risk.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi, I am Maurya Priyadarshi. I am from Manipal University, India, currently working at Meta Platforms, and I'm happy to be speaking at Conf42 Kube Native 2025. I have a lot of experience with AI, and we have developed some very interesting AI tools here at Meta, but this is my own research on building resilient AI, something most of us may not think about: a framework for enterprise security and governance. We are transforming AI security and governance from a compliance burden into a strategic advantage. So let's look at it. The first topic is the AI transformation challenge. Artificial intelligence isn't just changing business; it is fundamentally redefining enterprise operations, from streamlining automated decision making to supercharging predictive analytics. AI systems are now critical engines driving productivity and innovation across every industry. We all know this, but what are some of the security risks related to AI? Let's talk about that in a bit of detail now. What I've found is that the critical AI security risks can be classified into data leakage, model manipulation, compliance failures, and bias and fairness. Wherever AI comes in, we know there could be a data leakage issue, which is unauthorized exposure of sensitive data such as PII and trade secrets. We all know about the different ways PII data can be exposed without authorization. The next one is model manipulation, which is malicious alteration of the AI's behavior, because the model is in the hands of whoever is developing it, and its behavior depends on how it is being developed and which data sources it has access to. Then come compliance failures: just because someone is developing AI doesn't mean they have done all the compliance checks. And last, but not least, is bias and fairness, which we all know about; there could be bias in the AI.
Now, talking about the innovation risk gap: traditional security frameworks are proving critically inadequate for the dynamic and complex landscape of AI systems, and this glaring disconnect generates significant vulnerabilities. Next we are going to talk about the framework foundation, which consists of four core pillars. The first is governance structures: establishing robust governance frameworks with clear roles and responsibilities to ensure accountable oversight and strategic decision making for AI systems. The next is technical safeguards: deploying advanced technical safeguards, including cutting-edge encryption and granular access controls, to fortify AI environments against evolving threats and data breaches. Compliance alignment is the next one: aligning operations seamlessly and proactively with global regulatory requirements. And the last pillar is continuous monitoring; without continuous and dynamic monitoring, the foundation is incomplete. Moving on to the comprehensive AI governance framework: the AI governance board, comprising senior executives and cross-functional leadership, spearheads the strategic direction, ratifies critical policies, and optimizes resource deployment for all AI initiatives. That's the board which is primarily responsible for it. Next is the AI ethics committee. This becomes very important when there are cross-functional teams and legal counsel who need to be involved in AI development, because when you are developing something, the ethics have to be kept in mind. And finally there has to be a review panel, right? The review panel consists of expert engineers, security specialists, and compliance officers, and it performs rigorous technical evaluation of the AI systems.
The engineers may not be equipped enough for all of these on their own, so that's why all of these framework components are needed. Let's look at fortifying AI excellence through strategic identity and access management. How do you get access, how does everyone get access, and who are the people who get access? This means implementing robust multi-factor authentication, which we all know about, to secure access to critical AI systems, and defining and enforcing granular role-based access meticulously tailored to dynamic AI development and deployment functions. Now, streamlining the operations is the next big thing: dynamic access policies. Traditional IAM (identity and access management) frameworks often fall short, struggling to address complex, dynamic access patterns. The focus is usually on developing the AI, and nobody thinks about identity and access management; my point is to fortify all of this for AI excellence. Next is real-time anomaly detection, and how do we do it? It consists of data ingestion, pattern analysis, alert generation, and response coordination. Data ingestion is seamless, real-time ingestion and continuous monitoring of critical AI systems. Pattern analysis uses sophisticated machine learning algorithms to proactively detect and pinpoint subtle deviations and critical anomalies from established operational baselines. Alert generation, last but not least, is automated high-priority alert generation for rapid notification of security incidents, critical performance issues, and of course potential compliance violations. And response coordination is coordinated incident response and remediation. So this forms real-time anomaly detection. Then come automated compliance workflows; let's look at them.
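The role-based access idea just described could be sketched roughly like this. The role names, permissions, and the MFA flag below are hypothetical illustrations, not something from the talk:

```python
# Minimal sketch of granular role-based access for AI systems,
# gated behind multi-factor authentication. Roles and permissions
# here are made-up examples for illustration only.
ROLE_PERMISSIONS = {
    "ml_engineer":       {"train_model", "read_dataset"},
    "security_reviewer": {"read_audit_log", "read_model_config"},
    "deployer":          {"deploy_model", "read_model_config"},
}

def check_access(role: str, action: str, mfa_verified: bool) -> bool:
    """Grant access only when MFA succeeded AND the role permits the action."""
    if not mfa_verified:  # multi-factor authentication gate comes first
        return False
    return action in ROLE_PERMISSIONS.get(role, set())
```

The point of the sketch is the ordering: the MFA check is an unconditional gate, and only then is the granular, role-specific permission consulted, so every AI development and deployment function gets its own narrowly scoped role.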
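The four-stage anomaly-detection pipeline above (ingest, analyze against a baseline, alert) could be sketched, in a deliberately simplified form, as a z-score check against an operational baseline. The threshold value and metric format are assumptions, not from the talk:

```python
from statistics import mean, stdev

def detect_anomalies(baseline, observed, threshold=3.0):
    """Flag observations deviating more than `threshold` standard
    deviations from the established operational baseline.

    baseline: list of historical metric values (the "normal" profile)
    observed: iterable of (timestamp, value) pairs, i.e. the ingested stream
    """
    mu, sigma = mean(baseline), stdev(baseline)
    alerts = []
    for ts, value in observed:                      # data ingestion
        z = abs(value - mu) / sigma if sigma else 0.0  # pattern analysis
        if z > threshold:                           # alert generation
            alerts.append({"time": ts, "value": value, "z_score": round(z, 2)})
    return alerts  # alerts then feed response coordination / remediation
```

A real system would use the "sophisticated machine learning algorithms" the talk mentions rather than a single z-score, but the flow from ingestion through pattern analysis to high-priority alerts is the same shape.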
The first step is regulatory mapping: identifying applicable regulations such as GDPR and CCPA along with industry-specific requirements, and mapping those requirements to AI operations. The second is control implementation: deploying automated controls for data handling, consent management, and audit trail generation. Next is continuous assessment, which we talked about: compliance checks, gap analysis, and remediation tracking. And next is report generation: automated compliance reports, regulatory submissions, and stakeholder communication. Now let's look at monitoring and explainability tools. Model performance monitoring comes under this: real-time accuracy and drift detection, performance degradation alerts, resource utilization tracking, and output quality assessment. Then there are explainability features: decision pathways, feature importance analysis, and bias mitigation. Comprehensive monitoring builds stakeholder trust by providing transparency into the AI decision-making process and ensuring consistent, reliable performance. Next is building stakeholder trust. You know, here is where the stakeholder comes into the picture, and how do we build that trust? Executive confidence is the first piece: well-defined governance frameworks and robust risk management strategies provide leadership with comprehensive oversight and strategic control of all AI initiatives, enabling informed decision making. Then there is regulatory readiness: efficient compliance workflows and thorough audit trails ensure consistent adherence to regulations. Employee assurance, which is also important: transparent AI operations and clear ethical guidelines cultivate a workplace where employees confidently integrate AI-augmented processes, fostering innovation and collaboration. Now let's talk about some of the benefits.
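The drift-detection part of model performance monitoring can be sketched in a very reduced form: compare a rolling window of recent accuracy against the validation baseline and raise a degradation alert when the gap exceeds a tolerance. The 5% tolerance and the accuracy-only metric are assumptions for illustration:

```python
def check_drift(baseline_accuracy, recent_accuracies, tolerance=0.05):
    """Raise a performance-degradation alert when the rolling accuracy
    drops more than `tolerance` below the validation baseline."""
    rolling = sum(recent_accuracies) / len(recent_accuracies)
    drift = baseline_accuracy - rolling            # positive = degradation
    return {
        "rolling_accuracy": round(rolling, 3),
        "drift": round(drift, 3),
        "alert": drift > tolerance,                # performance degradation alert
    }
```

Production monitoring would track many signals at once (resource utilization, output quality, prediction distributions), but the baseline-versus-rolling-window comparison is the core of drift detection.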
Fortified risk mitigation is one of the benefits: proactive monitoring and robust controls drastically reduce the scale of security incidents. Next is accelerated cost efficiency: automation slashes manual compliance effort and expedites incident resolution. The next one is enhanced stakeholder trust: transparent and accountable governance cultivates unwavering confidence from regulators, strengthens customer loyalty, and empowers internal teams, fostering widespread adoption support. Then there is expedited AI deployment: integrated security controls and streamlined approval pathways accelerate the launch of new AI initiatives. You don't have to go back and redo everything every time; once the framework is established and all the pieces are in place, you have expedited AI deployment. And then there is strategic competitive advantage. Of course there's a lot of competition in the industry, and implementing an advanced framework like this basically redefines AI security and governance, transforming it from a mere compliance burden into a powerful strategic differentiator. This framework is engineered to fuel sustainable innovation, providing crystal-clear guidance for responsible AI development. Organizations can now boldly pursue ambitious AI initiatives, assured that comprehensive safeguards are firmly in place. With this strategic approach, AI security and governance is positioned as a potent enabler of innovation, not an impediment; it's basically a step in the right direction, the positive direction. Now, future-proofing your AI innovation: this is elevating your AI security and governance from a challenge to a distinct competitive advantage, and it consists of three points.
How do you future-proof? Strategically assess AI readiness, activate the core framework components, and cultivate unwavering stakeholder trust. We talked about all of this in detail, and that is future-proofing your AI innovation. Alright, that's the end of my presentation, and thank you very much.

Maurya Priyadarshi

Manager, Salesforce Applications @ Meta
