Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
My name is Tta Eshi and I'm a principal product manager at Palo Alto Networks.
I have over 12 years of experience working in the tech industry, and
for the last seven and a half years I have worked as a product manager
mainly in the cybersecurity space.
My areas of expertise include AI security, container security, and runtime security
to protect organizations' applications running on public and private clouds.
Today I'm here to talk about how to implement AI security so that enterprises
can innovate and introduce AI applications into their environments confidently.
So let's get started with AI adoption.
As many of you might already know, AI is consistently gaining
traction in the industry today.
According to a recent PwC report, around 86% of executives expect AI to become
a mainstream technology in 2025.
Of those companies, 60% are planning to integrate AI technology into
mission-critical infrastructure, and this is what makes AI security really important:
with the introduction of AI into mission-critical infrastructure, the exposure
and the risk related to AI security increase exponentially.
Before we talk about the security aspect, let's talk about why AI has
gained so much traction in the industry in the last couple of years.
The first main reason is enhanced decision making.
AI can analyze large sets of data and identify patterns and
insights that might be difficult for human beings to spot.
Based on that data, it can come up with recommendations and provide
forecasts, allowing organizations to make business-critical decisions.
Secondly, it can automate repetitive tasks, allowing employees
to focus on more innovative work.
On top of that, AI can optimize workflows, reduce human error, and improve
overall efficiency, thus allowing organizations to reduce cost and roll out
applications into their environments very quickly.
The third reason for AI adoption growth is that it opens up new business opportunities.
AI can help companies develop new products and offerings for their customers.
AI can analyze market trends and allow organizations to reach new markets
or new geographical regions as well.
Lastly, improved customer experience.
So recently I attended the cybersecurity conference called RSA in San Francisco.
As I was talking to a lot of customers and trying to understand what different
players in the industry are doing, I saw that a lot of organizations have
already started to roll out AI chatbots and virtual assistants for their
customers, because this allows organizations to provide 24/7 customer support.
On top of that, organizations use AI to personalize marketing campaigns and
offer recommendations based on customer preferences and data.
AI can also potentially identify problems early on, before they impact the
customer, thus improving the customer experience overall.
And these are the reasons why AI adoption has grown significantly in the
industry in the last couple of years.
Now that we know why AI adoption is growing in the market, let's talk
about some of the security threats that are introduced because of AI.
First is data vulnerability.
Organizations use training data to train their AI models.
Bad actors can take advantage of this and poison the training data,
which will impact the integrity of the AI system.
Second is privacy concerns.
Imagine a situation where an AI chatbot exposes customers' Social Security
numbers, or provides critical health information to people who don't
have the right access or privileges.
Privacy concerns are going to increase with the adoption of AI.
Third is AI-generated threats.
Bad actors can take advantage of sophisticated deepfakes, create
misinformation campaigns, and launch sophisticated cyberattacks at scale,
which might be difficult to detect.
The other aspect that companies need to consider is regulatory compliance,
because compliance needs differ across geographical regions as well as industries.
For example, the finance industry cares about FIPS compliance, and the
healthcare industry cares about HIPAA compliance.
So organizations that are rolling out AI applications need to be aware of the
potential penalties that might impact them if they do not follow the right guidelines.
Let's talk about even more advanced security threats that
are introduced because of AI.
Attackers have started to use AI to create realistic impersonations that
bypass traditional security, and they are specifically targeting
executives with these impersonations.
On top of that, they are creating deeply personalized attacks at a scale
they could not reach without AI.
They personalize email campaigns to gain entry into the organization, and once
they are inside, they try to move the attack laterally to the rest of the organization.
Next is model poisoning.
Just like they can poison the data, bad actors might try to poison the model as well.
That way, the integrity of the model might be impacted, and it might start
giving outputs that are not right or not recommended for customers or end users.
Next are adversarial attacks, where bad actors provide different inputs to
AI models to see what kind of information the model reveals.
Based on this, bad actors can reverse engineer some of the vulnerabilities of
the models, and then use that against the organization to gain access to
sensitive information they otherwise would not have access to.
Because of all these reasons, security of AI applications is becoming really
critical; it is the only way organizations can roll out AI applications
confidently, either to their internal users or to end customers.
Let's talk about the implications if AI applications are not secured properly.
First of all, there will be a huge financial impact.
If there is a data breach, there will be a huge financial loss for the
organization, and the recovery cost will be significantly higher than the
investment they could have made in protecting the AI applications.
Secondly, there might be a huge impact on reputation if customers' sensitive
data leaves the organization and ends up in the hands of bad actors, or if
there is a ransomware attack.
This impacts customer retention and hence the revenue of the
organization in the long run.
Lastly, there are legal consequences if regulations are not followed properly.
Because of all these reasons, organizations need to invest in AI security now,
so that they can confidently roll out AI applications to their end customers
and internal users.
Now that we know AI security is really important to ensure AI adoption across
industries, let's understand the key aspects of AI security.
We will start with secure model deployment.
First of all, the model should be protected at all times.
It should be protected against DDoS attacks, and the data transfer happening
between the application and the model should be secured at all times.
On top of that, the model should be protected against prompt injection
attacks, and the organization needs to ensure there is no data leakage,
so that the organization's sensitive data does not leave the organization.
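As a minimal illustration of those guardrails, the sketch below wraps a model call with an input check for prompt-injection phrasing and an output check for data leakage. The helper names and patterns are made up for this example; a production system would rely on far more robust detection than simple regular expressions.

```python
import re

# Illustrative patterns only; real systems use much more robust detection.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",        # US SSN-like pattern
    r"\b(?:\d[ -]*?){13,16}\b",       # card-number-like digit run
]

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that resemble known prompt-injection phrasing."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def contains_sensitive_data(text: str) -> bool:
    """Flag outputs that resemble SSNs or card numbers (data leakage)."""
    return any(re.search(p, text) for p in SENSITIVE_PATTERNS)

def guarded_completion(prompt: str, call_model) -> str:
    """Wrap a model call with an input check and an output check."""
    if looks_like_injection(prompt):
        return "Request blocked: prompt failed the input policy check."
    answer = call_model(prompt)       # call_model is the real model client
    if contains_sensitive_data(answer):
        return "Response withheld: output failed the data-leakage check."
    return answer

# Example usage with a stand-in model:
if __name__ == "__main__":
    fake_model = lambda p: "Sure, here is the summary you asked for."
    print(guarded_completion("Summarize this quarter's results.", fake_model))
```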
The next important aspect of AI security is data security and privacy controls.
Data resting in databases or in files should be encrypted at all times, so that
even if bad actors somehow get access to the database or the file system, they
still won't be able to use that data for bad purposes.
On top of that, there should be role-based access control in place to ensure
that only users with the right privileges are able to access the application
or the database.
That way, the attack surface is minimized significantly and the chances of a
data breach are reduced significantly as well.
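Here is a minimal sketch of those two controls, encryption at rest and role-based access control, using the Python cryptography library. The roles, permissions, and record contents are invented for the example; in practice the key would live in a secrets manager and the role model would be far richer.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# --- Encryption at rest: encrypt before writing, decrypt only on authorized reads ---
key = Fernet.generate_key()        # in practice, store this in a secrets manager
fernet = Fernet(key)

record = b'{"customer_id": 42, "ssn": "123-45-6789"}'
stored = fernet.encrypt(record)    # this ciphertext is what lands on disk
# Even if an attacker copies the database file, `stored` is useless without the key.

# --- Role-based access control: map roles to the actions they may perform ---
ROLE_PERMISSIONS = {
    "analyst":  {"read_reports"},
    "engineer": {"read_reports", "read_records"},
    "admin":    {"read_reports", "read_records", "write_records"},
}

def can(role: str, action: str) -> bool:
    """Return True only if the role explicitly includes the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def read_record(role: str) -> bytes:
    if not can(role, "read_records"):
        raise PermissionError(f"role '{role}' may not read records")
    return fernet.decrypt(stored)

print(read_record("engineer"))     # allowed
# read_record("analyst")           # would raise PermissionError
```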
On top of that, one of the commonly used architectures in the industry is
zero trust architecture, which essentially means "trust no one,
authenticate everyone."
Around 15 to 20 years ago, when cloud was not a big factor, architectures and
systems were designed in such a way that all connections or requests coming
from within the organization's infrastructure were considered valid, while any
connection or request coming from outside the organization's infrastructure
was considered invalid.
Now that AI adoption and cloud adoption have grown significantly, and data
resides not only in the data center but in the public cloud as well, the
architecture that was used 15 to 20 years ago is no longer applicable.
With zero trust architecture, every connection request is authenticated to
make sure it is coming from a valid user or a valid application.
It is also recommended that multi-factor authentication is in place, so that
even if bad actors get access to, let's say, a user's email or text messages,
the chances of a breach are reduced significantly.
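To illustrate the "trust no one, authenticate everyone" idea, here is a minimal sketch; the request fields and the authorization rule are simplified assumptions for this example, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    token_valid: bool       # a short-lived, signed access token was verified
    mfa_passed: bool        # second factor (authenticator app, hardware key) verified
    device_trusted: bool    # device posture check passed

def authorize(req: Request) -> bool:
    """Zero trust: every request is checked, regardless of where it comes from.

    There is no "inside the network, therefore trusted" shortcut; a request from
    the corporate LAN goes through exactly the same checks as one from the internet.
    """
    return req.token_valid and req.mfa_passed and req.device_trusted

print(authorize(Request("alice", token_valid=True, mfa_passed=True, device_trusted=True)))    # True
print(authorize(Request("mallory", token_valid=True, mfa_passed=False, device_trusted=True)))  # False
```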
On top of that, any system or user who needs to access a database or another
system should be given only the least privileges they need in order to do their job.
And to ensure that even if a part of the infrastructure is infected by malware
the threat stays contained, micro-segmentation policies are really important.
These policies ensure that threats do not move laterally to the rest of the
organization, so that even if part of the infrastructure or some of the
applications are infected by malware or a CVE, the same malware won't infect
the rest of the infrastructure.
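As a simplified illustration of micro-segmentation, the sketch below models segment-to-segment traffic rules as a default-deny allow-list. The segment names, ports, and flows are invented for the example; real deployments express this in network policies or firewall rules rather than application code.

```python
# Each entry says which source segment may talk to which destination segment and port.
# Anything not listed is denied, so a compromised workload in "web" cannot reach
# the database segment directly.
ALLOWED_FLOWS = {
    ("web",      "api",      443),
    ("api",      "database", 5432),
    ("api",      "ai-model", 8443),
}

def is_flow_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny check between workload segments."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS

print(is_flow_allowed("web", "api", 443))        # True: permitted path
print(is_flow_allowed("web", "database", 5432))  # False: lateral movement blocked
```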
The other thing, on top of micro-segmentation, that is important to consider
is runtime security protection.
Many of you might have heard about the Log4j vulnerability that was detected
in December 2021, even though it had existed in the software for seven-plus years.
Even organizations that deploy all the right security policies and patch their
applications against vulnerabilities from time to time are still vulnerable to
unpatched and unknown vulnerabilities.
That is why runtime security protection, where traffic is continuously
monitored and any anomalies or threats are detected, is important to have
at all times.
On top of that, it is good to have monitoring tools that can inspect any
anomaly that might indicate a compromise, to ensure your organization's
sensitive data stays within the organization.
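Here is a minimal sketch of that runtime-baseline idea. The workloads and destinations are invented for the example, and a real runtime security tool would learn the baseline automatically and inspect traffic far more thoroughly.

```python
from collections import defaultdict

# Baseline of destinations each workload normally talks to, learned during a
# quiet observation period (values here are made up for the example).
BASELINE = {
    "payment-service": {"db.internal:5432", "api.stripe.com:443"},
    "ai-inference":    {"model-store.internal:8443"},
}

alerts = defaultdict(list)

def observe_connection(workload: str, destination: str) -> None:
    """Flag any connection a workload has never made before (possible exploit,
    e.g. a Log4j-style payload calling out to an attacker-controlled server)."""
    if destination not in BASELINE.get(workload, set()):
        alerts[workload].append(destination)
        print(f"ALERT: {workload} -> {destination} deviates from its runtime baseline")

observe_connection("payment-service", "api.stripe.com:443")    # normal, no alert
observe_connection("ai-inference", "evil-c2.example.com:443")  # anomalous, alert
```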
Next, let's talk about building the AI system.
Many times in organizations, while they are designing the system architecture,
security is not considered upfront; it is an afterthought.
That shouldn't be the case.
While designing the architecture itself, software engineers and security
architects need to make sure the system and the architecture are secure by default.
On top of that, once the architecture is in place and development is in
progress, it is still important to do continuous assessments to ensure
the system is secure at all times.
Monitoring tools will help the organization check for any anomalies or
unauthorized access.
On top of that, it is important to have recovery planning and backups
available for your system and important information, so that if you are
attacked, once you contain the attack, you can quickly recover from your
backup data.
You can restore your website, database, and systems so that the disruption to
the end users is minimal.
Next, let's talk about securing the AI lifecycle.
As I mentioned on the previous slide, security starts from day one, when
organizations are starting to build the architecture or infrastructure for
the product.
So it is important to have comprehensive security requirements while building
the new system.
On top of that, it is important to decide how the data that is saved will be
handled while developing the models or any applications.
It is important to have secure coding standards so that as the system is
getting built, it is secure by design, and rigorous access control on who can
access what should be implemented from day one.
As the system is getting built, it is important to do thorough pen testing and
stress testing to ensure your system doesn't break when attackers are
attacking from all directions.
Additionally, regular vulnerability assessments and attack simulations should
take place on the system to identify any CVEs and fix them as soon as they are
detected, so that bad actors cannot take advantage of those vulnerabilities to
steal the organization's sensitive data.
Finally, as I mentioned on the previous slide as well, continuously monitoring
your system for anomalies is really important.
For example, let's say the organization has all their offices in the United
States, and all of a sudden an application or a user sitting in Asia or Africa
tries to access applications or data located in the United States; they
shouldn't be able to.
When such anomalies are detected on time, security can improve continuously,
ensuring that your AI applications, or any applications for that matter, are
secure, the organization's data is safe, and it is not going into the hands of
bad actors.
Next, let's talk about the regulatory landscape.
Over the years, different industries, located either in the United States or
outside it, have come up with their own compliance requirements.
For example, HIPAA is a very commonly used compliance framework for medical
professionals, FIPS is very commonly used by the finance industry, and there
are other regulatory standards like FedRAMP as well.
On top of that, GDPR is very commonly used in the EU region.
GDPR enforces comprehensive data protection requirements for all systems:
it requires explicit consent, it requires data minimization, and on top of that,
organizations have to provide the right explanation for automated decisions.
Now, with AI adoption increasing day by day, the EU is proposing a new AI Act,
which will provide regulations on how to use AI applications: it categorizes
AI applications by risk level, defines prohibited practices, and provides
transparency obligations, so that the data used to train AI models or to make
systems better is used in the right way.
Similarly, all the other compliance frameworks I mentioned, HIPAA, FIPS, and
FedRAMP, are taking into consideration the fact that AI adoption is increasing,
and they are changing their regulations so that they work well with the AI
applications that will be rolled out now or in the near future.
Before I end this presentation, I also want to take into consideration the
ethical considerations. First of all is fairness.
When we are training the AI model, we need to implement rigorous measures to
prevent bias against any specific population, and we have to ensure a
demographically balanced training dataset.
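As a small illustration of checking for a demographically balanced training dataset, the sketch below computes each group's share and flags under-represented groups; the group labels and the 20% threshold are arbitrary choices for the example, not a recommended standard.

```python
from collections import Counter

def group_balance(demographics: list[str]) -> dict[str, float]:
    """Return each group's share of the training set."""
    counts = Counter(demographics)
    total = len(demographics)
    return {group: n / total for group, n in counts.items()}

def flag_imbalance(demographics: list[str], min_share: float = 0.2) -> list[str]:
    """List groups whose share falls below a chosen threshold (here 20%)."""
    shares = group_balance(demographics)
    return [g for g, share in shares.items() if share < min_share]

# Toy example: group labels attached to training records.
sample = ["group_a"] * 70 + ["group_b"] * 25 + ["group_c"] * 5
print(group_balance(sample))    # {'group_a': 0.7, 'group_b': 0.25, 'group_c': 0.05}
print(flag_imbalance(sample))   # ['group_c'] -> collect more data or reweight
```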
The second important thing to consider is transparency.
We have to balance robust security with maintaining transparency into the system.
On top of that, there should be detailed documentation on all the security
controls, and it should be updated from time to time to ensure the architecture
and the system stay secure against new vulnerabilities.
On top of that, it is important to establish explicit data usage policies, and
they should be accessible to all the important stakeholders.
The third important aspect to consider is respect for privacy.
Individual rights and autonomy should always be considered while training AI models.
On top of that, AI models need to apply the data minimization principle to
reduce the vulnerability surface, and all systems should be patched for CVEs
from time to time.
Finally, no system is perfect, so in order to ensure AI is doing the right
thing, human oversight is absolutely important.
Just in case AI is doing something wrong, humans should be able to override
the mechanism or the steps that AI is performing.
On top of that, there should be a structured human review process, so that all
critical decisions are reviewed to make sure AI is behaving correctly, and if
AI is not performing up to the mark, humans should be able to override it at
all times.
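A minimal sketch of such a human-in-the-loop review, with invented risk levels and field names, could look like this: low-risk decisions are applied automatically, everything else waits in a queue for a reviewer who can always override the model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    description: str
    risk: str                       # "low", "medium", or "high"
    ai_recommendation: str
    final_action: Optional[str] = None

REQUIRES_REVIEW = {"medium", "high"}
review_queue: list[Decision] = []

def process(decision: Decision) -> None:
    """Auto-apply only low-risk decisions; queue the rest for a human reviewer."""
    if decision.risk in REQUIRES_REVIEW:
        review_queue.append(decision)
    else:
        decision.final_action = decision.ai_recommendation

def human_override(decision: Decision, action: str) -> None:
    """A reviewer can always replace the AI's recommendation."""
    decision.final_action = action

d = Decision("block customer account?", risk="high", ai_recommendation="block")
process(d)                      # queued, not applied automatically
human_override(d, "keep open")  # reviewer overrides the model
print(d.final_action)           # "keep open"
```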
So finally, let's talk about the key takeaways of this presentation.
As I mentioned before, AI adoption is increasing rapidly, and in order to
ensure organizations are able to successfully deploy AI applications in their
environments, security is very critical.
Security should be taken into consideration from day one; it should be secure
by design.
On top of that, organizations should develop micro-segmentation policies and
RBAC policies, they should protect their AI applications, models, and data,
and they should continuously evolve their security practices to ensure AI
applications are secure and none of their sensitive data leaves the organization.
On top of that, they also need to comply with the regulatory compliance
frameworks, such as HIPAA, GDPR, FIPS, and any other frameworks that are
applicable to that particular organization.
Ethical considerations are really important while training the models, and
human oversight should be taken into consideration and treated as a top
priority, to ensure AI is not doing something that it is not supposed to do.
I hope this presentation provided you with the insights you need into how
organizations can secure their AI applications, so that they can confidently
roll out AI applications for their employees and for end customers, in order
to improve their operational efficiency, reduce cost, increase customer
satisfaction, and increase their revenue and market position in the long run.
Thank you so much for your time.
I really appreciate it, and if you have any questions,
feel free to reach out to me.
Thanks a lot and have a nice day.