Transcript
This transcript was autogenerated. To make changes, submit a PR.
Okay.
Hello everyone.
I'm Karti Gain ami.
Today I'll be presenting my work on Cloud Native Defense in depth
security for mission critical services in managed Kubernetes.
This presentation focuses on how mission critical services can be securely deployed
in managed Kubernetes environments.
Let me walk you through the key topics I'll be covering today.
Here's the roadmap for today's talk.
I'll begin with an introduction that sets the context for why Kubernetes security
is important, especially in managed services like AKS, EKS, and GKE.
Next, we will define mission critical services and explore
examples from industries like finance, healthcare, defense, and supply chain.
Then I will highlight the security challenges in managed Kubernetes,
such as insider threats, runtime exploits, and misconfigurations.
We will move on to the four Cs of cloud native security, cloud, cluster,
container, and code, which form the layered defense model.
We will then look at practical security measures for each layer.
First, the cloud boundary, then cluster boundary, container boundary,
and finally, the code boundary.
At the end, I will wrap up with key takeaways.
Kubernetes has become the foundation for modern cloud native infrastructure.
It allows organizations to deploy and scale applications efficiently.
Most enterprises today use managed Kubernetes services such as AKS (Azure
Kubernetes Service), EKS (Amazon Elastic Kubernetes Service),
and GKE (Google Kubernetes Engine).
While these managed services simplify operations and reduce management
overhead, they also introduce new security risks, especially for
sensitive or mission critical workloads. For example, control plane exposure,
misconfigurations, or insider threats can have serious consequences in these
environments. To address these challenges, this work proposes a defense in
depth security model based on the four Cs of cloud native security: cloud,
cluster, container, and code.
Each C represents a layer of defense that builds on the one below it,
creating a comprehensive layered protection strategy.
This makes the system more secure.
Now that we have set the context, let's take a closer look at what I
mean by mission critical services.
And why securing them in Kubernetes environments is so important.
In this slide, I define what I mean by mission critical services.
These are software systems that are absolutely essential
to an organization's operations.
If they fail, the business faces major financial safety
or reputational consequences.
For example, in finance, this includes payment processing platforms or fraud
detection systems that must operate securely and continuously. In healthcare,
mission critical systems include electronic health record platforms
and medical monitoring software that directly affect patient care.
In the defense sector, we are talking about secure communications and
intelligence systems that must remain confidential and resilient. And in the
supply chain, examples include systems responsible for code signing, security
scanning, and package verification.
Each of these use cases handles sensitive or regulated data, so deploying them in
a managed Kubernetes environment demands an extremely high level of security.
Now that we understand what mission critical services are,
I will move on to discuss the security challenges organizations
face when deploying them in managed Kubernetes environments.
Now let's look at some of the key security challenges that arise when
deploying mission critical workloads in managed Kubernetes environments.
First, there are insider threats from administrators, whether they are cloud
admins, cluster admins, or tenant admins.
Each level has privileged access, and if any credentials are compromised,
it can expose sensitive workloads.
Second, misconfigurations and weak authentication are very common. Simple
mistakes like using overly permissive rules, public control plane endpoints,
or missing encryption can open up large attack surfaces.
Third, attackers can inject malware into containers.
This can happen through vulnerable base images, compromised registries
or during the CI/CD pipeline.
Next, runtime exploits and privilege escalation allow attackers to escape
from containers and gain control over the host or cluster resources.
Finally, supply chain compromises are becoming more frequent. Adversaries
target build systems, image registries, or dependencies to insert
malicious code before deployment.
These challenges highlight why managed Kubernetes security requires more
than just the default protections offered by cloud providers.
We need a layered defense in depth approach.
Now let's talk about the defense in depth of security architecture, which forms
the core of my proposed model.
The idea here is simple but powerful: rather than relying on a single
layer of protection, we apply multiple layers of controls across all four
domains of cloud native security: cloud, cluster, container, and code.
Each of these layers reinforces the others, creating a cumulative
defense that significantly reduces the chance of a successful attack.
This model also follows a zero trust approach, which means every request,
user, and workload is verified regardless of whether it originates
inside or outside the environment.
By enforcing zero trust principles at every layer, we can prevent
unauthorized access and lateral movement within the Kubernetes environment.
This layered security design helps mitigate key risks such
as insider threats, supply chain attacks, and runtime exploits,
which are common in managed Kubernetes environments.
What is also important is that this model leverages built in security
integrations available in managed services, meaning organizations can
strengthen their security posture without adding excessive complexity.
Next, I will begin breaking down this architecture by layer.
Starting with how to secure the cloud boundary, which is the
outermost layer in the four Cs model.
This section focuses on securing the cloud layer,
the first C in the cloud native security model.
The hub-and-spoke network topology provides centralized control and
visibility. The hub virtual network manages routing, inspection, and security
policies, while spoke virtual networks isolate workloads for stronger
segmentation and reduced risk.
Strict firewall rules are applied to restrict both inbound
and outbound connections.
Only authorized communication paths are permitted, reducing exposure and
limiting potential attack vectors.
Together, these measures establish strong perimeter protection.
They minimize external exposure, shrink the attack surface, and enforce zero
trust principles at the network boundary.
Using private clusters and private endpoints ensures that the control plane
and container registry communicate only through internal IP addresses,
keeping sensitive management traffic off the public internet.
With the cloud boundary secured, the next layer of protection
focuses on the cluster boundary, where access control and
internal communication security become critical.
Before we talk about the next C, let's quickly recap what security gaps
still exist after the first C, cloud.
The cloud layer focuses primarily on securing the external network boundary.
It provides protection from internet based threats through firewalls, network
segmentation, and controlled connectivity.
At this stage, all inbound and outbound communication is monitored
and validated to ensure that only trusted traffic flows into
or out of the Kubernetes environment.
However, this protection is limited to the network perimeter.
While it successfully reduces exposure to external attacks, it does not
address risks that exist inside the environment.
The next layers, cluster, container, and code, contain components that remain
vulnerable to internal threats, privilege misuse, and runtime exploitation.
Therefore, deeper security measures are required to harden these inner
layers, which will be covered in the following sections of the architecture.
The next focus area is the cluster boundary, where the goal is to
protect internal components such as the API server, etcd storage, nodes,
and inter-pod communication.
Securing the cluster boundary represents the second layer of defense in
the cloud native security model.
At this layer, the focus shifts from external network protection
to securing internal Kubernetes components such as the API server,
etcd storage, kubelet, and pod communication.
First, a private API server combined with RBAC integration limits control
plane exposure by using private endpoints and enforcing strong
authentication. With role-based access control, only authorized users can
interact with cluster resources, following the principle of least privilege.
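As an illustration, a minimal least-privilege setup might look like the sketch below; the namespace, group name, and resource list are hypothetical, and in practice the group would come from the identity provider integrated with the managed cluster.

```yaml
# Sketch only: names and resources are illustrative, not from a real environment.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-readonly
  namespace: payments
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]      # read-only access, nothing more
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-readonly-binding
  namespace: payments
subjects:
  - kind: Group
    name: payments-oncall                # assumed group from the identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: payments-readonly
  apiGroup: rbac.authorization.k8s.io
```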
Next, etcd storage, which stores all the cluster configuration and secrets,
must be protected. Encrypting etcd data at rest using a key management
service (KMS) ensures that even if storage is compromised,
sensitive data remains inaccessible.
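In managed services this is typically switched on through the provider's KMS integration rather than configured by hand; for a self-managed control plane, the underlying mechanism is an API server EncryptionConfiguration pointing at a KMS plugin. A minimal sketch, with an assumed plugin name and socket path:

```yaml
# Sketch of an API server EncryptionConfiguration using a KMS v2 plugin.
# The plugin name and socket path are assumptions for illustration.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - kms:
          apiVersion: v2
          name: cloud-kms-plugin
          endpoint: unix:///var/run/kmsplugin/socket.sock
          timeout: 3s
      - identity: {}    # fallback for reading data written before encryption was enabled
```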
To protect kubelet and node access, anonymous access should be disabled,
ports restricted, and network policies applied to prevent unauthorized
interaction with lower level services.
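For context, the kubelet settings behind these recommendations look roughly like the following sketch; managed providers apply most of them by default, and how node configuration is customized varies by provider, so treat this as illustrative:

```yaml
# Illustrative kubelet hardening settings; managed providers set most of these already.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # reject unauthenticated requests to the kubelet API
  webhook:
    enabled: true       # authenticate callers against the API server
authorization:
  mode: Webhook         # authorize kubelet API requests via the API server
readOnlyPort: 0         # disable the legacy unauthenticated read-only port
```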
Implementing network segmentation through Kubernetes network policies
or cloud native firewalls limits pod-to-pod and namespace
communication, effectively reducing lateral movement within the cluster.
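A common concrete pattern is a default-deny policy per namespace plus narrow allow rules; the namespace, labels, and port below are hypothetical:

```yaml
# Deny all traffic in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
---
# Then allow only the frontend pods to reach the payment API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-payment-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payment-api
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8443
```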
Finally, the OPA Gatekeeper admission controller enforces
compliance and policy rules before workloads are admitted.
Examples include blocking privileged pods, ensuring image signatures, or
verifying configuration standards.
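For example, assuming the community gatekeeper-library templates are installed, a constraint that blocks privileged pods might look like this sketch (the namespace exemption is illustrative):

```yaml
# Requires the K8sPSPPrivilegedContainer ConstraintTemplate from the gatekeeper-library.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: disallow-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["kube-system"]   # example exemption for system components
```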
Together these controls strengthen the internal security posture
of the Kubernetes environment by safeguarding its most critical
management and operational components.
With the cluster layer secured, attention now shifts to the next layer.
What security gaps still exist after the second C, cluster?
At this stage, both the cloud layer and cluster layer have been secured.
The network perimeter is protected, and strong controls such as
private endpoints, RBAC, encryption, and admission policies are in place.
However, these controls primarily protect the infrastructure and
management components of Kubernetes, not the workloads themselves.
The next major security challenge lies within the container layer, where
vulnerabilities can still be exploited even in a well secured cluster.
Attackers may attempt container breakouts, exploit kernel vulnerabilities,
or inject malicious code into running containers through compromised
images or runtime downloads.
The following section explains how confidential VMs and confidential
containers enhance container isolation and prevent host or admin level tampering.
Securing the container boundary.
This section focuses on the container boundary.
The third layer in the cloud native defense in depth model.
While the cloud and cluster layers address external and control plane
threats, the container layer deals with risks inside the workload itself.
Pods and containers are at risk of breakout or kernel exploitation.
If a container is compromised, it can potentially escape to the host,
exploit kernel vulnerabilities, or move laterally across workloads,
threatening the integrity of the entire cluster.
To mitigate this, confidential virtual machines (CVMs) are used
to protect node level workloads.
CVMs utilize hardware backed trusted execution environments
to encrypt memory and CPU state, isolating the virtual machine from the
hypervisor and host and preventing unauthorized access from infrastructure layers.
Building on that, confidential containers extend this protection
to individual containers.
They run inside secure enclaves, ensuring that application code and data in
use remain protected even if the kernel, container runtime, or host is compromised.
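In practice, opting a workload into confidential containers usually comes down to selecting the appropriate runtime class; the class name is platform specific, so the value below (an AKS-style example) is an assumption:

```yaml
# Sketch: schedule a pod onto a confidential container runtime.
# The runtimeClassName value is platform specific and shown here only as an example.
apiVersion: v1
kind: Pod
metadata:
  name: payment-api
spec:
  runtimeClassName: kata-cc-isolation    # confidential runtime class (assumed name)
  containers:
    - name: payment-api
      image: registry.example.com/payment-api:1.4.2   # hypothetical image
      ports:
        - containerPort: 8443
```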
Together, these confidential computing technologies prevent host or admin
tampering ensuring that even privileged users, cloud operators, or compromised
system components cannot access or alter these sensitive workloads.
This approach brings a new level of assurance to running mission
critical services by safeguarding the runtime environment from both
external and insider threats.
With the container boundary secured, the next focus is on the final layer.
Before that, let's recap: what security gaps still exist after the
third C, the container layer?
At this point, most of the external and infrastructure layer
threats have been mitigated.
As you can see in this picture, protections from the hypervisor,
host, or guest agents, and from peer pods or containers are already in
place through confidential computing and strong isolation controls.
However, the remaining risk lies within the code itself:
the application logic running inside the container. Even with the strong
TCB (trusted computing base) provided by confidential containers, the
code that executes at runtime can still introduce new vulnerabilities.
Examples include code that downloads external packages, spawns
unauthorized processes, or executes unverified scripts at runtime,
all of which can bypass existing security boundaries.
This residual risk highlights the need for controls that protect not just the
container environment, but also the behavior of the code running inside it.
The next and final layer, the fourth C, code, addresses this exact gap
by applying runtime integrity controls, system call restrictions,
and continuous monitoring to ensure secure execution.
So the following section explains how the code layer uses measures
like trusted image signing, immutable containers, seccomp, and Falco
to completely mitigate the runtime threats.
Securing the code boundary.
The code boundary represents the fourth and final layer
of the defense in depth model.
This layer focuses on securing the code that executes inside containers
to ensure that runtime behavior remains controlled and trusted.
First, using trusted image registries and image signing ensures that only
verified and cryptographically signed container images are deployed.
This step significantly reduces supply chain risks by preventing
the use of unverified or tampered images.
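How signature enforcement is wired up varies: cloud providers offer features such as GKE Binary Authorization, and policy engines can verify signatures at admission time. As one hedged sketch, assuming Kyverno is installed and images are signed with Cosign (the registry pattern and public key are placeholders):

```yaml
# Sketch: admission policy that only admits images signed with a known Cosign key.
# Assumes Kyverno is installed; the registry pattern and public key are placeholders.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-image-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <cosign public key goes here>
                      -----END PUBLIC KEY-----
```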
For Linux workloads, distroless images provide an additional safeguard.
These images do not contain shells, package managers, or other utilities that
attackers could exploit, reducing the attack surface to only the components
required for the application to function.
Next, immutable containers prevent any modification after deployment.
If updates are needed, a new image must be rebuilt and redeployed,
ensuring that runtime tampering or unauthorized changes cannot occur.
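At the pod level, this is typically reinforced by pinning images by digest and making the container filesystem read-only; a minimal sketch with a placeholder image reference and digest:

```yaml
# Sketch: immutable container pattern; the image reference and digest are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: payment-api
spec:
  containers:
    - name: payment-api
      image: registry.example.com/payment-api@sha256:<digest>  # pinned by digest, changed only by redeploying
      securityContext:
        readOnlyRootFilesystem: true          # block writes to the container filesystem
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: tmp
          mountPath: /tmp                     # scratch space kept on an ephemeral volume
  volumes:
    - name: tmp
      emptyDir: {}
```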
Seccomp adds another layer of protection by restricting the system calls
that processes can make, such as mounting file systems.
Seccomp helps block malicious runtime actions.
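Enabling this can be as simple as requesting the runtime's default seccomp profile and dropping unneeded Linux capabilities; the image name below is hypothetical:

```yaml
# Sketch: apply the runtime's default seccomp profile and drop Linux capabilities.
apiVersion: v1
kind: Pod
metadata:
  name: payment-api
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault      # filter syscalls using the container runtime's default profile
  containers:
    - name: payment-api
      image: registry.example.com/payment-api:1.4.2
      securityContext:
        capabilities:
          drop: ["ALL"]         # remove Linux capabilities the workload does not need
```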
Finally, Falco provides real-time runtime monitoring.
It continuously observes container and system activity,
detecting anomalies such as privilege escalation attempts,
unexpected file access, or the execution of unknown processes.
Alerts are generated whenever suspicious behavior occurs.
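Falco detections are written as rules; a hedged sketch of a custom rule, assuming the default ruleset's spawned_process and container macros and a hypothetical image name:

```yaml
# Sketch of a custom Falco rule; the image name and allowed process are placeholders.
- rule: Unexpected process in payment container
  desc: Alert when anything other than the application binary runs in the payment-api container
  condition: >
    spawned_process and container
    and container.image.repository = "registry.example.com/payment-api"
    and not proc.name in (payment-api)
  output: >
    Unexpected process in payment container
    (user=%user.name command=%proc.cmdline image=%container.image.repository)
  priority: WARNING
```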
Together, these security controls ensure that the code running inside
containers remains verified, immutable, restricted in behavior,
and continuously monitored.
This effectively closes the final security gap in the four Cs model.
With all four layers, cloud, cluster, container, and code, secured,
the defense in depth architecture now provides comprehensive protection
across the entire Kubernetes environment.
What security gaps still exist after the fourth C, code?
At this stage, the defense in depth model has been fully implemented.
Every layer from cloud to code now has its own set of protections.
With immutable containers, system call restrictions, and real time
monitoring in place, the code layer effectively closes the last known gap.
At this point, there are no remaining vulnerabilities within the defined layers.
All the major attack surfaces, from the infrastructure to the runtime,
have been addressed.
The overall security posture becomes significantly stronger because each
C reinforces the one below it, forming a continuous chain of trust.
This means that even if an attacker somehow breaches one layer, the remaining
layers continue to protect the system.
In other words, the combination of all four layers, cloud, cluster,
container, and code, delivers a comprehensive end-to-end defense model
for mission critical workloads.
Next, let's move on to the key takeaways where I will summarize the benefits of
this defense in depth approach and how it enables strong protection without
compromising developer agility.
Key takeaways.
So to wrap things up, here are the main points I want to leave you with.
First, mission critical services, whether payment systems, healthcare
applications, or defense workloads, need more than basic protection.
A defense in depth approach gives these workloads the level
of security they truly require.
Next, it's important to remember that default cloud provider security,
things like IAM, firewalls, and network rules, is only part of the story.
Those features are great, but they don't fully address modern multi-layer
attacks that target the supply chain or runtime environments.
That's where the real strength of defense in depth comes in: by
layering controls across all four Cs, cloud, cluster, container, and code,
and combining that with zero trust, confidential containers, and runtime
monitoring, you can close every gap from infrastructure to application.
In short, defense in depth gives us a way to protect mission critical
workloads without slowing down the pace of innovation in software today.
Thank you for listening.