Transcript
This transcript was autogenerated. To make changes, submit a PR.
Good morning everyone.
I'm Priya Mug from Stanford Business School.
Today I'll be discussing how to scale trust in VM security with confidential computing and zero trust architecture.
This topic is important in today's world because we are at a critical
inflection point in cloud computing where innovation is outpacing
the traditional security models.
My research and product experience in the cloud domain focus on bridging this gap: how to make virtualized cloud environments not only performant, but also inherently safe, secure, and trustworthy.
This work is what led to my paper and ongoing contributions in confidential
and trusted VM architectures.
As cloud adoption reaches nearly 94% of enterprises, the volume
of sensitive workloads on public infrastructure has exploded.
More than 75 percent of that data is confidential: financial records, patient information, even proprietary algorithms.
Traditional perimeter-based VM security simply wasn't built for this scale or exposure.
The threat landscape now includes cross tenant attacks, memory
scraping, and hypervisor breaches.
The cloud's success depends completely on trust.
If customers can't be sure their data is safe while it is being processed, they will hold back mission-critical workloads. My goal here is to examine how we can extend protection beyond storage and transport into the active compute layer.
Here's our roadmap for the talk today.
We'll start with confidential computing, which protects data while it's being used.
Then we move to trusted launch, ensuring that VMs boot only from verified and uncompromised components.
We'll connect these to zero trust architecture, which assumes no
implicit trust inside the network.
And finally, we'll take a look at the practical implementation,
how to bring these principles together in real environments with
compliance and performance in mind.
Confidential computing addresses what many call the final frontier of data
security, protecting the data in use.
It leverages hardware-based Trusted Execution Environments, or TEEs, which encrypt data even while it is being processed. This means that sensitive computations happen inside isolated, encrypted enclaves, invisible to the cloud provider or the system administrators. Unlike traditional encryption, which stops once the data is loaded into memory, TEEs extend protection throughout the entire lifecycle of the workload.
This changes the security equation completely.
It enables industries like healthcare, finance, and AI to move
sensitive workloads to the cloud that they previously couldn't risk.
And my work here highlights how hardware level innovation is unlocking new
business models for data collaboration.
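The idea above can be sketched as a toy simulation. This is purely illustrative: the class name, the XOR "sealing", and the key handling are my own stand-ins, whereas a real TEE enforces the isolation in hardware and the key never leaves the CPU package.

```python
import os

class ToyEnclave:
    """Toy stand-in for a hardware TEE; NOT real encryption."""

    def __init__(self):
        self._key = os.urandom(32)  # stands in for a hardware-held key

    def _xor(self, data: bytes) -> bytes:
        # toy XOR keystream; applying it twice restores the input
        return bytes(b ^ self._key[i % 32] for i, b in enumerate(data))

    def seal(self, plaintext: bytes) -> bytes:
        return self._xor(plaintext)

    def process(self, sealed: bytes) -> int:
        # plaintext exists only inside this method ("inside the enclave")
        record = self._xor(sealed)
        return sum(record)  # some computation on the sensitive data

enclave = ToyEnclave()
sealed = enclave.seal(b"patient-record")
# outside the "enclave", only ciphertext is visible:
assert sealed != b"patient-record"
assert enclave.process(sealed) == sum(b"patient-record")
```

The point of the sketch is the shape of the API: callers hand in sealed data and get back a result, and the plaintext is never visible to them.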
Let's talk about the leading TEE technologies.
There are two major technologies in this space.
The first one is AMD SEV-SNP. SEV stands for Secure Encrypted Virtualization and SNP for Secure Nested Paging; together they provide per-VM memory encryption with integrity checks. This prevents hypervisor-level attacks and offers attestation of boot integrity.
The second one is Intel TDX, or Trust Domain Extensions, which isolates workloads at the CPU level, creating trust domains enforced by hardware boundaries.
It supports remote attestation and adds minimal performance overhead.
Now, both of these are foundational for building verifiable
confidential workloads in the cloud.
These technologies are the foundation for confidential computing, and in
my research and cloud experience at Azure, we've had to evaluate
and integrate both technologies at hyperscale, ensuring compatibility,
performance, and policy enforcement.
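A first step when evaluating either technology is simply detecting what the platform exposes. The sketch below parses Linux `/proc/cpuinfo` flags; the specific flag names (`sev_snp`, `tdx_guest`) are assumptions about how current kernels surface these features, so check your kernel documentation before relying on them.

```python
# Sketch: detect TEE capability from Linux /proc/cpuinfo text.
# Flag names "sev_snp" and "tdx_guest" are assumptions, not guaranteed.
def detect_tee(cpuinfo_text: str) -> dict:
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {
        "amd_sev_snp": "sev_snp" in flags,
        "intel_tdx": "tdx_guest" in flags,
    }

sample = "flags\t: fpu vme sev sev_es sev_snp"
print(detect_tee(sample))  # {'amd_sev_snp': True, 'intel_tdx': False}
```

On a real machine you would pass in the contents of `/proc/cpuinfo` instead of the sample string.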
This slide represents the core innovation that I've been part of
translating into Azure's VM ecosystem.
Now let's look at how organizations are using these in practice.
Secure multi-party analytics enables collaboration across organizations without exposing raw data.
Confidential AI training allows medical or financial data to be used for model training securely. Protected key management ensures cryptographic keys never appear in plaintext.
And why does all this matter? Because these aren't just theoretical; they are how regulated industries can embrace the cloud responsibly.
The practical adoption stories I've worked on, particularly with financial workloads, show how confidential computing builds confidence across compliance-heavy sectors.
Let's dive into what Trusted Launch is.
It is all about ensuring boot integrity. TEEs protect data once a VM is running; Trusted Launch secures what happens before that, the boot process itself. It ensures that the VM starts only with verified, untampered components, using TPM-based cryptographic attestation.
Think of it as a chain of trust from hardware to firmware to
the operating system itself.
If any component is modified or injected with malicious code, the attestation fails, preventing the VM from launching. That's how we eliminate rootkits before workloads even start. Boot-time verification closes one of the biggest historical security gaps in virtualization.
My research examined how this feature could be standardized across heterogeneous
fleets, which directly informed how large scale clouds can offer verifiable
trust from power on to runtime.
Let's look at how trusted launch works.
Trusted Launch is anchored on four steps. First, the hardware root of trust: a TPM 2.0 chip stores cryptographic measurements. Second, measured boot: each boot component is recorded in platform configuration registers. Third, remote attestation: those measurements are compared to known-good baselines. And finally, automated enforcement: policies in the orchestrator ensure that only trusted VMs can run.
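The four steps above can be sketched in a few lines. The extend rule (new PCR value = SHA-256 of the old value concatenated with the component's hash) mirrors how a TPM accumulates measurements, though the component names here are made up for illustration.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style PCR extend: new PCR = SHA-256(old PCR || SHA-256(component))
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measured_boot(components) -> bytes:
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# Remote attestation compares the final value to a known-good baseline.
baseline = measured_boot([b"firmware-v2", b"bootloader-v5", b"kernel-6.8"])
good     = measured_boot([b"firmware-v2", b"bootloader-v5", b"kernel-6.8"])
tampered = measured_boot([b"firmware-v2", b"rootkit-bootloader", b"kernel-6.8"])
assert good == baseline        # measurements match: VM may launch
assert tampered != baseline    # attestation fails: launch is blocked
```

Because each extend folds in the previous value, the final PCR depends on both the contents and the order of every boot component, which is what makes the chain of trust tamper-evident.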
Together, these create a verifiable and repeatable security posture, one that scales from hundreds to thousands to millions of VMs.
This architecture makes cloud trust provable. Enterprises and governments increasingly require cryptographic proof that workloads haven't been tampered with, and Trusted Launch is a critical part of meeting that requirement.
Here's another scenario: DevOps integration for Trusted Launch. Security is strongest when it's automated, and integrating Trusted Launch into CI/CD pipelines makes attestation part of everyday deployment.
You can define policy as code, specifying which images must be verified, and then use automated attestation to validate each VM before containers are deployed.
And with continuous monitoring, the system can alert in real time if integrity drifts. This moves VM trust from a one-time check to a continuous process embedded in the DevOps culture.
The approach turns security from a manual check into an operational standard.
And it's a key part of what I've advocated: embedding security in delivery pipelines rather than adding it as an afterthought.
It's what enables hyperscale trust.
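A pipeline gate of this kind can be sketched as below. The image names, digests, and the boolean attestation verdict are all hypothetical placeholders, not any specific cloud's API.

```python
# Hypothetical policy-as-code gate for a CI/CD deployment stage:
# only attested images from an explicit allow-list may deploy.
TRUSTED_IMAGES = {
    "ubuntu-22.04-gen2": "digest-aaa",   # illustrative digests
    "windows-2022-gen2": "digest-bbb",
}

def may_deploy(image: str, digest: str, attestation_passed: bool) -> bool:
    # default deny: unknown image, wrong digest, or failed
    # attestation all block the deployment
    return attestation_passed and TRUSTED_IMAGES.get(image) == digest

assert may_deploy("ubuntu-22.04-gen2", "digest-aaa", True)
assert not may_deploy("ubuntu-22.04-gen2", "digest-aaa", False)  # failed attestation
assert not may_deploy("ubuntu-22.04-gen2", "digest-zzz", True)   # wrong digest
```

Expressing the rule as code is what lets it run automatically on every deployment instead of relying on a manual review.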
Now let's connect this to zero trust.
A principle summarized as never trust, always verify.
In cloud environments, that means no VM is inherently trusted, even
if it is on the same network.
Every request, every access, and every connection needs to be verified.
Traditional perimeter security assumes the inside is safe.
Zero trust assumes breach.
It continuously authenticates and authorizes based on identity
and context and not location.
This approach greatly limits the attack surface and insider risk.
There are three principles to bring zero trust to life.
Number one, identity-aware microsegmentation: policies are tied to workload identities instead of IP addresses, so VMs communicate only with explicitly authorized services.
The second one is continuous verification.
Every access request is re-authenticated in real time using cryptographic identity and dynamic policy checks.
The third one is real-time behavioral monitoring. Any abnormal pattern, such as unexpected network connections or privilege escalations, triggers an automated response.
Together, these mechanisms create a self-defending infrastructure.
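The first principle, identity-aware segmentation, can be sketched as a default-deny lookup keyed on workload identity. The service names are illustrative only.

```python
# Toy identity-aware microsegmentation: rules key on workload
# identity, never on IP addresses. Service names are made up.
ALLOWED_FLOWS = {
    ("web-frontend", "orders-api"),
    ("orders-api", "payments-db"),
}

def is_allowed(source_identity: str, target_identity: str) -> bool:
    # default deny: any flow not explicitly authorized is blocked
    return (source_identity, target_identity) in ALLOWED_FLOWS

assert is_allowed("web-frontend", "orders-api")
# even a compromised frontend cannot reach the database directly:
assert not is_allowed("web-frontend", "payments-db")
```

Because the rules name identities rather than addresses, they stay valid as VMs are rescheduled and IPs change, which is what makes this model practical at cloud scale.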
Now let's tie this back to practical benefits.
The first is lateral movement prevention: microsegmentation limits how far an attacker can move if they do get in. Even with compromised credentials, they cannot pivot across VMs.
Insider threat mitigation comes from continuous verification and behavioral monitoring, which catch anomalies early.
For example, an admin accessing unusual data volumes at odd hours.
The combination of confidential computing, trusted launch, and zero
trust builds a multi-layered defense that reduces risk significantly.
Security is only valuable if it's practical. It must also align with regulatory and performance needs. For PCI DSS 4.0, these architectures help protect cardholder data and enforce network segmentation. For HIPAA, TEEs safeguard protected health information while it's being processed. And for GDPR, they enable data minimization and encryption requirements through hardware isolation.
From a performance standpoint, TEE overhead is modest, usually two to ten percent depending on the workload type. Remote attestation adds hundreds of milliseconds, but it can be optimized by caching results and using TEEs selectively.
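The caching optimization can be sketched as a small TTL cache of attestation verdicts. The five-minute TTL and the API shape are assumptions for illustration, not any product's interface.

```python
import time

class AttestationCache:
    """Sketch: cache attestation verdicts to amortize their latency."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._entries = {}  # vm_id -> (verdict, timestamp)

    def put(self, vm_id: str, verdict: bool) -> None:
        self._entries[vm_id] = (verdict, time.monotonic())

    def get(self, vm_id: str):
        entry = self._entries.get(vm_id)
        if entry is None:
            return None  # never attested: do a full remote attestation
        verdict, ts = entry
        if time.monotonic() - ts > self.ttl:
            return None  # stale entry: force re-attestation
        return verdict

cache = AttestationCache(ttl_seconds=300)
cache.put("vm-123", True)
assert cache.get("vm-123") is True   # hit: skip the round trip
assert cache.get("vm-999") is None   # miss: attest from scratch
```

The TTL bounds how long a stale verdict can be trusted, trading a little freshness for most of the latency savings.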
And why does all this matter? Because many innovations fail at the compliance or performance stage. By grounding my research in measurable impact, I've been able to influence enterprise adoption strategies and cost models that prove security does not have to slow down innovation.
This space is evolving quickly.
And I'd like to call out some key innovations as part
of this discussion today.
There are three: post-quantum cryptography, homomorphic encryption, and ML-based anomaly detection. Post-quantum cryptography covers quantum-resistant algorithms being built into TEEs to future-proof against quantum decryption risk. Homomorphic encryption allows computations on encrypted data without ever decrypting it; it's still in the early stages, but it could enable end-to-end encrypted analytics.
Third, machine learning models are now being used to detect zero-day attacks by analyzing VM behavior in real time.
Now all of these advances will make trust scalable and adaptive as the threat landscape changes.
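To make the homomorphic property concrete: textbook RSA is multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. This toy (tiny primes, no padding) only illustrates the property; fully homomorphic schemes generalize it to arbitrary computation.

```python
# Textbook RSA: Enc(a) * Enc(b) mod n decrypts to a * b (mod n).
p, q = 61, 53                        # toy primes; real keys use 2048+ bits
n = p * q                            # modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n   # computed on ciphertexts only
assert dec(product_cipher) == a * b      # 42, recovered without exposing a or b
```

The untrusted party only ever handles `enc(a)` and `enc(b)`, yet the key holder can decrypt a meaningful result, which is exactly the property that would let a cloud run analytics on data it cannot read.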
And finally, to summarize, our core challenge is scaling trust.
In a world where infrastructure is shared and borders are fluid, confidential computing, trusted launch, and zero trust aren't buzzwords; they are the foundation of a secure and resilient digital ecosystem.
Now this body of work from research to implementation represents how I have
contributed to advancing the security posture of global cloud infrastructure.
It's about enabling innovation without fear, and that's the mission I continue to drive.
Thank you all for joining me today, and I hope you had a
wonderful, insightful session.
Thank you for your time.