Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hey everyone, and welcome to Conf42 Golang 2025.
I'm Banini, and today I want to take you on a journey through one of the
most promising areas in cloud security called confidential computing.
We'll see that it's not just a buzzword but a practical, production-ready
approach to running secure workloads, and that Go happens to be an
incredible tool for building in this space.
Just to set the context: I am currently a senior software engineer
at Walmart Global Tech, where I work on scaling secure distributed platforms.
Most of my work sits at the intersection of cloud-native infrastructure,
security, and performance.
I work with Go, Kubernetes, and various encryption frameworks on a
daily basis, and that's why confidential computing caught my attention: it
allows us to design systems where trust is built in, not just assumed, and
Go is a surprisingly good fit for that.
So let's begin by talking about why this even matters.
Over the past decade, our systems have exploded in complexity: microservices,
APIs, cloud, edge, et cetera, right?
These came into popularity and have seen wide usage
across the enterprise industry.
But with that growth, our attack surface has also grown massively.
Today the average enterprise sees a 2.5x annual increase in attack entry points.
And here's the kicker.
Nearly 80% of data breaches involve privileged access misuse.
This means attackers aren't always breaking in from the outside.
They're just misusing what's already trusted.
So with that in mind, traditional perimeter security, like the firewalls,
VPNs, and IAM that are widely used today in enterprise orgs, is
no longer enough.
What we need is a way to protect data even when the infrastructure is compromised.
That's exactly where confidential computing enters the picture.
Now for the foundations. At the core of confidential computing are trusted
execution environments, or TEEs.
Think of them as a lockbox inside your CPU: a place where
sensitive code and data can execute isolated from the rest of the system.
Three things make TEEs powerful.
They are isolation, attestation, and encryption.
Let's talk about each of them.
Isolation: even if someone has root access, they can't peek into the enclave.
Attestation: you can prove that only trusted, verified
code is running inside the enclave.
And lastly, encryption: data remains protected at all stages, at rest,
in memory, and during execution.
Put all of these together and
it's zero trust implemented at the hardware level.
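To make the encryption leg a bit more concrete, here is a minimal sketch in Go of sealing and unsealing data with AES-GCM. It's a stand-in, not real enclave code: a real TEE derives the sealing key from hardware-bound enclave identity, whereas here it is just a random in-memory key.

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal encrypts plaintext with AES-GCM and prepends the nonce.
// In a real TEE the key would be a hardware-derived sealing key;
// here it is simulated with a caller-supplied key.
func seal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// unseal splits off the nonce and decrypts, failing if the
// ciphertext was tampered with (GCM is authenticated).
func unseal(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // AES-256
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	secret := []byte("patient record #42")
	sealed, err := seal(key, secret)
	if err != nil {
		panic(err)
	}
	recovered, err := unseal(key, sealed)
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip ok:", bytes.Equal(secret, recovered))
}
```

The standard library covers the whole round trip; nothing enclave-specific is needed until you swap the random key for a sealing key.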
Now here's the interesting twist.
Why should we use Go in this space?
There are many languages available: Java, Scala, Python, and
low-level systems languages like C, C++, Rust, et cetera.
TEEs are typically associated with C or Rust,
that is, low-level systems languages.
But Go is quietly carving out a niche.
Why? First, Go has a tiny runtime footprint.
This makes it ideal for running inside resource-constrained
environments like enclaves.
Second, Go's type safety and simple syntax reduce the chances of memory
corruption or security bugs.
These are huge wins when working in trusted environments.
And Go's concurrency model, with goroutines, lets us write
parallel workloads that are still lightweight enough to run securely.
With all of these, the balance Go strikes between developer simplicity
and system-level efficiency makes it a great fit for the domain.
So now, coming to the performance breakthroughs.
One of the common criticisms of TEEs used to be performance, and rightly
so: early implementations were slow. Back in 2018, as you see in the graph,
running code inside an enclave could add up to 35% performance overhead.
That's a lot. But just five years later,
we are down to around 4% overhead on modern hardware.
That's a game changer, right?
Combined with Go's efficient memory handling and low overhead,
it finally means that confidential computing
is practical for production workloads, not just prototypes.
So, coding inside enclaves.
So let's talk implementation.
How does this actually work in practice when you're writing Go code for TEEs?
The basic flow usually looks like this.
You initialize the enclave, which sets up an isolated memory space and
generates secure keys. Then comes attestation:
you prove cryptographically that the code inside is what it's supposed to be.
Once verified, you run your sensitive logic, maybe decrypting data,
processing it, or applying analytics.
And finally you seal the results, meaning they're encrypted before
leaving the enclave.
Go's structure actually helps a lot here.
It's easy to build clean state machines, modularize logic,
and write secure tests around these stages.
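The four stages above can be sketched as a small state machine. The stage bodies here are stubs with illustrative print statements; in a real system each would call into an enclave SDK, and none of the names below are a real API.

```go
package main

import "fmt"

// Stage models the enclave lifecycle: init, attest, run, seal.
type Stage int

const (
	Init Stage = iota
	Attest
	Run
	Seal
	Done
)

type enclave struct{ stage Stage }

// step advances the state machine one stage at a time, refusing to
// skip ahead: sensitive logic must never run before attestation passes.
func (e *enclave) step() error {
	switch e.stage {
	case Init:
		fmt.Println("init: isolated memory reserved, keys generated")
	case Attest:
		fmt.Println("attest: measurement verified by relying party")
	case Run:
		fmt.Println("run: sensitive logic executing inside the enclave")
	case Seal:
		fmt.Println("seal: results encrypted before leaving the enclave")
	default:
		return fmt.Errorf("no transition from stage %d", e.stage)
	}
	e.stage++
	return nil
}

func main() {
	e := &enclave{}
	for e.stage != Done {
		if err := e.step(); err != nil {
			panic(err)
		}
	}
}
```

Encoding the order in a state machine makes it easy to unit-test that, for example, Run is unreachable without a successful Attest.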
So, enclaves in a microservice architecture.
Now, not everything needs to run in an enclave, right?
And that's the key.
Let's take an example.
In a microservice architecture, there might be many services.
Some handle very sensitive data, which needs to be inside the enclave,
and some are fine running outside; even if those are
compromised, there is not much we would lose, right?
So keeping that in mind, in a microservice architecture, we
take a targeted approach.
For secure data ingestion, we encrypt data at entry.
For sensitive logic, like working with PII, personally identifiable
information, we run it inside the enclave.
Supporting services can handle blind processing
on encrypted payloads.
And finally, the authorized delivery phase decrypts results
only for verified consumers.
This modular security lets you protect only what's critical,
minimizing the performance hit while maximizing data protection.
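One way to keep that split explicit in code is to tag payloads and route them, so only sensitive data ever reaches the enclave path. This is a sketch under assumptions: both handlers are hypothetical stand-ins, not real service code.

```go
package main

import "fmt"

// payload tags data so routing can decide where it may be processed.
type payload struct {
	body      string
	sensitive bool // e.g. contains PII
}

// route sends sensitive payloads to the enclave handler and everything
// else to an ordinary, cheaper service path.
func route(p payload) string {
	if p.sensitive {
		return handleInEnclave(p)
	}
	return handleOutside(p)
}

// Both handlers are illustrative stubs; a real system would dispatch
// to an enclave runtime or a plain microservice respectively.
func handleInEnclave(p payload) string { return "enclave:" + p.body }
func handleOutside(p payload) string   { return "plain:" + p.body }

func main() {
	fmt.Println(route(payload{body: "ssn-record", sensitive: true}))
	fmt.Println(route(payload{body: "metrics", sensitive: false}))
}
```

The design choice is that sensitivity is decided at the type level at ingestion time, not rediscovered ad hoc deep in each service.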
So, one question I've heard a lot in different
discussion channels around this topic is: can we actually integrate
this into DevOps pipelines?
And the answer is yes, and the tooling is maturing really fast.
You can enforce attestation checks before workloads are deployed,
Kubernetes operators can now orchestrate enclave workloads using extended
CRDs, and tools like HashiCorp Vault support enclave-backed secret
distribution.
And with hardware support, secure boot, TPMs, and remote attestation,
it's becoming feasible to run trusted workloads at scale.
All of this opens up confidential computing to cloud-native
workflows, not just tightly controlled environments,
and that's a major win: it gives us seamless integrations
with cloud-native pipelines.
And then let's look at a real-world scenario:
analytics on encrypted data.
The flow would look like this. We ingest encrypted data, say
from users or edge devices.
The data flows into a TEE-based analytics engine, and inside the
enclave the data is decrypted, analyzed, and the results are sealed
again. Only endpoints with proper identity and attestation
can read the final output.
What does this give us?
The flow we just walked through creates a
zero-trust analytics pipeline,
perfect for industries like healthcare, finance, or multi-tenant SaaS,
where data separation and privacy are critical.
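End to end, that pipeline shape looks like the sketch below. To keep the focus on the flow rather than the cryptography, it uses a deliberately toy XOR "cipher" as a placeholder (never use XOR for real data protection), and a boolean standing in for a real attestation check.

```go
package main

import "fmt"

// xorCipher is a toy placeholder for real encryption, used only so
// the pipeline's shape is visible. Do not use XOR ciphers in practice.
func xorCipher(key byte, data []byte) []byte {
	out := make([]byte, len(data))
	for i, b := range data {
		out[i] = b ^ key
	}
	return out
}

// analyzeInEnclave stands for the TEE step: decrypt readings, compute
// an aggregate, and seal the result so raw data never leaves in the clear.
func analyzeInEnclave(key byte, encrypted [][]byte) []byte {
	sum, n := 0, 0
	for _, ct := range encrypted {
		for _, b := range xorCipher(key, ct) { // decrypt inside the enclave
			sum += int(b)
			n++
		}
	}
	avg := byte(sum / n)
	return xorCipher(key, []byte{avg}) // seal the aggregate
}

// release decrypts the sealed result only for attested consumers.
func release(key byte, sealed []byte, attested bool) (byte, bool) {
	if !attested {
		return 0, false
	}
	return xorCipher(key, sealed)[0], true
}

func main() {
	key := byte(0x5a)
	// Edge devices send encrypted readings into the pipeline.
	encrypted := [][]byte{xorCipher(key, []byte{10, 20}), xorCipher(key, []byte{30})}

	sealed := analyzeInEnclave(key, encrypted)

	if avg, ok := release(key, sealed, true); ok {
		fmt.Println("attested consumer reads average:", avg)
	}
	_, ok := release(key, sealed, false)
	fmt.Println("unattested consumer blocked:", !ok)
}
```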
Alright, now the engineering challenges. Of course, this isn't a silver bullet.
The first challenge is side-channel attacks.
These are still a risk.
They require careful engineering and mitigation at both the
hardware and software layers.
Then comes remote attestation, which becomes complex across large
clusters or hybrid environments.
Key distribution is especially hard when dealing with isolated nodes:
secure channels must be established without exposing secrets.
And designing minimal APIs between enclave and non-enclave code is
critical to reduce data-leakage risks.
These aren't easy problems, but the open-source
community is making real progress, and projects like Enarx and Gramine,
along with secure key-orchestration layers, are paving the way.
All right, so with all that, the next question is:
where are we headed?
The future is shaping up to be even more powerful.
We are seeing multi-party computation that will let us analyze joint data sets
without ever exposing raw data.
Then we have confidential containers, which will allow us to deploy secure
workloads with minimal friction. Specialized chips and TEE acceleration
will bring near-native performance.
And zero-knowledge proofs will let us verify the correctness of computations
without revealing any inputs. With all of this, Go as a language is evolving
right along with it: it has the simplicity, speed, and structure to be at
the heart of trusted systems in this new era.
Great.
And with that, thank you.
I hope this session gave you a clear view into why confidential computing
matters and how Go can be a key enabler for building secure systems
that are cloud native and future ready.
If you are curious, experimenting, or building something in this space,
I would love to hear from you. You can find me on LinkedIn
at linkedin.com/in/banini,
or you can scan the QR code shown here and you'll land
directly on my LinkedIn profile.
Thanks for listening, and enjoy the rest of Conf42 Golang 2025.