Rust in the Cloud: Building AWS Applications That Scale
Abstract
Want to leverage Rust’s performance and safety in AWS? This practical session demonstrates how the AWS SDK for Rust enables developers to build reliable, cost-efficient cloud applications using familiar patterns.
We’ll explore the SDK’s modular architecture, individual service crates for S3, DynamoDB, SQS, Lambda, and more, all powered by code generation. You’ll see how unified configuration through aws-config handles credentials across profiles, SSO, IAM roles, and AssumeRole scenarios. The SDK’s async-first design with Tokio integration provides built-in retry logic, pagination, streaming support, and resource waiters.
Through real-world patterns, we’ll map these capabilities to common architectures: file processing pipelines with S3 + SQS, serverless APIs on Lambda, containerized services on ECS/EKS, and high-performance data access with DynamoDB. We’ll cover observability through tracing integration, security best practices with least-privilege IAM and KMS, plus deployment strategies for creating small, fast binaries.
Key takeaways: Understanding when to choose which AWS service, how Rust’s compile-time safety prevents runtime surprises through strongly-typed builders, and practical techniques for building production-ready applications that take advantage of Rust’s performance characteristics without getting lost in lifetime management.
You’ll leave equipped to build scalable, reliable AWS applications in Rust, focusing on cloud architecture patterns.
Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
Welcome to Conf42 Rustlang 2025. My name is Vin Kana, and I'm thrilled to be here to talk about something I have been personally passionate about: combining the power of Rust with AWS cloud services.
Over the next several minutes, we will explore how Rust's performance, safety, and efficiency make it a great choice for building cloud-native applications that scale reliably. Instead of just theory, we'll walk through a flowing example system step by step, so you leave with a clear mental model of how this can apply to real-world workloads.
When you build on AWS, scale is always central. Applications may need to handle millions of events, thousands of concurrent requests, and unpredictable workloads. At that level, every millisecond of latency and every megabyte of memory counts.
Rust combines predictable performance, similar to C or C++, with compile-time guarantees that eliminate common runtime issues. For Lambda functions or ECS tasks, small binaries reduce cold start times and lean memory use cuts costs. This blend of performance and safety makes Rust an excellent choice when you care about efficiency and reliability.
Let's compare Rust against other popular languages on AWS. Go is strong for Lambda and containers thanks to goroutines, but its runtime is bigger than Rust's. Java has a massive ecosystem, but cold starts and memory overhead are costly. Node.js offers fast development but struggles with CPU-bound workloads. Python is unbeatable for prototyping and ML libraries, though it slows down at scale. C++ delivers raw power, but manual memory management makes it error-prone. Rust balances performance near C++ with safety and compile-time guarantees, and that combination makes Rust stand out for cloud workloads.
Where does Rust shine? For Lambda, small binaries mean faster cold starts. The borrow checker enforces memory safety, preventing bugs before deployment. With Tokio, you can run thousands of async calls to AWS services without overwhelming threads, which reduces CPU and memory use, lowering costs. In Lambda, ECS, and EKS, strong typing ensures requests are valid at compile time, preventing misconfigurations from ever reaching production.
But Rust isn't perfect. Compile times can be long in large projects. Ownership and lifetimes have a steep learning curve. The ecosystem is maturing, but Java and Python have more polished libraries. Tooling for Lambda is improving, but Python and Node still offer more templates, and Rust favors explicitness over dynamic metaprogramming, which is safer but less flexible for quick experiments. Being aware of these trade-offs makes the adoption decision realistic.
The AWS SDK for Rust provides strong support. Each service is its own crate, so you import only what you need. Shared configuration handles credentials and regions securely. Built on Tokio, the SDK is async-first, making it efficient for I/O. With retries, pagination, and streaming built in, developers can focus on value rather than boilerplate.
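As a rough sketch of what that setup looks like (assuming Cargo dependencies on the aws-config, aws-sdk-s3, aws-sdk-dynamodb, and tokio crates; exact APIs vary by SDK version):

```rust
use aws_config::BehaviorVersion;

#[tokio::main]
async fn main() {
    // One shared configuration resolves credentials and region from the
    // environment, profiles, SSO, or IAM roles -- the same chain everywhere.
    let config = aws_config::load_defaults(BehaviorVersion::latest()).await;

    // Each service lives in its own crate; construct only the clients you need.
    let s3 = aws_sdk_s3::Client::new(&config);
    let dynamo = aws_sdk_dynamodb::Client::new(&config);

    println!("resolved region: {:?}", config.region());
    let _ = (s3, dynamo);
}
```

Because every client borrows the same resolved configuration, switching from a local profile to an IAM role in production requires no code changes.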
To make this concrete, let's build a file ingestion pipeline. A user uploads to S3, the event flows through EventBridge into SQS, Rust workers process it, DynamoDB stores the results, and observability keeps watch. This example matters because it reflects real-world patterns and highlights Rust's value at each step.
Step one: uploading files. With Rust's SDK, you gain compile-time assurance that requests are well-formed, and async lets you upload many files concurrently without blocking. This builds confidence that what you send to AWS S3 is correct and efficient.
Next, triggering processing. EventBridge sends S3 events into SQS. Rust workers consume messages safely, deserializing them with serde. Async workers scale out efficiently, letting you process hundreds of events in parallel without adding threads. This is where Rust's async runtime shines.
For compute, we apply Rust to Lambda. Rust binaries are small, giving faster cold starts. Execution is predictable and efficient. With async, a single function can manage multiple tasks. Compared to Java or .NET runtimes, Rust Lambdas are lighter, cheaper, and faster.
After processing, we persist results in DynamoDB. Rust's builders enforce correctness, and async queries let you scale to thousands of operations. The system stays responsive.
Even under heavy load, observability is essential. With the tracing crate, you add structured spans to async workflows, pair them with OpenTelemetry, and stream metrics into CloudWatch. For errors, the anyhow and thiserror crates capture detailed context. Together, this makes debugging faster and operations smoother.
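A sketch of the tracing side, assuming the tracing and tracing-subscriber crates (the latter with its "json" feature) alongside Tokio:

```rust
use tracing::{info, instrument};

// #[instrument] opens a span per call and records the arguments on it,
// so every log event inside carries the object key as structured context.
#[instrument]
async fn process_object(object_key: &str) {
    info!("processing started");
    // ... do the work ...
    info!("processing finished");
}

#[tokio::main]
async fn main() {
    // JSON-formatted events pair naturally with CloudWatch Logs queries.
    tracing_subscriber::fmt().json().init();
    process_object("reports/a.csv").await;
}
```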
No AWS system is complete without security and deployment. Use IAM with least-privilege roles, and KMS to encrypt sensitive data. For Lambda, cross-compile to musl for small static binaries. In containers, multi-stage builds strip unnecessary dependencies, shrinking image size and improving security.
To recap: Rust's compile-time safety prevents runtime bugs, its async-first design makes it perfect for I/O-bound workloads, the modular SDK is easy to adopt, and end-to-end patterns map naturally. Compared to other languages, Rust's blend of performance and safety is unique.
In closing, Rust is ready for the cloud. It combines system-level performance with strong safety guarantees, making it ideal for AWS workloads at scale. The SDK provides the tools, and our example showed Rust across the AWS landscape. If you're building next-generation cloud-native apps, Rust is not just experimental, it's production-ready.
Thank you and I hope this inspires you to try Rust in your AWS projects.