Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and thank you for tuning in.
I'm Pravin Kana, a senior software engineer.
In this session, I will walk you through a simple but powerful idea.
What if you could define and run an entire quantum experiment
from a single JSON file?
No more cloning notebooks, updating brittle scripts, or writing pipelines by hand.
Instead, we'll explore a schema-driven approach that helps researchers move faster, build repeatable processes, and gain built-in observability, all without writing front-end or orchestration code.
Quantum research moves fast: new hardware, circuit designs, and error mitigation techniques emerge constantly, but our tooling often lags behind. Copying notebooks creates fragile dependencies, documentation drifts as experiments evolve, and auditing becomes nearly impossible.
This slide captures the problem: constant change in parameters and hardware, hidden dependencies and logic buried in scripts, inconsistent documentation, and a serious compliance headache for regulated industries.
To solve these challenges, we built a schema-driven framework.
At its core is a single JSON schema, a machine-readable contract that defines what an experiment can and cannot accept.
From this schema, we auto-generate three things.
First, a React-based web form, so scientists never touch HTML or JavaScript.
Second, a Step Functions workflow to execute circuits and post-process data, all serverless.
Third, a lineage tracker: every run is tagged, versioned, and traceable.
This means researchers write just a manifest, not a monolith of scripts.
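To make that concrete, here is a minimal sketch of what such a manifest and its upfront validation could look like in Python. The field names (device, circuit, shots, output_path), the constraints, and the bucket paths are illustrative assumptions rather than the project's actual contract; the sketch uses the standard jsonschema package.

```python
# Minimal sketch, assuming a hypothetical manifest layout; field names and
# constraints are illustrative, not the project's actual schema.
import json
from jsonschema import validate, ValidationError

EXPERIMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "device": {"type": "string", "enum": ["sv1_simulator", "ionq_qpu"]},
        "circuit": {"type": "string"},  # e.g. an S3 key pointing at an OpenQASM file
        "shots": {"type": "integer", "minimum": 1, "maximum": 1_000_000},
        "output_path": {"type": "string", "pattern": "^s3://"},
    },
    "required": ["device", "circuit", "shots", "output_path"],
    "additionalProperties": False,  # unknown parameters are rejected upfront
}

manifest = json.loads("""
{
  "device": "sv1_simulator",
  "circuit": "s3://example-bucket/circuits/vqe_ansatz.qasm",
  "shots": 10000,
  "output_path": "s3://example-bucket/results/run-001/"
}
""")

try:
    validate(instance=manifest, schema=EXPERIMENT_SCHEMA)
    print("Manifest is valid; safe to hand off to the workflow.")
except ValidationError as err:
    print(f"Configuration error caught before any hardware time is spent: {err.message}")
```

Because every run starts from a manifest like this, tracking lineage reduces to recording the manifest and the schema version it was validated against.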
Here is how it all fits together.
The JSON schema is stored in AWS S3 with Object Lock enabled, ensuring it is versioned and immutable.
A CI/CD pipeline triggers two generators: one builds the React portal, the other compiles a Step Functions state machine.
The state machine calls Amazon Braket to run quantum jobs, invokes Fargate containers for post-processing, and stores results in S3. Observability is handled by OpenTelemetry, with dashboards in Grafana.
This system is modular, serverless, and designed for scale and traceability.
On the right side, you can see the architecture diagram and the components involved.
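As a rough illustration of what the compiled state machine might contain, here is a fragment of an Amazon States Language definition, written as the Python dictionary a generator could emit. The state names, the parameter wiring, and the use of the AWS SDK integration for Braket are assumptions for illustration, not the generator's actual output; details such as networking for the Fargate task are omitted.

```python
# Hypothetical fragment of a generated Step Functions definition (Amazon States
# Language as a Python dict). State names and parameters are illustrative only.
state_machine_fragment = {
    "StartAt": "RunQuantumTask",
    "States": {
        "RunQuantumTask": {
            "Type": "Task",
            # Assumed AWS SDK service integration for Amazon Braket
            "Resource": "arn:aws:states:::aws-sdk:braket:createQuantumTask",
            "Parameters": {
                "DeviceArn.$": "$.device_arn",
                "Shots.$": "$.shots",
                "OutputS3Bucket.$": "$.output_bucket",
                "OutputS3KeyPrefix.$": "$.output_prefix",
                "Action.$": "$.circuit_action",
                "ClientToken.$": "$$.Execution.Name",  # idempotency per execution
            },
            "Next": "PostProcessOnFargate",
        },
        "PostProcessOnFargate": {
            "Type": "Task",
            # Run the post-processing container on Fargate and wait for completion
            # (network configuration and container overrides omitted for brevity)
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "Cluster": "quantum-postprocess",
                "LaunchType": "FARGATE",
                "TaskDefinition": "postprocess-task",
            },
            "End": True,
        },
    },
}
```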
Let's compare traditional workflows with our schema-driven approach.
On the left is the notebook model. Everything is hand-coded: the device, the circuit, the shot count, and the output path. It works, but it's fragile and error-prone. Also, every time a parameter has to be changed, the notebook has to be updated.
On the right is a compact JSON schema, about 30 lines that describe the entire experiment setup.
The schema is used across the UI, the workflow, and the validation logic.
Instead of cloning and modifying notebooks, researchers fill out a form that is guaranteed to be valid.
This approach unlocks four big benefits.
First, parameter exploration becomes safe and structured: no invalid values, no runtime surprises.
Second, collaboration improves. Schemas are versioned in Git, with pull requests and diffs.
Third, configuration errors go to zero, because schema validation runs upfront.
Fourth, onboarding is dramatically faster. New team members don't need to learn your SDKs or internal scripts; they simply open a form and run.
You get governance without sacrificing speed.
To test this in the real world, we ran a 1-million-shot VQE sweep.
This bar chart shows the results across five metrics.
Let's walk through it.
On the far left is setup time, the time to prepare a single parameter variant, which dropped from 18 minutes with notebooks to just three minutes with the schema portal.
Next is wall-clock time, the total runtime for the entire suite, which dropped from 13.4 hours to 11.8 hours. That improvement comes from better concurrency and simulator fallback when QPU queues are long.
Third is configuration errors.
We went from seven errors in the baseline to zero, which is a huge win.
Then cost.
The schema workflow saved around 200 USD largely due to more efficient
execution and fewer retries.
Finally, learnability: researchers rated the schema portal 4.8 out of 5, compared to 3.1 for the notebooks. Learning the portal is basically straightforward for researchers.
These aren't synthetic benchmarks. They are based on real-world jobs running on Braket, with simulator fallback and post-processing in Fargate.
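For a feel of how simulator fallback might look in code, here is a minimal Python sketch using the Amazon Braket SDK. The device ARNs, the toy circuit, and the decision rule are assumptions; in particular, this simplified version only checks whether the QPU is online, whereas the talk describes falling back when QPU queues are long.

```python
# Minimal sketch of a simulator-fallback policy, assuming the Braket SDK and
# AWS credentials are configured. ARNs and the policy itself are illustrative.
from braket.aws import AwsDevice
from braket.circuits import Circuit

QPU_ARN = "arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1"  # example QPU
SIMULATOR_ARN = "arn:aws:braket:::device/quantum-simulator/amazon/sv1"

def pick_device() -> AwsDevice:
    """Prefer the QPU, but fall back to the managed SV1 simulator if it is offline."""
    qpu = AwsDevice(QPU_ARN)
    if qpu.status == "ONLINE":
        return qpu
    return AwsDevice(SIMULATOR_ARN)

# Toy two-qubit circuit; the real sweep submits parameterized VQE ansatz circuits.
bell = Circuit().h(0).cnot(0, 1)
task = pick_device().run(bell, shots=1000)
print(task.id, task.state())
```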
From an engineering standpoint, this system is compact and maintainable.
We use three CDK stacks: one for the frontend portal, one for the state machine pipeline, and one for observability.
The schema file itself is tiny, about 375 bytes. The React bundle is 1.2 KB after gzip compression. The state machine is 6 KB of JSON, with 23 states, five service integrations, and no Lambda functions.
We test locally using LocalStack, deploy with GitHub Actions, and run cost-effectively within the AWS free tier for most dev and test scenarios.
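As a sketch of how a three-stack CDK layout like this could be wired up, here is a minimal Python CDK app. The construct names, the placeholder state machine definition, and the omission of the observability stack are simplifications of my own; this is not the project's actual infrastructure code.

```python
# Minimal sketch, assuming AWS CDK v2 (Python). Names and wiring are illustrative;
# the observability stack is omitted for brevity.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3, aws_stepfunctions as sfn
from constructs import Construct

class PortalStack(cdk.Stack):
    """Holds the versioned, Object Lock-protected schema (and the React portal assets)."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        self.schema_bucket = s3.Bucket(
            self, "SchemaBucket",
            versioned=True,
            object_lock_enabled=True,  # immutable, versioned schema storage
        )

class PipelineStack(cdk.Stack):
    """Deploys the state machine compiled from the schema by the generator."""
    def __init__(self, scope: Construct, construct_id: str, definition_json: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        sfn.StateMachine(
            self, "ExperimentStateMachine",
            definition_body=sfn.DefinitionBody.from_string(definition_json),
        )

app = cdk.App()
PortalStack(app, "PortalStack")
PipelineStack(
    app, "PipelineStack",
    # Placeholder definition; the generator's ASL output would be passed in here.
    definition_json='{"StartAt": "Done", "States": {"Done": {"Type": "Succeed"}}}',
)
app.synth()
```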
This is just the beginning. We are expanding in three major directions.
First, we are enabling multi-schema chaining, so teams can compose multi-stage pipelines where the output from one experiment feeds into another.
Second, we are adding post-quantum encryption.
Results will be automatically encrypted with Kyber, aligning with NIST post-quantum standards.
Third, we are making the workflow cloud-portable. We will support backends such as Azure Quantum and Google Cirq, using the same schema and UI.
This will let researchers focus on science, not the infrastructure.
Thank you for spending time with me today.
Schema-driven experiment portals bring structure, automation, and observability to quantum research without slowing scientists down. You gain auditability, cost control, and repeatability while freeing your team from writing code.
If you're working on quantum workflows, I would love to connect and share code patterns or blueprints.
Thanks again, and enjoy the rest of Conf42 Quantum Computing 2025.