Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello, everyone.
My name is Mikkel Mørk Hegnhøj, and I'm here to introduce you to SpinKube.
SpinKube is an open source project which streamlines developing,
deploying, and operating WebAssembly workloads in Kubernetes.
It is a collaboration between Microsoft, Liquid Reply, SUSE, and Fermyon.
Fermyon is the company I represent, and I'm the head of product at Fermyon.
SpinKube is part of the Spin Framework family, together with Spin.
Both of these are open source projects which are in the process of being
donated to the Cloud Native Computing Foundation, where they've both
been accepted as Sandbox projects.
Spin is the development tool that you can use to build WebAssembly
microservices and web applications using a serverless application model.
And with SpinKube, you are able to run Spin applications really
efficiently inside of Kubernetes, because they're powered by WebAssembly.
So SpinKube has already seen adoption. For instance, ZEISS has been
using SpinKube with Microsoft's Azure Kubernetes Service to create
really fast, scalable applications that can run at high density,
while also making these applications easier to operate.
They've actually done proofs of concept and have been running
applications where they're able to bring down the compute cost
by 60 percent without any trade-offs in terms of performance.
So let's maybe unpack how all of this works and what can
lead to these amazing results.
So to give you an overview of how these things fit together: I mentioned that
Spin is the open source project that you would use for developing applications.
Now, when you have a Spin application created using Spin, you can run
it in Kubernetes using SpinKube, which is a free, open source project.
Fermyon also provides some hosted options with Fermyon Cloud,
which is fully managed.
And we also have a set of enterprise features around SpinKube
called Fermyon Platform for Kubernetes.
Part of that is that you can achieve even higher densities than
what we get with open source SpinKube.
But we're not going to get into that today.
We'll mainly focus on SpinKube as an open source project.
So SpinKube consists of four different pieces.
There is an operator that helps translate the application model,
or application framework, that Spin is into Kubernetes terms.
So basically, Spin applications end up running as pods, through deployments,
and having services, all these native Kubernetes resources that we all know.
But we have the Spin Operator sitting as a wrapper around all of this that
enables you to easily translate between the concepts you use in a Spin
application and the concepts inside of Kubernetes.
It's actually optional to use the Spin Operator; you can also create
your deployments yourself, referencing Spin applications as OCI images.
But as we start to unpack all of this, and you see some details in a live
demo, you'll get a better understanding of how these things fit together.
The runtime that enables us to run Spin applications really efficiently inside
of Kubernetes is a containerd shim called containerd-shim-spin.
So basically, at the containerd layer, the container runtime layer, we install
the shim on your node, so we are able to tell containerd that, in the case
that we want to run a Spin application, a WebAssembly application,
we are actually not going to use the regular container runtime.
We are going to use a different shim for running the WebAssembly.
Which means that you won't see your regular containers and cgroups on
your nodes as you run the WebAssembly, and part of this is how we can get
to higher density in some of those scenarios inside of Kubernetes.
There's also the Runtime Class Manager.
Really, the Runtime Class Manager is here to help ease the
configuration of containerd.
So in particular, if you have Kubernetes clusters where you rely on
autoscaling and adding extra nodes into your cluster, having a configured
Runtime Class Manager enables you to set up your nodes dynamically to
be able to run the Spin applications.
And the final piece of all of this is the SpinKube plugin.
Spin, the developer tool, is a CLI,
and the SpinKube plugin makes it really easy to transition
from local development, scaffolding YAML, to pushing your Spin
applications into OCI registries.
And we'll get to see that again once we get into the demo.
Let's try this out and take a look at how this all works.
I'll start by creating a small application, so let's transition over here.
Okay.
Let me just give you a few references as we go along, so you have those.
Spin Framework has its own GitHub organization.
We're still in the process of moving everything over from where these
things were previously, as part of the contribution into CNCF.
But you can see you have Spin, the development tool, as one repository, and
then you have some of these components that I talked about, like the Spin
Operator, containerd-shim-spin, the Runtime Class Manager, and then there are
various SDKs for Spin that are also part of this.
So the Spin Framework organization is a good place to go on GitHub.
SpinKube has its own documentation website, which is spinkube.dev.
So basically spinkube.dev contains all the documentation you would need.
There are a few blog posts in here as well for certain scenarios.
And we'll get back to some of these as we walk through building
an application with SpinKube.
And when you need to build an application with SpinKube, you
need to go to developer.fermyon.com for now, which is where
the Spin documentation is to be found.
And the reason why I'm saying "for now" is because, again, as part of the
transition and contribution into CNCF, these things may move around a little
bit, but definitely start at the Spin Framework GitHub organization and we'll
be able to guide you from there.
You'll also be able to find both Spin and SpinKube channels inside of the
CNCF Slack, where you can reach out and interact with maintainers and other
people that are part of the community.
Okay, so with all of that in place, let's actually try and build an
application that we can run inside of a Kubernetes cluster.
One of the few things that I have prepared here is that I already have a
cluster running.
This is a local k3d cluster that I've created.
If you want to run this in Microsoft AKS, there is a marketplace offering
where you can easily configure SpinKube.
And in the SpinKube documentation, there are a lot of guides in terms of
how to get this running on Kubernetes clusters such as MicroK8s, Azure,
Rancher Desktop, and just in general how you can install SpinKube with Helm.
And if we really quickly want to run through this: we rely on cert-manager
in order for SpinKube to run, and then we install the Runtime Class Manager,
previously known as the KWasm operator; that's the one that enables us
to configure containerd on the nodes.
And then we can go and get some CRDs in.
There are a few custom resources we need: the Spin Operator's runtime
class and the SpinApp executor.
Finally, there's a Helm chart to basically install the Spin Operator,
and I guess that should be it.
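Condensed, that install sequence looks roughly like this. Chart locations and
version pins change over time, so check spinkube.dev for the current commands;
the manifest file names below stand in for the versioned release URLs, and the
kwasm-operator step is only sketched in a comment:

```bash
# cert-manager, which the Spin Operator relies on for its webhook certificates
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace --set installCRDs=true

# Runtime Class Manager / kwasm-operator goes here: it installs
# containerd-shim-spin on the nodes (see spinkube.dev for the exact chart)

# Runtime class, Spin Operator CRDs, and the shim executor resource
kubectl apply -f spin-operator.runtime-class.yaml
kubectl apply -f spin-operator.crds.yaml
kubectl apply -f spin-operator.shim-executor.yaml

# Finally, the Spin Operator itself via its Helm chart
helm install spin-operator --namespace spin-operator --create-namespace \
  oci://ghcr.io/spinkube/charts/spin-operator
```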
Now we got to the point where we upgrade and uninstall, but let's not do that now.
We want to try this out first.
So hopefully you get the idea that even though there are a few things
you need to set up, all of these are regular things you would expect
when adding this kind of framework and functionality into a Kubernetes cluster.
So I have already done all of that, and if I can find my cluster again,
you can see, beyond cert-manager, that I have the Spin Operator
installed down here, and I also have Jaeger installed, so maybe we
can look at some traces as we go along.
Okay, so that's all that I have inside of my cluster for now.
So before we get into how this all works inside of Kubernetes, let's just
quickly create a Spin application.
The way that you create a Spin application is with the spin new command;
you use a template to create applications.
In this case, I'm using a template that is set up to take HTTP requests
and is written in TypeScript; the template is called http-ts.
We're going to name the application conf42.
And let's go into the directory and take a look at what we have.
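For reference, that step looks roughly like this, assuming the http-ts
template mentioned above (run spin templates list to see the exact template
names available in your Spin installation):

```bash
# Create a new Spin app from the TypeScript HTTP template and step into it
spin new -t http-ts conf42
cd conf42
```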
A few things got created for us.
The main thing is the spin.toml file, which describes our application.
I think the main thing to take note of here is that you have this
concept of HTTP paths, or triggers.
So there is a wildcard route, meaning all HTTP requests coming to this
application are handled by this one WebAssembly component here named conf42.
And conf42 is basically a Wasm file, a WebAssembly binary file.
I'm not going to spend time going more into the details of the
whole JavaScript-to-WebAssembly story and how these other things work.
You can find a wealth of information on Fermyon's YouTube channel,
where we have a lot of videos that help you understand even the
WebAssembly part and all of that.
So if you want to dive more into the development pieces and how this all
works, I highly recommend you go and check those out.
All that is interesting to know here is that Spin has this concept
of an application manifest, which is what I'm showing you right now.
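A freshly scaffolded manifest looks roughly like this; it follows the Spin
manifest version 2 format, and the exact source path and build command depend
on the template and Spin version, so treat the details as illustrative:

```toml
spin_manifest_version = 2

[application]
name = "conf42"
version = "0.1.0"

# The wildcard route: every HTTP request is handled by the conf42 component
[[trigger.http]]
route = "/..."
component = "conf42"

[component.conf42]
source = "target/conf42.wasm"   # the compiled WebAssembly binary
[component.conf42.build]
command = "npm run build"
```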
The other thing that we have is then, obviously, our TypeScript source.
And just for the sake of making this a bit easier to grok, let's do this:
I'm just going to run an npm install so that I get all the npm packages
that I need; in this case, I need the itty-router package.
To give you an idea of how these Spin applications work: we talk about them
as serverless, and basically what this means is that we have an event
listener, in this case an HTTP listener, that you implement here.
In the case of using this itty-router, what we do is set up various
functions that reply to HTTP requests based on the routes being called.
So in the case that we call the root route, we're just going to say
hello, universe.
Actually, let's change that to Conf42, and let's get the capitalization
of that correct.
So if we send an HTTP request to the root route, we'll just say hello,
Conf42, and if we call /hello/something, we will reply with hello to
whatever we passed in on that path.
All of this is just stuff that happens in itty-router, which is a JavaScript
library that you can use, but I hope it gives you a sense that Spin
applications are meant to be triggered by an HTTP event, do their thing,
and shut down again, and we will see how we can use that in terms of
scaling things and so on a little bit later.
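A minimal sketch of what that handler might look like with itty-router and a
fetch-style listener; the exact entry point and types differ between
spin-js-sdk versions, so this is illustrative rather than the literal demo code:

```typescript
import { AutoRouter } from 'itty-router';

// The routes discussed above: a root greeting and a parameterized /hello path
const router = AutoRouter();
router
  .get('/', () => new Response('hello, Conf42'))
  .get('/hello/:name', ({ name }) => `hello, ${name}`);

// The Spin JS SDK dispatches incoming HTTP requests to this fetch-style listener
// @ts-ignore -- FetchEvent comes from the Spin runtime environment
addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(router.fetch(event.request));
});
```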
Okay, so this is basically our application.
Let's try this out by using the spin build command.
So now what is happening is we're going to take the TypeScript and, through a
set of build tools, we will have a WebAssembly (Wasm) file coming out on the
other end, which gives us a lot of the benefits of portability.
They're small, they're secure because they run inside of sandboxes,
and they start up really fast.
And we can see that if we do spin up, which actually loads our
Spin application, we can now call the Spin application locally.
And you can see hello, Conf42, and if we do /hello/there, you can see
hello, there.
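Roughly, that local loop looks like this (port and output shown for
illustration; spin up serves on port 3000 by default):

```bash
spin build    # compile the TypeScript into a Wasm component
spin up       # serve the app locally, by default on http://localhost:3000

# In another terminal:
curl localhost:3000              # -> hello, Conf42
curl localhost:3000/hello/there  # -> hello, there
```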
Okay.
So that's pretty straightforward.
We actually have a small Spin application now.
Powered by WebAssembly, it can reply on a few different paths.
Okay, we now want to get this into Kubernetes.
The workflow is similar to what we would usually do with containers, because
really one of the core objectives in how we wanted to design SpinKube
was to make sure that WebAssembly became, you could say, a first-class
citizen inside of Kubernetes.
Which means that when we need to distribute this application, for instance,
we rely on OCI to do that.
So if you have an OCI registry already today, that will be
compatible with moving Spin applications into Kubernetes clusters.
And you'll see that once we get a little bit further down, in terms of
how the CRDs and all of this work.
Okay, so now we want to use the SpinKube plugin, so I just want to
make sure again that I build the latest version of my Spin application.
I'm just doing that because I've sometimes forgotten to do it.
Okay, then we're going to use a command called spin registry push.
So basically now I can push the Spin application into a container registry.
And I am going to use, let's do this, ttl.sh.
This is an ephemeral, publicly available container registry, or OCI
registry, which is really nice when you want to do demos like this.
So what is happening now is that Spin is going to take my WebAssembly
application, the manifest that was created, and any configuration that I may
have, package those as three individual layers into an OCI image, and push it
to this registry.
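The push step is a single command; the registry path and tag here are
illustrative:

```bash
# Push the app (wasm + manifest + config) as an OCI artifact to the ephemeral ttl.sh registry
spin registry push ttl.sh/conf42:v1
```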
What's really important here is that this is the only thing we're pushing.
We're not creating whole layers of filesystems.
Dependencies are not brought in, because they're already compiled
into the WebAssembly binary.
So if we go ahead and actually inspect the image that we have inside of
our registry, we should be able to see that the only thing in here,
I'm just going to move up so you can see, is a bunch of annotations,
oh, I even got my name in there, which is nice, and
you can see there's a small config.json, which is 209 bytes, then we have our
WebAssembly, which in this case is 12 megabytes, and we can strip and compress
this even further, but it's only a 12 megabyte binary that we're moving, and
then the actual Spin application manifest as well, which is 534 bytes.
That was the toml file that I showed you.
That's really all that there is there.
There isn't a bunch of layers with dependencies and other things
that I need to bring along.
And this is all that we're going to move around.
And this is one of those things where WebAssembly is super interesting in
terms of having smaller images, and having fixed binaries that we actually move.
Okay, now the next thing I'm going to do is use a command
to easily create my deployment.
So in this case, I'm going to say spin kube scaffold, so basically scaffold
the YAML that I need, use the container image reference that I just
pushed, and put this in an app.yaml file.
If we look at this file, you can see we have Kubernetes YAML where
we are using the spinkube.dev API.
So this is the custom resource definition that we created earlier, which is
part of the whole SpinKube project, and it gives us a SpinApp resource in here.
This very much looks like how you would create a container or a deployment,
in terms of having replicas in here, and as I mentioned earlier on, you can
consider this SpinApp resource to be a way to easily translate what we saw in
the spin.toml file; we'll see that a little bit later when we add variables
in here.
There's an easy transition between these two, but they all end
up just being deployments and pods.
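Running something like `spin kube scaffold --from ttl.sh/conf42:v1 --replicas 2 > app.yaml`
produces a resource roughly like the one below; the apiVersion and default
executor name vary between SpinKube releases, so treat the exact fields as
illustrative:

```yaml
apiVersion: core.spinkube.dev/v1alpha1   # older releases used core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: conf42
spec:
  image: "ttl.sh/conf42:v1"
  executor: containerd-shim-spin
  replicas: 2
```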
Okay, so now that I have this, let's move over and apply this app.yaml
file to the Kubernetes cluster.
And what we'll see is, first of all, we can go and take a look
at the SpinApps, where we now have this conf42 SpinApp created.
So this is the SpinApp resource I'm looking for.
We want to have two replicas, and we can see two of those already now.
We can see the description in here, basically what we expected.
What you can also see is that there's actually a deployment created.
So again, we're creating a real deployment.
We could have chosen to just create the deployment ourselves if we wanted to,
but that sort of takes the Spin Operator out of the picture,
and there are various things that we use the Spin Operator for.
For instance, here I'm able to configure a cluster-wide OTel endpoint,
which I point at the Jaeger instance I have in here.
So all my Spin apps will automatically adopt that OTel configuration.
So there are some benefits you can get from all of this.
So yes, I highly recommend using the operator and using the SpinApp
resource for doing this.
We also have a service that's set up for us.
So what we can try and do in here is go ahead and
port forward to our service.
Let's do 8080, because that's what I normally do,
and if I don't, something will go wrong.
We've now created this port forward, which means we can go back here, and we
can then curl to 8080, and you can see hello, Conf42, and we can do
/hello/there, and I just need to get my spelling right. There you go.
And now we have hello, there, and hello, here, and so on and so forth.
We can see this all works just as it works locally.
So that's it.
We've actually taken a Spin application that we created from
scratch, using TypeScript, saw that we can run it locally easily,
and compiled it into WebAssembly.
We created an OCI image that we pushed, created the SpinApp
resource inside Kubernetes, and there it is, running.
Let me go back and check one more thing.
Oh, we can see there are actually two pods behind this, right?
You can see the two pods being created in here, and this is also
where we can go ahead and find our logs.
There's an OpenTelemetry error; let's just not think about that, but you can
actually see that we did hit the Spin application in one of these pods.
One thing that I wanted to show you is that I have Jaeger set up in my
cluster as well.
And if I go ahead and just look at something that Spin is doing for me,
we can see that, not that long ago, we were actually hitting the Spin
application, and you can see we have the whole OTel configuration in here.
There was an HTTP GET request for the root, and what happened is that
SpinKube then executed this Wasm component called conf42.
So just to give you an idea again of how much work has actually been
put into making it really easy to get a lot of the stuff that, you could say,
is just table stakes today.
You want to have OTel, you want to have all of these things, and it's really
easy to get this set up and running with SpinKube and your Spin application.
Okay, moving ahead from that.
Actually, let's just do a quick recap of what happened here.
So we did spin new to create a new application, spin build to create a
WebAssembly binary, and spin registry push to push it to an OCI registry.
You can use any OCI registry you want.
If you have a private registry, which I hope you have, the command is
spin registry login, and then you would basically be able to log into
that before you push.
And all of these can easily be run in a CI setup as well.
There are actually GitHub Actions that exist out there to do
all of these things with Spin.
And then we did spin kube scaffold as a nice little tool to scaffold our
YAML, and applied that to our cluster.
What happened when we applied it to the cluster?
The resource was put into the API server as a SpinApp custom resource.
The operator picked that up, created deployments and services, and
now we have that Spin app running in a pod.
For that pod, the Runtime Class Manager has made sure the containerd shim is
there, so we are actually executing using containerd-shim-spin and not
using the regular container runtime.
And yes, that's it.
I was just thinking about how we can actually show that; I don't think there's
an easy way for me to show it, but that is how it works.
I think what we can see, actually, if we look at the
actual pod, let me check, is the runtime class.
We use this wasmtime-spin-v2 runtime class.
And basically that runtime class is what informs containerd which runtime
we want to use.
Wasmtime, the WebAssembly runtime, is wrapped inside of Spin, and we use
the v2 version of the spin shim to actually run this.
It's not being run in a normal container.
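In the Deployment's pod spec that shows up roughly like the fragment below;
the runtime class name depends on how the shim was registered on the nodes,
so this is illustrative:

```yaml
# Fragment of the pod spec created by the Spin Operator
spec:
  runtimeClassName: wasmtime-spin-v2   # tells containerd to use containerd-shim-spin
  containers:
    - name: conf42
      image: ttl.sh/conf42:v1
```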
Okay, let me go back and showcase another scenario we have.
We have this application now, and let's see how something as simple
as providing a variable to our application works
with a regular Kubernetes workflow.
What I would do in a Spin application to begin with is add a variable
to my spin.toml file.
So basically what I'm going to do in here is say that I have
a variable for my Spin application called myvar, and there's a default
value for it, which is conf42.
Then the other thing I can do inside of Spin is say, as part of my component
down here, the component isn't called hello-conf42, it's just conf42,
that there's a variable I want to pass on to my component, and the value
I want to use for it comes from the myvar that is defined up here.
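In the manifest, that looks roughly like this (Spin manifest v2 variable
syntax; names match the demo):

```toml
[variables]
myvar = { default = "conf42" }

[component.conf42.variables]
# Template expression: the component variable pulls its value from the app-level one
myvar = "{{ myvar }}"
```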
So basically what this enables is that inside of Spin, I can now go to
my source file and do two things.
I can import, if I get the import correct, a way to get variables using
the Spin SDK.
And then the other thing that I've done in here is I've just
prepared a small piece of code.
Let me do this, and I think I will need to do an npm install to get the
package in.
And let me just check what's going on, and get rid of that one.
Okay, so what I did in here in my code is I imported the
variables from the Spin SDK, and now I'm setting up another path in the router
saying that if someone calls me on /var, I will basically print "my var is",
and then I'm going to get the variable myvar, which is the one
that we just added into the toml file.
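The added route might look roughly like this; the exact import path and
function name in the Spin JS SDK vary between versions, so treat this as a
sketch:

```typescript
import { Variables } from '@fermyon/spin-sdk';

// New route: report the value of the myvar variable defined in spin.toml
router.get('/var', () => `my var is: ${Variables.get('myvar')}`);
```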
And now that we have this created, let's go back and do a spin build
to rebuild our application, and then we can do a spin up again
to run it.
And if we curl port 3000 and then /var, you can see we get the default value,
which was conf42.
I can also set this variable by using a SPIN_VARIABLE environment variable.
If I set that environment variable when I run spin up, I've now
provided the variable, and you can see my var is now something else.
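Locally, that looks roughly like this; in recent Spin releases the
environment-variable provider uses the SPIN_VARIABLE_ prefix (older releases
used SPIN_CONFIG_), so adjust to your version:

```bash
spin build
spin up                                   # serves on http://localhost:3000 by default

# In another terminal:
curl localhost:3000/var                   # -> my var is: conf42 (the default)

# Override the variable via the environment-variable provider, then restart spin up
SPIN_VARIABLE_MYVAR="something else" spin up
curl localhost:3000/var                   # -> my var is: something else
```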
That was the local story, to give you an idea of how these things work
together.
Now, what I want to do is push this into my cluster, so I'm just going
to make sure I have the latest build.
Once I have that, we can do a registry push.
Let's do version 2.
So we're going to push the newly compiled WebAssembly and the new spin.toml
that still has that default value in it, conf42.
But what we will do is override that, and actually provide the
configuration through the app.yaml that we have.
So we don't need to scaffold this again.
We can basically just go in here and add variables as easily as this.
So let's say we have variables, and we have a variable called myvar,
and I just want to make sure we get the indentation right.
Let's say the value is kubernetes, because we're in a Kubernetes session.
Actually, let's say the value is spinkube; that's nicer.
And you need to remember to update the image version.
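The updated SpinApp then looks roughly like this (illustrative, same caveats
about apiVersion and executor as before):

```yaml
apiVersion: core.spinkube.dev/v1alpha1
kind: SpinApp
metadata:
  name: conf42
spec:
  image: "ttl.sh/conf42:v2"    # bumped to the newly pushed version
  executor: containerd-shim-spin
  replicas: 2
  variables:
    - name: myvar
      value: spinkube          # overrides the default from spin.toml
```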
Okay, so basically we have now updated the SpinApp resource
definition to pull in the new version of the image, and we provide variables.
So if we apply this, we can go back and see that we have new containers
creating, and we should soon see the previous versions being taken down.
And, at least I think we have, this was the new version, let's check,
yes, we have one which is version two of our image and we should have two,
oh, that was refreshing, yes, we now have two of the new versions running,
so we got those quickly deployed.
Let's go and check if our port forward is still set up.
It's not, so let's forward to the service once again, and we can go over
here and curl on 8080/var, and we would expect to see "my var is: spinkube".
Okay.
So that was one scenario where, again, I wanted to show how
variables can be added, among other things.
You can obviously have the variable pulled from a secret.
If we take a quick look at the API documentation for the SpinApp,
you can, actually, instead of doing that, let's
just look at an example, because I think that's more interesting
and easier to see: assigning variables.
In this scenario, this is the Spin configuration for
the example being walked through here, similar to what we did in the
spin.toml file.
But basically, what you can see down here is how you can use both config
maps and secrets to provide this, and then this is what the SpinApp
would look like.
So you have, as I did, an inline variable, but we could also
have the variable come from a config map reference or a secret reference.
And all of this looks very familiar to how you would do this normally in
deployments, when working with regular containers inside of the
Kubernetes cluster.
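A sketch of that, following the pattern in the SpinKube "assigning variables"
documentation; the ConfigMap and Secret names here are made up for
illustration:

```yaml
spec:
  image: "ttl.sh/conf42:v2"
  variables:
    - name: myvar
      value: spinkube                # inline value, as in the demo
    - name: log_level
      valueFrom:
        configMapKeyRef:
          name: spinapp-cfg          # hypothetical ConfigMap
          key: logLevel
    - name: db_password
      valueFrom:
        secretKeyRef:
          name: spinapp-secret       # hypothetical Secret
          key: password
```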
Okay, let's actually check on this one, because we did see, you can see
this was the trace from where we were actually doing the GET, so you can
see Spin is fully instrumented, and then we can see we're actually calling
another method inside of the Spin SDK to get the variable,
which was pretty quick in terms of how that variable is fetched.
And if we want to dive even further into this, because why not?
What we can see is that the way all of this ends up
being set up is that the actual pod, let me just check,
we're looking at the pod right now, let's see the YAML,
the actual pod has that Spin variable myvar set up as an environment
variable, which is the same way that I got this variable
injected when I ran things locally.
So again, this gives you an idea of how the operator works and what value
it adds inside of the whole stack here.
Okay, let's try something a little bit more fancy, because part of
the benefit we get from using WebAssembly rather than containers is how
quickly they can scale: the small size, how quickly we can pull them in,
but also how fast they actually start up.
So a good scenario would be something like combining this
with an autoscaler inside of Kubernetes.
So what I will be doing now, let me just check, yes, we
already have KEDA running in our cluster.
What I will do is based on the tutorial that you're able to find
over here on the SpinKube site, which is called Scaling SpinApps with
Kubernetes Event-Driven Autoscaling, so basically using KEDA.
This is an example that walks you through how there is an application
that creates a lot of CPU load.
Obviously, using KEDA, you can scale based on CPU load, and you can scale
based on observing queues or other things, I think even object stores;
there's a whole bunch of stuff that KEDA can do.
But again, because everything in SpinKube is so integrated into how
Kubernetes works, most of the Kubernetes ecosystem and most of the projects
out there will actually just work out of the box with SpinKube.
So what I've done is I've added KEDA into my cluster right
now, and all of that is something you can follow along with in this article.
So what I'm going to do now is apply this KEDA ScaledObject.
Basically, this is KEDA's way of saying: this is the application.
Sorry, I'm going to add the application to begin with, and then I'm going to
create a KEDA ScaledObject, which is the one that is monitoring
the Spin app, scaling it between one and twenty replicas based on CPU,
at 50 percent utilization.
So the few things that we need to do here to make this
work is to go to our terminal.
And first thing we're going to do is we're going to apply the.
The sample application and the next thing I am going to do is I am going to.
By the scaled object, so if we quickly go over here, we can
see we now have a Keda spin app.
We actually had a minimum of one.
I believe we could change this to zero; we'll try that.
We have autoscaling set to true, and ready replicas is one.
I think we would have to... whoops, we got stuck.
Let's get out of here.
This one says minimum one.
Anyway, let's not do this now.
Maybe we can play around with it a little bit later.
I just got myself all excited about whether we could actually do that or not.
Anyway, let's go back and monitor the Spin app.
We have that here.
I think if I do Ctrl-R, that will refresh.
So we can see we have one of the keda-spinapps running already,
translated into one pod.
So what I'm going to do now is just create some
requests to the application.
Before I can do that, I need to set up a port forward.
So let's set up a port forward to the service; let's do 8081,
go back, and take a look at the Spin app.
We have it, and then we can go over here, and now we have load, and we should
fairly quickly start seeing that more pods of the KEDA application need to be
spun up.
Let's go and take a look at the pods over here; we can see CPU is at 429.
Oh, there you go.
Okay.
Now things are starting to happen.
We got a bunch more of the KEDA app pods,
and if we come back, we can see we're now at 4,
and we should gradually see this changing.
Let's just go back and stop some of these HTTP requests.
We got nine responses back.
This is not a performance test; this is just a case of showing how these
things work.
And you can see that this all works.
We got extra pods; we got more of these Spin apps running inside of our
Kubernetes cluster.
I don't quite recall whether the load will actually ever stop
again or will just keep increasing,
but at some point we should see the number cool down, and we'll then
have fewer instances of the WebAssembly application.
Those were a few examples of how you can get started, looking at using
variables and other things, and also how this works really well with other
projects like OpenTelemetry stacks, Jaeger in this case being what we saw,
and even the Kubernetes Event-Driven Autoscaler.
We might actually see, yes, this is the CPU load, and we can see a
bunch of those requests coming in as well.
I don't know whether those will actually ever complete or not.
A few resources to go to: spinkube.dev, where you can learn a lot about
SpinKube.
developer.fermyon.com is a good place to get the Spin documentation,
and the Spin Framework organization on GitHub is the place you go
to take a look at all of this.
Okay, so hopefully along the way you've got the idea that
SpinKube just easily flows into your regular way of working with
Kubernetes and the workflows you have around CI/CD and all the other
things you do around Kubernetes.
There's obviously the question of why we need Spin WebAssembly
applications in Kubernetes.
And I think there are four very particular things that you want to
consider, and they are why I think this is super relevant, and why I think
this is a way that we can bring even more value into Kubernetes.
First of all, as I showed you, the Spin applications are really small.
So compared to a lot of containers, where you potentially bring in a lot
of dependencies, here you have a single binary that you need to bring,
and you potentially have some configuration and maybe some data that
you bring as well.
But a hello world Spin application that's not written in JavaScript,
but written in Rust, could be as small as a few hundred kilobytes;
that's the OCI artifact that we need to juggle around.
So this gives us a lot of great opportunities: if we need
to run these things and we want to use Kubernetes as a workload manager or
scheduler, even in environments that have little compute capacity and power,
this is something that can potentially make that work with Kubernetes, so
you don't have to rely on containers.
Another part of this is that the WebAssembly components start
in less than a millisecond.
We haven't really talked about the execution model in the world of Spin and
WebAssembly, so maybe we want to go back and look at this file just to give
you an idea of what I mean.
Let's look at this.
So you can see the way that a Spin application is defined in here:
we have this idea, or this concept, of a component, right?
We have an HTTP trigger that maps to a component,
and the component is the WebAssembly file.
So I can easily create applications where I have multiple of these components,
where each WebAssembly file represents a different path on the route, or
a different set of functionality, inside this application.
Whenever a request hits our application, what happens inside of the runtime is
that the WebAssembly binary is loaded, the request is handed over in memory
from the host to the guest, the guest in this case being your application,
the guest handles the request, and then we unload the WebAssembly module again.
Which means that if you had four or five different WebAssembly modules in
here, for each request only the modules, or components, that need to be
loaded are the ones that are going to be loaded and used.
Which also means that as soon as the work is done, the WebAssembly is
unloaded.
So it's only the host runtime that takes up memory on your Kubernetes node.
And this is where the density that we can get from running WebAssembly
actually comes from: there's a very low minimum memory footprint involved.
And there are even ways, and this is where some of the enterprise
add-ons that we built at Fermyon come into play, where we can get
fully rid of the memory footprint,
so basically truly scale to zero, as part of how these run.
But that's another very important thing, and a lot of value that I think
Spin WebAssembly applications will bring into Kubernetes through SpinKube.
And then there's the third thing, which is the sandbox.
This is the security part of it: the WebAssembly component is a sandbox
and is denied access to resources on the system, which means that it
does not have access to the file system,
and it does not have access to memory outside of its own linear memory,
which is provided by the host when it's loaded.
Also, you have to explicitly grant access to HTTP endpoints or other network
endpoints that you want to access.
So there's a way inside the spin.toml file where you provide access to
particular endpoints, whether that's a SQL database, a Postgres database,
a Redis endpoint, a Valkey endpoint, or an HTTP endpoint; you have to do
that explicitly.
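In the manifest, that looks roughly like this (Spin manifest v2 syntax; the
hosts are made-up examples):

```toml
[component.conf42]
source = "target/conf42.wasm"
# Outbound network access must be granted explicitly, per endpoint
allowed_outbound_hosts = [
  "https://api.example.com",
  "postgres://db.example.internal:5432",
  "redis://cache.example.internal:6379",
]
```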
So this is a nice operational, runtime decision that you can make
about which endpoints are available to the actual application.
And finally, you have portability across processor architectures and
operating systems, which means you can swap out the underlying node processor
architecture without having to produce separate pipelines and deployment
artifacts, meaning that the same OCI image could run across x64 and ARM64
devices.
So if you're able to use some of those in your cloud environment
or other environments where you run Kubernetes, you don't need to
deal with multi-arch images and multiple build pipelines.
The same WebAssembly binary, from one compilation, would run across all of
these processors and architectures.
So again, it dramatically simplifies things. I guess the scenario would be if
you have a workload where you can utilize things like spot instances, as
they're called in the cloud; you can subscribe to spot instances whether
they're provided as ARM instances or x64 instances.
There are a lot of opportunities there to have some cost savings associated
as well.
So with that being said, I hope this gave you a good idea of what
SpinKube is, how it works, its relationship to Spin, and some of the
benefits you'll get from using it.
So I highly recommend you go and check out spinkube.dev.
The Spin documentation is on developer.fermyon.com for now.
And thank you so much for listening.
Have a great day.