Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
Wouldn't it be nice if we could shift left on cloud testing and shorten
the development loop from hours to seconds, without having to wait all
the way to the end of the pipeline?
During this session, we're going to be discussing cloud testing without
the wait, and how to transform the Kubernetes developer experience.
A little bit about me before we dive in.
I am Anita Ihuman.
I'm a developer advocate at MetalBear, where we're building a cool tool
called mirrord that aims to make developers' lives a lot easier.
I like to think of myself as an open source fangirl, and that is because
a lot of my work revolves around open source tools and open source projects.
I am a member of the CHAOSS board, where we focus on improving open
source community health and analytics.
And when I'm not doing open source work or advocating for development tools,
you'll find me organizing local community meetups at KCD Nigeria and CNCF Abuja.
Now that we have that out of the way, let's dive in.
During this session, we're going to look at, first, the traditional
development workflow, then we'll move on to talk about what shifting
left on cloud testing is all about.
And obviously we're going to talk about the benefits of
shifting left on cloud testing.
And if the gods of demos allow us, we'll look briefly at a demo
of how mirrord works in action.
So let's get into it.
First off, let's think about the traditional development workflow for a second.
As developers, ideally, you'd want to just write your code, right?
And then once it looks good, you create a pull request.
It runs through the automated tests and the process just goes on smoothly.
You quickly iterate whenever a fix comes up, and then you deploy it, right?
However, that's not often the case, because with cloud-based applications,
instead of just writing our code and making sure that it goes through the
pipeline without any hassle, we find ourselves wrestling with adjacent
issues: you write the code, but then you also need to build it, ensure
that it functions correctly in containers, make sure that it runs in the
cluster and plays nicely with other containerized apps, and then it's
finally tested and deployed to production without breaking anything.
Now we see that cloud testing enters the picture at the far end
of the pipeline, right?
And you might think that's not so much of a big deal.
However, the issue with this particular workflow is that since we're dealing
with cloud-based applications, which are often complex in nature, we can
have up to hundreds or thousands of microservices.
And these microservices require a lot of resources, and on top of that,
developers are subjected to the difficulty of debugging these applications.
And in the case where testing, especially against production-like states,
is at the far end, the issues that only reproduce themselves in the cloud
or in production are not noticed at the initial phase, so the developers
will assume there are actually no complications and the application will
go through all of these phases without anyone having to deal with that.
And at the end of the pipeline, only then do you realize that there's an
error somewhere, or there's something broken in the production state, and
it needs to come back.
And then you need to go through this entire workflow that we just looked at
all over again.
And that can really be very difficult to fix, because it could take hours,
even days, of the developer's time, depending on how complex these
applications are.
And I know that so many people are probably saying, we have mocks, our
team has mocks that we offer you, so that's not a big problem.
However, the issue with these mocks is that they only give you some
optimal conditions that you're testing against, because these mocks give
you an implicit assumption of what the production state would look like,
and you still have to deal with configuring and managing the mocks.
So first off, you're not getting a hundred percent assurance that your
application would function this way in production, but you also have to
manage the workload of configuring and maintaining these particular mocks,
and all of that is on the developers, which adds additional complexity
for the developers.
And then, let's not even talk about how this plays a huge role in the
productivity of the developers at the end of the day, because now they
have to deal with the endless build-push-test cycle, which affects the
overall development loop and affects their performance and productivity
in the long run.
And obviously we can't leave out the fact that this also increases cloud
costs, because so many organizations today have opted for remote
development environments, where each developer is assigned an environment
that they have to work with.
And if you're working with a team where you have hundreds of developers,
each individual developer has their own assigned environment that they're
making all of their changes in.
And that alone takes up a whole lot in terms of paying for the cloud
resources that you're actually subscribing to.
You also have to think about the fact that by the time all of this is
done, the application is taking longer to get to market.
Since there's a lot of rework going on in the process, back and forth,
the application takes a longer time to actually hit production than you
would anticipate it to.
No wonder research by the National Institute of Standards and Technology
states that the longer it takes to test, identify, and resolve issues, the
more impact it actually has on the cost and also the security of the
application.
And what this actually tells us is that it's a lot easier to detect issues
when the developers are still writing the code, or when the developers
still have much control over the code base, compared to when the code is
at the far end of the pipeline.
If developers have access to that production-like environment early, it's
easier for them to tackle these issues without having to go through all of
the chaos they would otherwise deal with, because everything is tackled at
the initial stage.
And then, once the software is in the testing phase, reproducing certain
issues in the local environment tends to add additional complexity,
because now you have to think of how to reproduce that particular state
locally.
And if the team is working with local development environments, their
machines may not be able to handle all of the data, the environment
variables, and the resources that the application in production would
need, which also adds another level of complexity to the whole process.
Additionally, while it's easy to catch certain issues or certain bugs
during development, because you're not testing against the actual
production state, it's still difficult to identify these issues.
And, like I said earlier, the mocks that a lot of organizations are using
today cannot actually mimic, a hundred percent, the databases, environment
variables, and resources that this application would need in production,
still leaving us with so many questions at the end of the day.
Now, this leads to why we actually need to shift left on cloud testing.
First of all, the concept of shift-left testing on its own is the practice
of pushing testing to the left, or pushing testing overall to the early
stages of the pipeline.
And the idea of this approach is to identify and resolve bugs in the early
stages of the development process, before they get all the way to
production.
However, in terms of cloud-based applications, or in terms of this
particular presentation, we're focusing on shifting cloud testing left.
And what this means is moving cloud-based testing to the early stage of
development, allowing developers to validate their code against real cloud
environments without waiting all the way for staging or for deployment at
the end of the day.
And the goal is to improve software quality, reduce the time that
developers spend resolving issues, and, over time, improve the entire
software development workflow and the developer experience in the long run.
And the key principles behind this are: first, you want to be able to
identify issues very early in development.
You want to be able to catch the bugs at the initial phase, before waiting
until the end, which saves a lot of time for the developers.
You also want to ensure a continuous feedback loop, because instead of
having to go over the development loop again and again, which slows down
iteration, you want to get feedback as fast as possible so that you can
move on to the next phase of the development workflow.
You also want to produce high-quality applications that have fewer issues
when they get to testing, and you also want to be able to reduce the
delays in application delivery.
Instead of spending hours rewriting and fixing the code over and over
again, you want to just test it in the local setup, where the developers
are more familiar with the tools, get the issues out of the way as fast as
possible, and then finally push for deployment and so on.
And so the benefits of shifting cloud testing left, not only for the
developers but for the organization as a whole, are, first, that you're
able to detect bugs at an early stage, which makes it very easy to
identify the issues where need be: not just issues that would appear
locally, but the issues that would appear in production, identified as
early as possible in development, when developers are still writing the
code.
You also increase the efficiency of bug detection, because testing in an
environment that closely matches production leads to a smoother
development workflow.
Another benefit is that you save a lot of cost, because instead of paying
so much for cloud resources for your number of developers, you could
actually work with a shared development environment that every single
developer has access to, without paying for individual cloud environments.
You also get faster development, because you can quickly iterate within
tight feedback loops without waiting for the build-push-test cycle that
developers would normally deal with in traditional development workflows.
And you're also thinking about the developer experience in the long run,
because this gives developers the confidence that they can actually write
code and run their tests more frequently, without having to go through so
much chaos and all of the hassle that the traditional development
workflows introduced.
Which is why a development tool like mirrord becomes very much needed.
And if you're wondering what mirrord actually is, it's simple:
mirrord is a development tool that connects your local process to a
remote cluster, so you can actually test and debug as if you're running
your code in production, without actually deploying it first.
Now with mirrord, developers can actually work locally, but against a
remote shared development environment.
And every developer has access to the cluster's services as if they were
running them locally.
They can actually reroute the cluster's traffic to local services, and all
of this while keeping the convenience of the local development environment
that they're very much familiar with, as well as the debugging tools that
they're also very familiar with.
So it becomes a lot easier to test your code against a cloud environment,
let's say staging, for instance, without actually having to go through the
hassle of dockerization and continuous integration and deployment, and
without disrupting the environment by deploying untested code.
And the good thing is mirrord works with the IDEs that you're very
familiar with, VS Code or IntelliJ, and it's also available in the CLI.
With just a simple command, you have mirrord running in a matter of seconds.
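Just to illustrate the CLI route, a session might look roughly like the sketch below; the install command and target name are examples rather than part of this demo, and the exact flags can vary between mirrord versions, so double-check with mirrord's docs or `mirrord exec --help`.

```sh
# Install the mirrord CLI (one common route; see mirrord.dev for other options)
brew install metalbear-co/mirrord/mirrord

# Run your local process against the cluster, targeting a pod or deployment.
# "deployment/weather-app" and "python api.py" are placeholders for this demo.
mirrord exec --target deployment/weather-app -- python api.py
```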
And to break it down, when you're running a development workflow with
mirrord, instead of going back and forth like we looked at earlier, you
write your code and confirm that it actually works well, not just locally,
and because you actually tested it against cloud-based conditions at the
development phase, you can go on and create a pull request.
And obviously, because this has been tested and you've confirmed that the
application is working well, it should work well even as it goes through
the whole pipeline.
You open your pull request, every other test that is expected passes
through without any struggles, the application is deployed to staging, the
end-to-end testing also goes smoothly, and finally you can deploy your
application to production without any challenges.
So what happens when you run mirrord is that mirrord runs in two places:
in the memory of your local process, which is the mirrord layer, and as a
pod in your cloud environment, which is the mirrord agent.
So when you initiate mirrord, whether in your IDE or in the CLI, it starts
a pod called the mirrord agent, which operates within the same network
namespace as the specific remote pod that you're targeting.
And this agent has access to everything that your application would need
in the cloud, like the network, the file system, the environment
variables, and all of that.
And so your local machine doesn't have to do all of the heavy lifting of
managing all of this, like it would in the traditional workflows that we
looked at.
Meanwhile, on your system, the mirrord layer integrates with your local
development environment, intercepting and redirecting low-level functions
to the mirrord agent.
And what this does is allow you to interact with the resources, like the
files, databases, and APIs, just like your application would if it was
actually running in the cloud.
And the end result of this is that the remote state is relayed back to the
developer's machine.
So instead of having to do all of the heavy lifting of managing this
locally, you're able to see the state of the application in real time on
your local machine, while mirrord does all of the heavy lifting.
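To make that relaying a bit more concrete, here's a small, hypothetical illustration; the target and the environment variable name are made up for this example, and the exact CLI flags may differ by version.

```sh
# WEATHER_API_URL is not defined on the local machine, only on the targeted pod,
# yet the local process can read it, because mirrord relays the remote
# environment variables (target and variable names are placeholders).
mirrord exec --target deployment/weather-app -- printenv WEATHER_API_URL
```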
And this happens in a matter of seconds, without you even realizing it.
And at the end of the day, developers feel like they're actually running
their applications as if they were in the cloud.
Enough of the rambling, let us actually look at how mirrord does all of
this magic in the background.
So for this demo, I have a very small microservice application, or
project, that I would like us to look into.
Assume I'm working in a team that has multiple microservices and I was
assigned this project, the weather app microservice, right, to debug it,
and maybe this header was no longer suitable and I need to update that.
And by the way, this weather app project basically just calls an API and
returns the weather update for different cities, like you can see here.
And so, assuming I want to change that: I don't like the color of this
background, because it doesn't suit the brand anymore, and I don't like
the text on the header, because it also doesn't align anymore, and I want
to make changes to this, right?
I'm going to, first of all, go to the code base.
By the way, this application is running on an AKS cluster, right?
And so what I'm going to do is update this here to say KCD Accra demo
project.
And, assuming I also want to change that color, I'd change it to maybe
purple, so it gives me a purple background at the end of the day.
And if you're wondering how to actually get started with mirrord, in case
I didn't point that out, mirrord runs as a CLI, but it also has an
extension, and this extension is available on VS Code, right here, and
also on IntelliJ, so it depends on the one you actually want to go with.
I'm going to just use the extension to show you how simple it can be.
I already have my extension installed, but once you install the extension,
you get an icon at the bottom here that says mirrord.
It's disabled right now, but if I want to use mirrord, I have to enable it
by toggling on the button, and then you can select an active configuration.
The configuration file is just a way for you to customize what you want
mirrord to do and where you want mirrord to have access.
Like, for instance, I am targeting the weather app deployment pod, but you
can choose the particular service that you want to target, right, or the
particular pod you are targeting.
And then you can define what you want mirrord to do.
By default, mirrord mirrors the traffic and mirrors the state of the
cluster, but you can actually tell it to steal incoming traffic, and
handle outgoing traffic, instead of just mirroring it.
And then it's also able to get the environment variables, read the file
system, and so many other things.
But basically this is a basic mirrord configuration file; there's so much
more you can do with these mirrord configuration files.
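To give a rough idea of what that file can look like, here's a minimal sketch of a mirrord configuration along those lines; the target path, namespace, and file location are placeholders for this demo, and the exact keys can differ between mirrord versions, so treat this as an illustration rather than the canonical schema.

```sh
# The VS Code extension typically reads .mirrord/mirrord.json from the project root
# (path and values below are a sketch; check the mirrord docs for your version).
mkdir -p .mirrord
cat > .mirrord/mirrord.json <<'EOF'
{
  "target": { "path": "deployment/weather-app", "namespace": "default" },
  "feature": {
    "network": { "incoming": "mirror", "outgoing": true },
    "env": true,
    "fs": "read"
  }
}
EOF
```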
So now that I have my configuration set, mirrord is enabled, and the
changes I wanted to make have also been implemented.
I look at my api.py file, and this looks good.
Next thing, I'm going to go to the debug panel and hit start debugging.
And once you do that, mirrord is going to take a matter of seconds to
initiate: it's going to get the binaries that you need and every other
thing set up.
And just like I said, mirrord has already started getting itself ready to
do its job.
And you can see that it's already started and the debugging process is
active.
And yes, once you see that it's turned orange at the bottom here, it's a
sign that it's already doing its job.
And like I mentioned earlier, what happens when you start mirrord is that
it injects the mirrord agent into the pod that you're targeting.
Let's go and see if the mirrord agent exists right now.
Okay, so we have the mirrord agent right here, already running; it started
56 seconds ago.
And that shows us that if we want to check the changes to this particular
project, they're going to be reflected.
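If you're following along in a terminal rather than in the IDE, that check is just listing pods; the pod names below are illustrative, since mirrord generates its own agent pod name.

```sh
# List pods and look for the short-lived mirrord agent pod
# (names and ages below are illustrative, not from the actual demo cluster)
kubectl get pods
# NAME                          READY   STATUS    RESTARTS   AGE
# weather-app-7d9c5b6f4-xv2qk   1/1     Running   0          3d
# mirrord-agent-ghzk2           1/1     Running   0          56s
```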
Now, this is what the application in production looks like.
Now, let's try to run this same thing.
The good thing is you can actually use the same URL for the project, and
it will still render the same response.
So now we have the version that actually says KCD Accra demo project, and
if you try to also check the weather, it should come out in purple, which
is the color we indicated, right?
So that is exactly what you're going to get: regardless of how complex
the project you're working on is, this is how fast mirrord works.
And the good thing is mirrord does not pick and choose technologies or
languages to work with; whatever technology you're using for your project,
mirrord is going to work regardless, and it is just as fast.
And, basically, that is how mirrord works.
And another amazing thing with mirrord is that once you're done with this
whole process, you can actually see that the mirrord agent will
automatically terminate itself.
Once you've completed the debugging and you disable mirrord, it
automatically disables itself and leaves the cluster without any
interruptions or doing any further damage to the cluster at all.
Let's see how that goes.
While this is even going on, nothing is affecting the application in
production.
And let me show you what I mean: while this works, the application itself
in production is not affected.
So I'm going to, first of all, end this debugging session and then disable
mirrord.
And if you try to reload this, it's going to go back to the original state
of the application as it is in production.
Now, let's see if the port still, the MIRDI that, the MIRDI agent still exists.
Now the meridi agent no longer exists here because we have already ended the whole
debugging session and it is completely Terminated right now one exciting
thing or the it has meridi has a lot of exciting things, by the way But like one
exciting thing about meridi is that you don't actually need root access to use
meridi It is easy to get started with you already saw how we you can install medi
and in a matter of seconds you're already working with the whole projects, right?
And, medi is not invasive to like your remote cluster, so it just
attaches itself to the specific port.
And once you're done with the whole debugging session, it's destroyed
immediately.
Within a matter of 15 to 30 seconds, mirrord is already started up, so
you're not wasting a lot of time loading things or downloading anything.
All you need is to go to your IDE and install it; that's all that you need
to get started with mirrord.
And you can run multiple processes at once, right, each connected to a
different remote pod; you're not even limited to just one specific pod at
a time.
You can get all of that with mirrord.
And it also does not care how your cluster is set up at the end of the day.
So whether you have a service mesh or you're using a VPN or anything else,
mirrord will still work regardless.
And like I said, we have a configuration file that you can use to
customize things to your project's needs at the end of the day.
And so if you have any questions or you want to learn more about mirrord,
we have a QR code here that I would say you should scan; this will take
you to the mirrord website so you can find out more about mirrord.
But you can also just type mirrord.dev and get to the website.
If you want to check out the repository, mirrord is open source, so you
should feel free to star the project and appreciate the developers for
doing such an awesome project.
Thank you.
You can star mirrord already.
If you want to contribute to mirrord as an open source project as well,
the Discord community is really welcoming; we're able to support you in
any way we can to make your contribution experience really smooth.
So you can also check out the Discord and then say hi.
Once you do, just say hi, that you're joining from KCD Accra, and I will
indicate that you're welcome and give you any resources that you need to
get started.
If you also just want to find out more about mirrord and chat about how
you can use it in your development workflow as well, feel free to reach
out to me via my email or any of my social media platforms at Anita
Ihuman.
Thank you very much.