Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello. I am Proxy Humre, and I am currently an independent developer advocate. I am also recognized as a Google Developer Expert for Google Cloud. Today we are going to talk about what a sidecarless service mesh is, what ambient mesh is, what Istio is, and what we do with them. So let's talk about who I am. I'm a developer advocate, I have been recognized as a Google Developer Expert, and I run my own podcast named Cloud Native Talks, which is for people who are into cloud native and DevOps and want to learn about those topics. I sometimes speak about developer relations as well. I have been part of various communities, such as DevOps and developer relations committees, and I run and contribute to a lot of communities. So if you want to connect with me, or if you want to ask me any doubts around these topics, you can reach out to me on my social handles.
So let's move forward. As you can see, a common question everyone has is: why is a service mesh actually needed, and what is the main theme here? There are some challenges with microservices. If you have multiple services, how will they interact with each other? You want to work on the services, and you want to pass connections from one endpoint to another endpoint, so how will you make that secure? Then there are a lot of problems like resiliency, timeouts and communication failures. These are some of the areas where I think a service mesh is needed, although, yes, you can also handle them with the traditional approach.
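To make that concrete, here is a minimal sketch of how a mesh like Istio lets you declare timeouts and retries as configuration instead of application code. The service name service-b and the values are placeholders, not something shown in the talk.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b                 # hypothetical in-cluster service
  http:
  - route:
    - destination:
        host: service-b
    timeout: 3s               # fail fast instead of hanging on a slow endpoint
    retries:
      attempts: 3
      perTryTimeout: 1s
      retryOn: 5xx,connect-failure
```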
So what is actually the problem? It is really complex, and there are a lot of moving pieces, right? And how will you control the traffic? These things don't happen in the traditional approach, and that's why the service mesh was introduced. A service mesh is nothing but a programmable framework which allows you to observe things, observe the interactions, and secure the transport of packets from one endpoint to another. It also connects the various microservices: you have many different microservices across different applications in your cluster, or inside a single application, so how will you handle that? That's why the service mesh is there.
So what happened before the service mesh? Let's take an example: you have service A and service B, which is nothing but, say, a web page, right? You want to transfer some traffic from one service to another service, so you go with the traditional approach and wire it up directly. But developers had to handle all of this themselves: the communication between the services, the traffic, everything, and they don't love that, right? Before the service mesh, they were writing the code and maintaining all of that plumbing alongside the business logic, so it was really not feasible, and that's why the service mesh was introduced.
It is like the operations team saying: you handle this part and we will handle that part. So the thing is, now developers don't need to focus on anything beyond the business logic in their code. They just write their code, and the other things will be managed by the service mesh. The service mesh will manage everything: whatever resources you want to add, whatever traffic you want to manage, it will all be handled by the service mesh. So there are some end goals for a service mesh, right?
First, service discovery and load balancing: you have to scale your application across many different servers, and the service mesh will help you with that. You might say Kubernetes already gives you the same thing, but it is quite different. Then there is secure service-to-service communication; of course, that is the mutual TLS connection. Then there is traffic control, shaping and shifting: whatever you transfer from one service to another, the traffic is managed accordingly, and you can add new policies and intention-based access control. Then you have traffic metric collection: you can observe things, observability is built in, and you can make your business resilient by putting resilience processes in place; you have generally seen that in incident management and similar situations, right? Then there is a programmable API interface, which is nothing but the APIs to connect all of the gateways, ingress and a lot of other things.
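As a rough illustration of the traffic shaping and shifting goal above, this is what a weighted 90/10 split between two versions of a service can look like in Istio. The reviews service and version labels are placeholders borrowed from the Bookinfo sample, not something shown in the talk.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:                    # group pods by their version label
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90              # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10              # 10% is shifted to v2
```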
There are also some platform goals for the service mesh. As you can see, at the application level there is routing, filtering and transformation, which we talked about. On the observability side, you can collect the metrics, capture the traffic and work on it; you can work with the logs, observe things, and introduce new integrations into your observability stack, so everything is present at a single point. Resiliency is also one of the main factors for the business, and the mesh enables it through traffic resiliency, failover and high availability, so you don't have to wait for your next instance to become available; it will be handled by the service mesh. Then there is zero trust security, which is one of the important parts, right? Zero trust security is nothing but this: you don't just send traffic, you first authenticate everything and only then do you get permission. For secure communication between the microservices, you manage the mutual TLS connection, you manage the certificates and do certificate rotations, and you enforce the proper security policies and filters; only then does traffic go forward. If you have any doubts, you can ask me anytime.
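A minimal sketch of that zero trust idea in Istio terms: a mesh-wide PeerAuthentication that only accepts mutual TLS traffic. This is standard Istio configuration, though the talk does not show it explicitly.

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # applying it here makes the policy mesh-wide
spec:
  mtls:
    mode: STRICT             # plaintext connections between workloads are rejected
```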
So now, what actually is the Istio service mesh, right? In the Istio service mesh, if a client sitting outside of the cluster sends something, it is captured by the gateway, and the gateway sends it on to service A or service B. Inside the service, what happens? There is a sidecar proxy, right? The sidecar proxy handles that traffic: if application A wants to send some information to application B, it goes to application A's sidecar proxy, that sidecar proxy sends it to the sidecar proxy of application B, and application B acknowledges it. But everything is managed by the component sitting outside, which is the istiod control plane; Kubernetes provides it with the service discovery information, mesh configuration and so on. Istiod is the one that collects the metrics, provides the configurations and generates the certificates, and whatever metrics are generated are produced by the sidecar proxy. So what actually is the sidecar proxy? It is nothing but an Envoy proxy, right? Whatever a service sends out, the outbound traffic goes through the proxy, and that proxy is nothing but Envoy.
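For reference, the gateway-to-service path described above is usually configured with an Istio Gateway plus a VirtualService along these lines; the hostname and service names are hypothetical.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway     # the default Istio ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "app.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-routes
spec:
  hosts:
  - "app.example.com"
  gateways:
  - app-gateway               # bind the routes to the gateway above
  http:
  - match:
    - uri:
        prefix: /a
    route:
    - destination:
        host: service-a       # hypothetical backing services
  - route:
    - destination:
        host: service-b
```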
So how does Istio work with sidecars today? As you can see, there are service A and service B, which we talked about. Every application pod is deployed with a proxy, which is the Envoy proxy, right? This entire thing is known as the data plane. The proxy is a sidecar of our application, so all L4 and L7 traffic goes through the sidecar proxy. L4 and L7 are your layer 4 and layer 7 capabilities, right? As we move forward, you can see the istiod control plane. The control plane is what pushes the policies, filters and proxy configuration, everything. It generates the certificates and keeps track of your entire service mesh, and it is the one that collects the metrics and acts on them, right? The control plane is separate from your data plane, but it is the one that does all of this.
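In practice, the sidecar model described here is usually enabled by labeling a namespace for automatic injection; a quick sketch, with a hypothetical namespace name.

```sh
# Turn on automatic Envoy sidecar injection for the namespace.
kubectl label namespace demo istio-injection=enabled

# Pods created after this point show an extra istio-proxy container alongside the app.
kubectl get pods -n demo
```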
So we already talked about this. In Istio's sidecar model, you have the proxy sitting with the application, and that work is done by the Envoy proxy, right? But it gets really complex in large enterprise service mesh deployments. Let's say your bar.solo.io service wants to talk to the wine.solo.io service: it has to go through a top-level egress, then from the ingress of the Istio service out through the top-level egress of foo, and then on to wine. It is a really complex structure; it looks easy at an overview level, but it is really complex.
There are a lot of challenges in the sidecar model: proxy overhead, cost, operational complexity, and performance issues. Well, not exactly issues, but performance is comparatively lower, right? Whenever you deploy any application on Istio, and I'm just taking the example of the Bookinfo sample deployment that anyone who knows Istio has tried at some point, you can see that one extra container is attached every time, which is the sidecar, right? That inflates a lot of things, starting with your cost. So one of the challenges is operational complexity: as you can see, you have to move through a lot of steps, as I showed you. Then there is mesh awareness, which is nothing but this: the application is not aware of your service mesh, and the service mesh is not aware of what is happening inside the application. Another thing is latency: traffic has to go from one application to its sidecar, then that sidecar goes to the other sidecar, and that sidecar goes to application B, so that increases the latency for sure. Then there is cost: of course, you have a lot of sidecars attached to all the different pods and containers, so of course it will add to your cost.
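If you want to see that extra container for yourself, the Bookinfo sample mentioned above makes it easy; a quick sketch, assuming an injection-enabled namespace and the samples directory from an Istio release archive.

```sh
# Deploy the Bookinfo sample application.
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Each pod reports two containers: the application and the injected Envoy sidecar.
kubectl get pods
```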
So what is the future of the service mesh? The future of the service mesh will always revolve around the data plane; as you can see, the data plane is where the innovation continues to happen. One direction is extending the data plane: with WebAssembly you can extend the HTTP processing, and you can also do configurations around GraphQL and the like, right? The other direction is optimizing the data plane, which is nothing but eBPF, where you can run code at the kernel level.
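As a rough sketch of the WebAssembly extension point mentioned above, Istio exposes a WasmPlugin resource that loads a filter into the data plane; the image URL, labels and config here are purely hypothetical.

```yaml
apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: example-http-filter
  namespace: demo                             # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: service-a                          # hypothetical workload label
  url: oci://ghcr.io/example/http-filter:v1   # hypothetical plugin image
  phase: AUTHN                                # run before Istio's own auth filters
  pluginConfig:
    header: x-example                         # passed to the Wasm module as configuration
```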
So, introducing ambient mesh: it is a new open source contribution launched last year as a collaboration between Solo.io and Google, along with a lot of contributors around the world, and they are doing really nice work. As you can see, it brings cost reduction, it simplifies operations, and performance is improved. It also has a Rust-based proxy.
So what does it actually do? It slices the layers, so you have a secure overlay layer and an L7 processing layer. Whatever you want to do, you don't have to configure everything up front; you can configure it according to your needs. If you want TCP routing and tunneling, that is provided by default; you only have to add new policies for the L7 things. Observability without Istio sidecars works like this: previously, whenever you had to scrape any metrics, you had to go through the sidecars. You have two pods, right? Inside your node there are two pods, and in each pod's containers the sidecar is the one that provides the application metrics to Prometheus, right? So were we running all of that just for metrics? Was that really a nice way to work?
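For reference, the usual way to get those mesh metrics into Prometheus with stock Istio is the bundled addon; a quick sketch, assuming the samples directory from an Istio release.

```sh
# Install the demo Prometheus shipped with the Istio release and open its UI.
kubectl apply -f samples/addons/prometheus.yaml
istioctl dashboard prometheus
```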
So as you can see, what happens if we use ambient mesh? Previously, the Istio sidecar proxy was the thing that exposed the metrics to Prometheus; now you don't have to worry about that, they are provided directly. So what actually is ambient mesh? It doesn't have any sidecars attached. Previously you had a proxy per pod, right? Every pod had its own proxy. Now you have one proxy for the entire node, so as you can see the sidecars are reduced, and therefore your cost is reduced. That is how Istio ambient mesh cuts costs directly: as I said, in the Istio sidecar data plane you had one proxy per pod, and now it is one proxy per node, which is the biggest thing you can do. There is a nice blog written by Solo.io which talks about what ambient mesh means for your wallet. As you can see, it reduces CPU time, it reduces memory and a lot of other things, and it reduces them really substantially, right?
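A minimal sketch of trying ambient mesh out, assuming a recent Istio release (exact flags can differ between versions) and a hypothetical namespace called demo.

```sh
# Install Istio with the ambient profile: istiod plus a per-node ztunnel DaemonSet, no sidecars.
istioctl install --set profile=ambient

# Enroll a namespace into the ambient data plane; existing pods do not need to be restarted.
kubectl label namespace demo istio.io/dataplane-mode=ambient
```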
As we move forward, you can see there are a lot of advantages to using ambient mesh. It reduces latency, among other things, right? It reduces cost, since it is one proxy per node, and you get a multi-tenant proxy, which helps with a lot of things; if you have come across platform engineering, this is one of the big topics nowadays and people talk about it a lot. Then there is the lightweight L4 proxy implementation. Then there is simplified operations: previously, any time you launched something new or had to attach some resource to your application, you had to restart your pods in older versions of Istio. Now that doesn't happen: the Istio ambient mesh data plane is something you can add to your configuration directly, so you don't have to worry about restarting your pods. If you attach it directly, you don't have to restart anything; it just runs from there. That makes a lot of the work easier, and you won't have to face that downtime and so on.
It simplifies adding new applications, it simplifies application updates, and it decouples the proxy into the L4 and L7 layers, right? You can also use eBPF to accelerate it, and on the zero trust security side there is more that can be done in the future. You can see that it provides L4 metrics and mutual TLS out of the box. Here you can also bring in eBPF, in the sense that it can take care of a lot of this for you: you don't have to worry about the sidecar, and the sidecar can even be replaced by eBPF technology.
So there are a lot of resources to learn about ambient mesh; you can go through these blogs and learn from them. One of the things I would like to discuss is how it makes a lot of things easier. Let's say you want service-to-service communication with some policy attached in the ambient mesh. Previously, you had to attach those things to the sidecar proxy, and then application would talk to application. Now it doesn't have that: the entire node has one proxy, and that proxy talks to every application. How beautiful is that? There is one proxy, which is known as ztunnel, and that ztunnel talks to application A, application B, anything, right? So it reduces your complexity, it reduces your cost, and it does a lot of other things for you. You don't have to worry about the L4 things: it comes with L4 capabilities by default, and it comes with mutual TLS by default.
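As an example of the L4, identity-based security that ztunnel can enforce by default, an AuthorizationPolicy like the following only relies on the workload identity from mutual TLS; the namespace, labels and service account are hypothetical.

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo                   # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: backend                  # hypothetical workload
  action: ALLOW
  rules:
  - from:
    - source:
        # Identity comes from the mutual TLS certificate, so this can be
        # enforced at L4 without any L7 processing.
        principals: ["cluster.local/ns/demo/sa/frontend"]
```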
Another thing you can get is L7, right? If you want L7 capabilities, if you want to handle failures and a lot of other things, you just have to add a policy through configuration, and once you do that you can work on those things directly; whatever you want to introduce, you can do that.
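When you do need those L7 capabilities, ambient mesh adds a waypoint proxy for the namespace or service account, and L7 policies are then attached to it. The exact resource shape has changed across Istio releases, so treat this as a rough sketch rather than the configuration from the workshop; the namespace is hypothetical.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: demo                    # hypothetical namespace enrolled in ambient mode
  labels:
    istio.io/waypoint-for: service
spec:
  gatewayClassName: istio-waypoint   # tells Istio to run this Gateway as a waypoint proxy
  listeners:
  - name: mesh
    port: 15008                      # HBONE tunnel port used inside the mesh
    protocol: HBONE
```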
All of these topics are covered in one of the workshops provided for Istio ambient mesh; you can go to the Solo Academy and learn about the same. There are a lot of resources around this, and if you want to ask any doubts, you can connect with me. So thank you for joining; I hope you learned a lot of things around these topics. Thank you.