Transcript
            
            
            
            
            
            
Hello, I am Federica Chufon. I'm a specialist solutions architect for container technologies at Amazon Web Services. Today I'm going to talk about how the Kubernetes ecosystem has evolved to answer more complex application patterns from a networking standpoint. So let's dive right into it.
            
            
            
Let's start with the easiest pattern, which is the monolith. How do we handle internal calls and client traffic within the monolith? Well, inside a monolith the applications live on the same physical or virtual machine, so we use the loopback interface for internal communication. And how do we handle client traffic? Well, if you want to be fancy, you could use a load balancer or a proxy.
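To make the loopback idea concrete, here is a minimal sketch of two modules of a monolith talking over 127.0.0.1; the "billing" module and port 8080 are hypothetical.

```python
# Sketch: intra-monolith communication over the loopback interface.
# The "billing" module and port 8080 are hypothetical.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BillingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"invoice: ok")

# One module listens on 127.0.0.1, so traffic never leaves the machine...
server = HTTPServer(("127.0.0.1", 8080), BillingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...and another module on the same host calls it via loopback.
print(urllib.request.urlopen("http://127.0.0.1:8080/invoices").read())
server.shutdown()
```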
            
            
            
But this pattern, the monolithic one, is rarely used anymore, for a number of reasons: reducing dependencies between applications, enabling developer flexibility, reducing blast radius, and decreasing time to market. So a very common pattern is to break these monoliths down into microservices. In particular, you can containerize those services and manage them with, for example, Kubernetes.
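As a sketch of what that containerization step looks like on the Kubernetes side, here is a minimal Deployment and Service pair built as Python dicts and dumped to YAML for kubectl; the app name, image, and ports are hypothetical.

```python
# Sketch: a containerized microservice as Kubernetes manifests.
# Name, image, and ports are hypothetical placeholders.
import yaml  # PyYAML

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {"containers": [{"name": "orders",
                                     "image": "registry.example.com/orders:1.0",
                                     "ports": [{"containerPort": 8080}]}]},
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "orders"},
    "spec": {"selector": {"app": "orders"},
             "ports": [{"port": 80, "targetPort": 8080}]},
}

# Print both documents; pipe the output to `kubectl apply -f -`.
print(yaml.safe_dump_all([deployment, service]))
```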
            
            
            
So in the Kubernetes context, how do we handle service communication? Well, when you containerize your application and put it into a Kubernetes cluster, your applications talk to each other thanks to the Container Network Interface (CNI). Some examples include the Amazon VPC CNI, Cilium, and Calico.
            
            
            
Client traffic can also be handled by load balancers. But when we are talking about application traffic, the Kubernetes ecosystem needs something more, and this is the Ingress resource, which is deployed into your cluster, together with an application called the ingress controller, which picks up the creation, update, or deletion of Ingress objects and automatically creates, updates, or deletes load balancers.
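Here is a minimal sketch of such an Ingress object, again as a Python dict dumped to YAML; the host, path, and backend service are hypothetical, and the ingressClassName depends on which controller you run.

```python
# Sketch: an Ingress that an ingress controller (for example the AWS Load
# Balancer Controller) reconciles into a load balancer. Names are hypothetical.
import yaml

ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "orders"},
    "spec": {
        "ingressClassName": "alb",  # assumption: depends on your controller
        "rules": [{
            "host": "orders.example.com",
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",
                "backend": {"service": {"name": "orders",
                                        "port": {"number": 80}}},
            }]},
        }],
    },
}
print(yaml.safe_dump(ingress))
```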
            
            
            
Well, of course, breaking down your application into microservices is only the first step. The pattern that we see with our medium and large-scale customers is to further break down applications into different clusters or different areas of the cloud, to gain flexibility and for separation of duties and security.
            
            
            
So let's recap this a little bit. You have multiple microservices which are spread across different clusters and different areas of the cloud, and those are usually written in different programming languages. In this context, how do we handle service-to-service communication and client traffic? Well, we need to take into account some challenges.
            
            
            
First of all, how do we manage network connectivity and traffic routing? How do we obtain visibility into service-to-service communication, that is, how do we manage the observability of our systems? And how do we ensure trust through automated security and compliance in our systems and applications? Also, tied to the modernization of our application patterns and architectures, we need to modernize our organization and really create a culture of innovation by organizing our teams into small DevOps teams.
            
            
            
Those objectives were taken into account by the Kubernetes ecosystem when building more advanced offerings and answers to application networking in Kubernetes. Let me introduce a couple of concepts from Amazon Web Services. The first one is Amazon EKS, the Elastic Kubernetes Service. This is a managed Kubernetes offering from AWS in which we manage the control plane of a Kubernetes cluster for you. In the slide you can also see Amazon VPC, the Amazon Virtual Private Cloud. It's basically a chunk of the AWS network that we give to you and that you manage completely. Resources like Amazon EKS clusters live in an Amazon VPC.
            
            
            
So again, how do we set up and manage ingress and service-to-service communication while guaranteeing agility, control, visibility, and security? Well, you could again use load balancers, but planning for scale, you don't really want to provision and manage one load balancer per service. There are techniques with ingress controllers that enable you to share a load balancer across multiple applications and multiple backends, but still, when your application scales massively or when you have stricter requirements, you may need to adapt.
            
            
            
Let's recap. We have seen how networking is handled in monolithic applications, and how Kubernetes answers containerization with CNIs and ingress controllers. So let's now talk about how microservices are handled by the Kubernetes ecosystem, and how the networking offering has evolved to answer the problems of having a distributed system. Well, the Kubernetes community said: okay, proxies are really useful for managing networking in general, so why don't we use them within a Kubernetes cluster as well?
            
            
            
In fact, there is a pattern in which we put this proxy as a sidecar within the same pod, next to the container which handles the application, and we put one in each pod that we have in the cluster. This pattern is known as the sidecar proxy pattern, and it is basically a way to decouple the networking layer from the application layer. All the traffic that needs to go into the application pod or container, or that originates from the application container, passes through the proxy.
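Here is a hand-wired sketch of that shape: a pod with the application container plus an Envoy sidecar. In a real mesh the sidecar and its configuration are injected automatically, so treat the images, config names, and paths as hypothetical.

```python
# Sketch: the sidecar proxy pattern as a single pod with two containers.
# In practice a mesh injects this; images and config names are hypothetical.
import yaml

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders", "labels": {"app": "orders"}},
    "spec": {
        "containers": [
            {   # the application container
                "name": "app",
                "image": "registry.example.com/orders:1.0",
                "ports": [{"containerPort": 8080}],
            },
            {   # the sidecar proxy; traffic in and out passes through it
                "name": "envoy",
                "image": "envoyproxy/envoy:v1.28-latest",
                "args": ["-c", "/etc/envoy/envoy.yaml"],
                "volumeMounts": [{"name": "envoy-config",
                                  "mountPath": "/etc/envoy"}],
            },
        ],
        "volumes": [{"name": "envoy-config",
                     "configMap": {"name": "orders-envoy-config"}}],
    },
}
print(yaml.safe_dump(pod))
```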
            
            
            
This enables us to have, let's say, more flexibility between our application teams. Say that we have a team that manages a front end which needs to talk with a team that manages a back end. When the back-end team releases a new version, we would normally need to change the application code in the front end to point to the new back end. Well, this is not needed anymore if we are using a proxy as a sidecar, because we can use the proxy to point to the new application, so inconsistencies are minimized. We are separating, let's say, business logic from operations, and operations such as installs, upgrades, et cetera become a little bit easier.
            
            
            
And, well, again, one of our objectives was to make our applications observable. Now we have a unified layer, the proxy, that manages our service-to-service communication and gives us logs that are in the same format and produced in the same way, because it's actually the same proxy next to each container of our application. This gives us a layer of observability that is very important when we are managing sidecars and applications at scale. But again, we are talking about scale.
            
            
            
How do we manage all of these sidecars? Well, much like we manage our applications in a Kubernetes cluster with a control plane, think about having a control plane for our sidecar proxies as well. The control plane, plus the data plane, which is the sidecar proxies themselves, is a service mesh. On the left side of the slide you can see some examples of service meshes which are commonly used by our customers.
            
            
            
Let's dive a little bit deeper into use cases. Obviously, traffic routing is something that we want to manage with a service mesh, for example routing from a version one to a version two. But there are also some advanced features for routing, like matching on path prefixes, query parameters, or HTTP headers.
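As an illustration, here is a sketch of an Istio VirtualService that sends requests carrying a specific header to version two and everything else to version one; the host, subset names, and header are hypothetical, and the subsets would be defined in a matching DestinationRule.

```python
# Sketch: header-based routing between two versions with an Istio
# VirtualService. Host, subsets, and the header name are hypothetical.
import yaml

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "orders"},
    "spec": {
        "hosts": ["orders"],
        "http": [
            {   # requests with x-canary: true go to v2
                "match": [{"headers": {"x-canary": {"exact": "true"}}}],
                "route": [{"destination": {"host": "orders", "subset": "v2"}}],
            },
            {   # everything else stays on v1
                "route": [{"destination": {"host": "orders", "subset": "v1"}}],
            },
        ],
    },
}
print(yaml.safe_dump(virtual_service))
```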
            
            
            
Let's say that we also want to protect our application from large spikes in traffic and assure a good level of service. Well, the service mesh enables us to implement automated retries, circuit breaking, and rate limiting in our applications.
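A sketch of what that can look like in Istio: connection limits and circuit breaking through outlier detection on a DestinationRule, plus a retry policy that would sit on a VirtualService route. All names and thresholds here are hypothetical.

```python
# Sketch: resilience settings in Istio. Circuit breaking (outlier detection)
# and connection limits live on a DestinationRule; retries live on the
# VirtualService route. All numbers are hypothetical.
import yaml

destination_rule = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "DestinationRule",
    "metadata": {"name": "orders"},
    "spec": {
        "host": "orders",
        "trafficPolicy": {
            "connectionPool": {"tcp": {"maxConnections": 100}},
            "outlierDetection": {          # eject hosts that keep failing
                "consecutive5xxErrors": 5,
                "interval": "30s",
                "baseEjectionTime": "30s",
                "maxEjectionPercent": 50,
            },
        },
    },
}

# A retry policy like this would go under http[].retries in a VirtualService.
retry_policy = {"attempts": 3, "perTryTimeout": "2s",
                "retryOn": "5xx,connect-failure"}

print(yaml.safe_dump(destination_rule))
```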
            
            
            
Security is also one of our objectives, and you can enforce whether your applications are allowed to talk among themselves or to third-party APIs. You can also enforce that communications are encrypted, and so enforce TLS at the proxy level, for example, rather than at the application level. This can also integrate with third-party services to generate and renew certificates. And not only TLS: we can also enforce service-to-service authentication with mutual TLS.
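In Istio, for example, mesh-wide mutual TLS can be enforced with a single PeerAuthentication resource. A minimal sketch, assuming the control plane lives in the istio-system namespace:

```python
# Sketch: enforce mutual TLS for the whole mesh with Istio.
# Applying it in istio-system with the name "default" makes it mesh-wide.
import yaml

peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "istio-system"},
    "spec": {"mtls": {"mode": "STRICT"}},  # reject plaintext traffic
}
print(yaml.safe_dump(peer_auth))
```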
            
            
            
Another use case is observability. We talked about it a little bit already, but basically, with the layer of observability that I gain, I can also implement tracing within the proxy to understand upstream and downstream dependencies, understand where the bottlenecks are, identify patterns, and understand service performance.
            
            
            
Another common use case is multi-cluster: connecting together different clusters, which may also be placed in different VPCs or different accounts, where the communication can be granted by implementing a service mesh in the correct way.
            
            
            
One common proxy that is used within meshes is Envoy, which is a highly performant, cloud-native proxy. It's very efficient, it has a really small footprint, and it handles client-side load balancing, retries, timeouts, and rate limiting. It gives you observability into layer 7 traffic, it supports different protocols, for example HTTP, HTTP/2, gRPC, and TCP, and it has rich and robust APIs for configuration.
            
            
            
One service mesh that uses Envoy proxies is Istio. You can install Istio into your cluster using istioctl or the Istio operator, and create and manage its components, like for example the mesh, virtual services, gateways, et cetera, through istioctl and the Kubernetes APIs. There are different supported platforms on Amazon Web Services: you can configure and run your mesh on Amazon EKS, Amazon EKS on Fargate, and self-managed Kubernetes on Amazon EC2. It also integrates with Kiali, which is a console for the Istio service mesh that you can use to configure, visualize, validate, and troubleshoot your mesh.
            
            
            
Let's see how Istio deploys into our cluster. We have an EKS cluster, and we want service A and service B to be able to communicate. When we deploy Istio, we are actually deploying the data plane, which is, again, the proxies alongside the service pods, and the control plane. The control plane is a set of pods (istiod), and it's not managed by AWS; it lives in the cluster as, again, a resource pod.
            
            
            
              challenges, but well,
            
            
            
              still some remain. We have
            
            
            
              a lot of now sidecar proxies that we have
            
            
            
              to deploy and maintain at a scale. This can be
            
            
            
              challenging. It only integrates
            
            
            
              with container based workloads. Let's say we are
            
            
            
              using say a serverless service like AWS lambda
            
            
            
              or Amazon EC two. This is not
            
            
            
              really supported. And also before even thinking
            
            
            
              about deploying a mesh, we need to have intervpc
            
            
            
              networking that we need to actually
            
            
            
              implement to grant connectivity across vpcs,
            
            
            
              for example, and to enforce security.
            
            
            
So how can we answer these challenges? Well, let me introduce the Kubernetes Gateway API. The Kubernetes Gateway API is a SIG Network project that took the lessons learned from the Ingress project and implemented a new way of doing networking in Kubernetes. There are different objects that belong to the Kubernetes Gateway API.
            
            
            
The first one is the GatewayClass. The GatewayClass enables me to formalize the type of load balancing implementation that I can use; in the Ingress world, for example, this would be the load balancer type. The Gateway specifies a point where traffic can be translated to Kubernetes services: it defines how, and where, I translate traffic coming from outside the cluster to inside the cluster. HTTPRoutes are rules for mapping requests from a Gateway to Kubernetes services. And the Services are basically the targets of our HTTPRoutes.
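Putting those objects together, here is a sketch of a Gateway and an HTTPRoute; the GatewayClass name, route name, path, and backend Service are hypothetical.

```python
# Sketch: Kubernetes Gateway API objects. The GatewayClass name, path, and
# backend Service are hypothetical placeholders.
import yaml

gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "Gateway",
    "metadata": {"name": "my-gateway"},
    "spec": {
        "gatewayClassName": "example-lb",   # which implementation to use
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}

http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "HTTPRoute",
    "metadata": {"name": "orders-route"},
    "spec": {
        "parentRefs": [{"name": "my-gateway"}],  # attach to the Gateway
        "rules": [{
            "matches": [{"path": {"type": "PathPrefix", "value": "/orders"}}],
            "backendRefs": [{"name": "orders", "port": 80}],
        }],
    },
}
print(yaml.safe_dump_all([gateway, http_route]))
```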
            
            
            
A nice feature of the Kubernetes Gateway API is that it is role-oriented, so we finally have a mapping between the Kubernetes objects that we have in our cluster and the roles within our organization. And just as we have different ingress controllers implementing the Ingress resource, there are different controllers for the Kubernetes Gateway API. You can recognize different icons that we have already seen in this presentation, like for example Istio, NGINX, and HAProxy, and a new one.
            
            
            
This is the Amazon VPC Lattice icon. Amazon VPC Lattice is a networking service, in preview at the time of this talk, that we announced at re:Invent 2022. Basically, it implements a layer 7 fabric within and across your application VPCs, so to say. We can enable service-to-service communication much like with a service mesh, but without the need of deploying and maintaining sidecar proxies. It works across all the compute options (EC2, EKS, ECS, Lambda), and it enables you to implement complex, let's say, security architectures, because it's really easy to grant and implement networking across VPCs and accounts, where before you needed, for example, to deploy transit gateways, peering connections, et cetera.
            
            
            
Let me give you an overview of the Amazon VPC Lattice components. We have the service network, which is basically a logical boundary that you can define across VPCs and accounts, and which is used to apply common access and observability policies. Then we have the service. A service is a unit of application, and it extends across all the compute options: instances, containers, serverless, et cetera. We then have the service directory, which is a centralized registry for the services that are registered in Amazon VPC Lattice. And then we have auth policies, which are IAM (Identity and Access Management) declarative policies to configure access, observability, and traffic management, and those can be applied at the service and/or service network level.
            
            
            
Amazon EKS supports Amazon VPC Lattice: we have built a controller so that, when we create Kubernetes Gateway API objects, the controller automatically creates, updates, and deletes the corresponding Lattice resources.
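Here is a sketch of what driving VPC Lattice through the Gateway API can look like; the controllerName and class name follow the AWS Gateway API Controller's documentation as I recall it, so treat them as assumptions.

```python
# Sketch: a GatewayClass/Gateway pair for the AWS Gateway API Controller,
# which maps the Gateway to a VPC Lattice service network. The
# controllerName and class name are assumptions based on the controller docs.
import yaml

gateway_class = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "GatewayClass",
    "metadata": {"name": "amazon-vpc-lattice"},
    "spec": {"controllerName":
             "application-networking.k8s.aws/gateway-api-controller"},
}

gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "Gateway",
    "metadata": {"name": "my-service-network"},  # becomes the service network
    "spec": {
        "gatewayClassName": "amazon-vpc-lattice",
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}
print(yaml.safe_dump_all([gateway_class, gateway]))
```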
            
            
            
Let's look a little bit deeper at how the controller works. We have, for example, an object like the Gateway which is deployed; once it is applied, the controller will automatically create a service network and associate the VPCs where the EKS clusters live with that service network. Now we have network connectivity between the two VPCs that you can see in the slide, much like if we had configured, for example, a transit gateway and attached the two VPCs to it.
            
            
            
Another thing that you need: now the connectivity and networking are set up, but we have not defined how the traffic flows. To do so, we need an HTTPRoute resource. Once applied, the HTTPRoute will automatically create Lattice services, which are, again, the targets, the units of application defined in Amazon VPC Lattice. In this case we have defined two services: one which is local to our cluster, and another service which we actually exported from one cluster and imported into the other. That is to say, let me grab the pen: this HTTPRoute is applied, for example, in the first cluster as a resource, and we have exported the service from the second cluster and imported it into this one, so that it is visible and can be picked up by the HTTPRoute.
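As a sketch, such an HTTPRoute can split traffic between a local Service and a ServiceImport representing the service exported from the other cluster; the ServiceImport kind and group follow the AWS controller's CRDs as an assumption, and all names and weights are hypothetical.

```python
# Sketch: an HTTPRoute splitting traffic between a local Service and a
# ServiceImport (the service exported from another cluster). The
# ServiceImport group is an assumption based on the AWS Gateway API
# Controller's CRDs; names and weights are hypothetical.
import yaml

http_route = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "HTTPRoute",
    "metadata": {"name": "orders-route"},
    "spec": {
        "parentRefs": [{"name": "my-service-network"}],  # the Gateway
        "rules": [{
            "backendRefs": [
                {   # service local to this cluster
                    "name": "orders-v1", "kind": "Service",
                    "port": 80, "weight": 90,
                },
                {   # service imported from the other cluster
                    "name": "orders-v2", "kind": "ServiceImport",
                    "group": "application-networking.k8s.aws",
                    "weight": 10,
                },
            ],
        }],
    },
}
print(yaml.safe_dump(http_route))
```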
            
            
            
Then there are other things: in the HTTPRoute we specify the Gateway which it is tied to, so basically we are associating our services with the service network. And we also specify how the traffic is redirected from the gateway to our targets, which in this case are these two services, one from the first cluster and one from the other cluster. So you can see how we have now granted, with a really easy setup, cross-cluster communication, without even needing to set up underlying networking components like Transit Gateway or VPC peering, and then deploying a mesh and deploying and maintaining a set of sidecar proxies.
            
            
            
Thank you. I hope this session has been useful for you to understand a little bit better the ecosystem that we have within Kubernetes, and in particular the answer and integration from Amazon Web Services to the Kubernetes ecosystem, in particular the Kubernetes Gateway API.