Conf42 Cloud Native 2023 - Online

Container and Kubernetes security policy design: 10 critical best practices

Abstract

Building a scalable Kubernetes security policy design requires that you adopt a fully cloud-native mindset that takes into account how your streamlined CI/CD process should enable security policy provisioning. In this session, we’ll cover the best practices for designing scalable security policies.

Summary

  • The presentation is divided into seven sections. It covers container and Kubernetes security policy design and ten critical best practices. To bring all those topics to life, I will demonstrate these concepts in action.
  • Project Calico offers a free and open source networking and network security solution for containers, virtual machines and native host based workloads. Project Calico will be attending Kubecon EU 2023, which is happening from April 18 to 21 in Amsterdam.
  • Application modernization is the process of updating or transforming existing legacy applications. It aims to make them more efficient, scalable and cloud native. This approach helps businesses to meet the demands of modern customers and remain competitive. But embarking on this journey is not an easy task.
  • Kubernetes has a built-in resource called network policy. Policy resources can be attached to namespaces and pods to block or permit traffic flows. Monitoring is a critical component of container security. By monitoring both the infrastructure and the application, IT teams can detect potential issues before they become critical.
  • Next I'm going to compile a simple hello world application into two container images with different base image layers. Scan the end result with the Tigera image scanner to spot any vulnerabilities that might be inside the container image.
  • We can change this by installing Tigera operator and Calico on our cluster. Next stop is network segmentation. Create a couple of security policy resources to secure communication between Calico and Kubernetes components.
  • Calico components are capable of exposing Prometheus metrics, however, this is disabled by the default configuration. For infrastructure monitoring, I'm going to use the node exporter helm chart that can gather vital information from my vms. Use the top link in this page to create the demo and check each step by yourself.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Let's start the session and talk about container and Kubernetes security policy design and ten critical best practices. This presentation is divided into seven sections. I'll start with a brief overview of Project Calico and what it is that we are doing at Tigera to help you in your cloud native journey. Then I will switch to application modernization and its relevance in the context of containers. Then I will delve into the topic of container security to highlight some key best practices to secure your containerized applications. Next, I will discuss network segmentation, which is a critical part of securing your Kubernetes cluster. To better understand the best practices part of the demo, I'm going to talk about policy enforcement and policy resources in Kubernetes. Finally, I will conclude with a section on monitoring: why it is necessary to have it and how it can help you detect and respond to security threats in real time. To bring all these topics to life, I will wrap up with a demo where I do my best to demonstrate these concepts in action. If you're new to Kubernetes or container networking, don't worry, I've got you covered. At the end of this presentation, you will find a link and a QR code to all the resources that you will need to recreate the demo in your own lab environment and brush up on any concepts that you might be unfamiliar with. This way, you can redo the demo part as many times as you like and gain a deeper understanding of the topics that it covers. So let's get started with Calico. Project Calico offers a pure layer 3 approach to virtual networking and security for highly scalable data centers. Calico is a free and open source networking and network security solution for containers, virtual machines and native host-based workloads. Calico supports multiple architectures and platforms such as x86 and ARM64, so you can basically install it on any environment. 
Today, Calico offers four pluggable data planes that can be switched on depending on your needs and environment. The standard data plane is based on iptables and provides fast networking, security and compatibility for all environments. The Calico eBPF data plane is another data plane created by Tigera, which uses the power of eBPF to provide blazing fast networking and security for your environment. The eBPF data plane offers capabilities such as complete kube-proxy replacement, source IP preservation and DSR. If you're running a hybrid environment, you can use Calico for Windows, which is based on Microsoft HNS and can deliver networking and security to your Windows nodes. VPP is the newest data plane for Calico, currently in its beta phase, which accelerates the networking experience by utilizing the power of user-space programming. In fact, eBPF and HNS are some of the foundational technologies that provide networking, security, observability, image assurance, and runtime threat defense in our enterprise solutions at Tigera. Project Calico is an inclusive, active community about cloud networking and security. Feel free to join our community using these social networking handles and drive the conversation where you see a need for a change, or seek help for your Calico adventure from developers who are actively working on the project. We're also excited to announce that Project Calico will be attending KubeCon EU 2023, which is happening from April 18 to 21 in Amsterdam. So come meet us at our booth, booth 28. We'll be there from 10:30 a.m. onwards to answer your questions or help you with your cloud journey. All right, so application modernization is the process of updating or transforming existing legacy applications to make them more efficient, scalable and cloud native. This approach helps businesses to meet the demands of modern customers and remain competitive. 
Modernization typically involves the replacement of older systems with new ones, the integration of new applications, and the development of new processes and workflows. Some common approaches to application modernization include refactoring code, replatforming, and rebuilding the infrastructure. Now you might be thinking, what is the motivation behind it? Well, legacy applications are often complex, outdated and difficult to maintain. They may have been built on older technology platforms that are no longer supported, or require manual processes that consume significant resources and manpower. As a result, they may have limited scalability, poor performance, and present security risks. To remain competitive, businesses need to embrace modernization; accelerating innovation with agile workflows, optimizing costs and improving security are some of the reasons. The ideal design for the cloud is to break up everything into microservices that can communicate with each other and your storage of choice. This distribution allows you to go beyond what was possible with the old technologies in terms of serviceability, scalability and reliability. While application modernization offers many benefits, embarking on this journey is not an easy task. It requires a significant investment in people, process and technology to achieve the desired business outcomes. The right foundation must be established from the beginning to avoid the high cost of rearchitecture, which can be a major roadblock in achieving success. One of the key benefits of modernization is the ability to package applications and their dependencies into portable and lightweight container images that can be easily shared and distributed. By packaging your application and its dependencies in a container image, you can ensure that your application is isolated from the host system and other applications. This can help to reduce the risk of security breaches. 
When creating a container image, it is important to include only necessary components. This means removing unnecessary files, libraries, and dependencies that are not required for your application to run. And finally, when you package your applications, scan them for vulnerabilities. There are several tools available that can help you to scan your container images for vulnerabilities and identify any potential security issues, and in the demo part of this presentation, I will show you how to use the Tigera scanner to scan your images for vulnerabilities. The choice between using a public or private image registry depends on an organization's specific needs and requirements. Public image registries can be a convenient and cost effective option for smaller projects or open source software, while private image registries may be necessary for larger enterprises or those with specific security or regulatory requirements. For example, government agencies and banks prefer private image repositories for several reasons related to security, compliance and data protection, since they deal with sensitive information that must be kept confidential. Private image repositories allow these organizations to keep their software assets and intellectual property within their own secure infrastructure, reducing the risk of leaks or unauthorized use. After your container is created and stored in a registry of choice, you need to provide isolation on the networking level for it. Kubernetes network segmentation is the process of dividing a Kubernetes cluster's network into smaller, isolated segments. This can be achieved by using network policies to restrict communication between different pods, services, or namespaces within the cluster. 
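To illustrate the "only necessary components" principle, here is a minimal multi-stage Dockerfile sketch. It assumes a statically compilable application (Go is used as an example); the stage names and paths are hypothetical, not taken from the demo itself:

```dockerfile
# Build stage: compile a statically linked binary (Go shown as an example)
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a static binary that needs no libc at runtime
RUN CGO_ENABLED=0 go build -o /hello .

# Final stage: scratch contains no shell, package manager, or libraries,
# so there is almost nothing for a scanner to flag
FROM scratch
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]
```

Because the final image carries only the binary itself, its size and vulnerability surface are both dramatically smaller than an image built on a general-purpose base like ubuntu:latest.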
Network segmentation is important for security purposes as it helps to limit the potential impact of a security breach. By segmenting the network, an attacker who gains access to one part of the network will find it more difficult to move laterally and access other parts of the cluster. If you're using a capable CNI such as Calico, you can also add RBAC roles to your segmentation process. All right, so let's dig deeper into the realm of security and networking. Kubernetes has a built-in resource called Kubernetes network policy that can shape the security posture of your cloud native environment. However, just like networking, Kubernetes doesn't enforce these policies on its own; it delegates the responsibility to the CNI plugins. So it is vital to use a CNI that offers such capability to secure your environment. These policy resources can be attached to namespaces and pods to block or permit a traffic flow. While these policy resources are a great tool to secure your cluster, they have some limitations that might be a problem down the road. These policy resources don't have explicit action attributes, which can cause a bit of a problem in massive clusters. You also cannot write node-specific policies, and you don't have policies that can affect the cluster as a whole. Similar to Kubernetes, your CNI of choice might offer security resources. For example, Calico offers two sets of security resources that can be used alone or with Kubernetes policy resources to further lock down your cluster and bring security to your environment. These two resources are the Calico network policy, which is a security resource that can be applied in namespaces, and the global network policy, which can be applied to the cluster as a whole. On top of that, Calico provides a host endpoint policy resource that can be used to secure non-namespaced resources such as host processes and host network cards. 
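The contrast between the two policy families can be sketched in YAML. The first manifest is a built-in Kubernetes NetworkPolicy (note there is no action field; anything not matched is simply denied once a policy selects the pod), the second a Calico GlobalNetworkPolicy with an explicit Allow action that applies cluster-wide. The namespace, labels, and ports here are illustrative assumptions, not the exact policies from the demo:

```yaml
# Built-in resource: deny all traffic in one namespace, except DNS egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-except-dns
  namespace: default            # namespaced: applies only here
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
---
# Calico resource: explicit action attribute, cluster-wide scope
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-dns-only          # hypothetical name
spec:
  selector: all()               # affects the cluster as a whole
  types: ["Ingress", "Egress"]
  egress:
    - action: Allow
      protocol: UDP
      destination:
        selector: k8s-app == "kube-dns"
        ports: [53]
```

The explicit action attribute and the all() selector are exactly the capabilities the built-in resource lacks, which is why the two are often combined.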
Monitoring is a critical component of container security because it allows administrators to detect and respond to security threats in real time. Containerized applications are highly dynamic and distributed, with many different components running across multiple nodes, making them more difficult to monitor and secure. By monitoring containerized applications, administrators can gain visibility into the behavior of these applications, identify potential security issues, and take action to mitigate them. All right, so let's start with infrastructure monitoring. Containers rely on the underlying infrastructure, such as the host OS, network, and storage, to function properly. Infrastructure monitoring tools can help detect any issues or vulnerabilities in the infrastructure, such as resource utilization, network latency, and storage capacity, that could impact container performance and security. Next is application monitoring. Containers are used to deploy and run applications, and application monitoring tools can help detect any issues or vulnerabilities in the application code or dependencies, such as memory leaks, errors, and crashes, that could impact container performance and security. By monitoring both the infrastructure and the application, IT teams can gain a comprehensive understanding of the container environment and detect any potential issues before they become critical. All right, it's time for the demo. Let's start the demo by using Multipass to create the infrastructure environment. I'm going to instruct Multipass to create three VMs, one as a control plane and the other two as worker nodes. After completion, I'm going to use the multipass transfer command to move the kubeconfig file from the Multipass VM to my computer and use it to access my cluster. In order to use the kubeconfig file, I'm going to create an environment variable called KUBECONFIG with the path to the transferred file and replace the localhost IP address with the Multipass instance IP. 
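The Multipass steps described above can be sketched roughly as follows. The VM names, sizes, and the in-VM kubeconfig path are assumptions (the actual path depends on how the cluster was bootstrapped), so treat this as a starting point rather than the demo's exact commands:

```shell
# Create one control-plane VM and two workers (names and sizes illustrative)
multipass launch --name control-plane --cpus 2 --memory 4G --disk 20G
multipass launch --name worker1 --cpus 2 --memory 4G --disk 20G
multipass launch --name worker2 --cpus 2 --memory 4G --disk 20G

# Copy the kubeconfig out of the VM (path assumes a kubeadm-style setup
# with the config already readable by the default user)
multipass transfer control-plane:/home/ubuntu/.kube/config ./kubeconfig

# Point kubectl at the transferred file
export KUBECONFIG="$PWD/kubeconfig"

# Replace the in-VM localhost address with the VM's routable IP
VM_IP=$(multipass info control-plane | awk '/IPv4/ {print $2}')
sed -i "s/127\.0\.0\.1/${VM_IP}/" kubeconfig

kubectl get nodes
```

At this point the nodes should appear, though they will stay NotReady until a CNI is installed, as the demo shows next.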
Now I can use kubectl to access my cluster and change configurations. Next, I'm going to compile a simple hello world application into two container images with different base image layers and scan the end result with the Tigera image scanner to spot any vulnerabilities that might be inside the container image. Now that the compilation is done, let's take a closer look at the Dockerfile. As you can see, there is nothing special in here, and the only notable thing is the use of ubuntu:latest as the base image. Now let's create a container with the same application and a different base image. As you can see, this time I'm using the scratch layer to package my application. All right, before scanning, let's check if the image size is different. Seems like the scratch image is significantly smaller, but the image size is not the only difference here. Let's go ahead and scan the Ubuntu-based container with the Tigera image scanner. As you can see, there are two vulnerabilities in the hello world application that I just packaged. Let's scan the other one, and as you can see, there are no vulnerabilities, because the scratch image doesn't include any libraries other than the ones that we specifically add. All right, let's get back to our Kubernetes cluster. At the moment my cluster is not ready, since there is no CNI installed on it. We can change this by installing the Tigera operator and Calico on our cluster. If you recall, during the presentation I talked about public and private registries. Most manifests on the Internet are shipped with a public registry path to offer easier accessibility for everyone. For example, if we examine the Tigera operator manifest, it shows the quay.io public registry as the image storage location. So for the next part, I'm going to configure another Multipass VM as a private registry and push all the other components of Calico into it, so we can install everything from our private registry. 
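The pull-tag-push pattern for mirroring an image into a self-hosted registry looks roughly like this. The registry address and Calico version are placeholders, and the daemon.json note applies only when the registry uses a self-signed certificate, as in the demo:

```shell
REGISTRY=192.168.64.10:5000   # private registry VM address (hypothetical)
VERSION=v3.25.0               # Calico version (assumed; match your manifest)

# Pull Typha from the public registry, retag it for the private one, push
docker pull calico/typha:${VERSION}
docker tag  calico/typha:${VERSION} ${REGISTRY}/calico/typha:${VERSION}
docker push ${REGISTRY}/calico/typha:${VERSION}

# For a self-signed registry, Docker must be told to trust it first.
# In Docker Desktop this is the Docker Engine settings page; on Linux it
# is /etc/docker/daemon.json, e.g.:
#   { "insecure-registries": ["192.168.64.10:5000"] }
# followed by a restart of the Docker daemon.
```

The same three commands are then repeated for each remaining Calico component image.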
And to make my life a little bit easier, I'm going to extract the private registry IP address and the desired version of Calico and save them in two environment variables to prepare for the next part of this demo. Now let's pull Calico Typha, one of the components of Calico, into our local Docker, tag it for the private registry, and push it into our private registry. Since my private registry is configured with a self-signed certificate, I need to explicitly allow it inside my Docker settings. To do so, I'm going to head to the settings in my Docker Desktop, select Docker Engine, add the IP address of my private registry as an insecure registry, and apply and restart the Docker daemon after I'm done. This will take some seconds, but after it's done, we can easily push every image that we want into our private registry. All right, let's go ahead and check again. Perfect. The image is pushed to the private registry without any problem. All right, now let's push every other component of Calico into our private registry as well. Changing the image location that is used for each pod can happen in different ways. For example, the Calico installation resource, which is used to instruct the operator on how to install Calico, offers an attribute called registry which can change the default image registry that is used for the installation process. Let's use this value and install the remaining components of Calico from our private registry. Other than the installation resource, the Tigera operator offers a tigerastatus resource which can be used to observe the Calico installation process. For example, here I've chained the tigerastatus query with a kubectl wait to form an interactive wait that will end when the Calico installation is done. Before going any further, let's verify that the Calico components were actually pulled from the private registry by issuing a describe on one of the Calico components. Excellent. 
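The registry attribute mentioned above lives on the operator's Installation resource. A minimal sketch, with the registry address as a placeholder:

```yaml
# Instructs the Tigera operator to pull all Calico component images
# from a private registry instead of the public default
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  registry: 192.168.64.10:5000   # hypothetical private registry address
```

The interactive wait can then be formed with something like `kubectl wait --for=condition=Available tigerastatus/calico --timeout=5m` (condition name assumed from the operator's status reporting; verify against your operator version).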
It seems like the private repository was the registry that provided the image. Next stop is network segmentation. First, let's create a namespace for our monitoring solution, which we will deploy later on this cluster. After that, we need to create a cluster role resource with the expected privileges that our monitoring user should hold, which will be used by the monitoring program. Next, we need a service account in our cluster to be associated with the cluster role, and after that we need a cluster role binding to glue together the cluster role and the service account. Now let's apply the resources for these cluster roles and service accounts to actually create the resources in the cluster. Let's carry on by creating a couple of security policy resources to segment and secure communication between Calico and Kubernetes components. This first policy will permit host network containers to communicate with the localhost IP address, as is suggested by this picture. Calico node and Calico Typha pods, which are located in the calico-system namespace, are now permitted by the previous policies that we applied to communicate with the host OS localhost IP. The next policy is the famous deny-app policy, which you have undoubtedly seen as part of our documentation and free certification programs. As illustrated by this diagram, the deny-app policy will block all communication between namespaced resources, except traffic that is destined for the CoreDNS pods to query DNS information. To make a long story short, I'm going to stop narrating what these policies will do and just apply them to my cluster. But if you'd like to know more about these policies, how to write them, and what they actually affect, check out the best practices for securing a Kubernetes environment folder in the GitHub repository of this demo. The link will come at the end of this presentation. 
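The namespace, cluster role, service account, and binding described above can be sketched in one manifest. All names and the rule list are illustrative assumptions for a Prometheus-style collector, not the demo's exact resources:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring
---
# Read-only privileges the metrics collector needs across the cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
# Glues the cluster role to the service account
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: monitoring
```

Applying the whole file with kubectl apply creates all four resources in one step.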
All right, let's go ahead and add infrastructure and application monitoring to this cluster as the final step. First, let's enable application monitoring for Calico. Calico components are capable of exposing Prometheus metrics; however, this is disabled by the default configuration. Enabling monitoring is pretty easy. All that is required is changing two values in the Calico configuration. After each enablement, we have to create a service so it can act as a load balancer for the metrics collector. For infrastructure monitoring, I'm going to use the node exporter Helm chart, which can gather vital information from my VMs and expose it in the Prometheus format. Next, I'm going to pull the node exporter image from the public registry and push it to the lab's private registry. Now that the image is set, I can modify the Helm installation to use the private registry for pulling the node exporter image. Now that both monitoring solutions are in place, I can use the Prometheus web UI to validate the procedure. That's it. There is a lot more to discuss about container security and security in general; however, that would require you to sit through more of my boring explanations. So, as promised, you can use the top link on this page to recreate the demo and check each step by yourself on your own time. If something goes wrong or you have any suggestions, don't be shy to contact me. I'm reachable at these social places and the Calico Users Slack. So that's it for my presentation. I hope you have enjoyed it, and I'd like to thank you for viewing.
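The Felix metrics enablement and its companion service can be sketched as below. The port number and label selector follow Calico's commonly documented defaults, but verify them against the documentation for your Calico version:

```shell
# Turn on Felix's Prometheus metrics endpoint via the FelixConfiguration
kubectl patch felixconfiguration default --type merge \
  --patch '{"spec":{"prometheusMetricsEnabled":true}}'

# Front the per-node endpoints with a Service the collector can scrape
# (port 9091 and the k8s-app label are assumed defaults)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: felix-metrics-svc
  namespace: calico-system
spec:
  selector:
    k8s-app: calico-node
  ports:
    - port: 9091
      targetPort: 9091
EOF
```

A similar patch-plus-service pair covers the second configuration value for Typha metrics.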

Reza Ramezanpour

Developer Advocate @ Tigera



