Conf42 Cloud Native 2024 - Online

Scalable and Secure Deployments with Cloud-Native AWS CodePipeline

Abstract

Maximize efficiency and security with a cloud-native AWS CodePipeline solution. Streamline deployments with a scalable architecture, ensuring fast, reliable, and secure software delivery in the AWS ecosystem. Embrace the future of cloud deployment beyond the traditional world of Jenkins.

Summary

  • AWS CodePipeline is a fully managed continuous integration and continuous delivery service. It offers native, out-of-the-box integration with many AWS services. Since it's part of the AWS suite, it natively scales according to the demands of the deployment pipeline.
  • Since AWS CodePipeline is a managed service, AWS handles all the updates, patches, and backups. Jenkins, on the other hand, requires periodic manual updates, backups, and patching. Because AWS operates on a pay-as-you-go model, you only pay for what you use.
  • The key components involved in a pipeline are conceptually similar to any other deployment tool: a source stage, a build stage, and a deploy stage. AWS offers multiple tools that can be plugged into these stages. Let's see how we can create a simple pipeline.
  • For CodeDeploy, we need to create an application configuration and then a deployment group. The deployment group configures which EC2 instances you want to deploy to, and ultimately the pipeline deploys to that group.
  • Once the deployment group is created for the sample application, we can go and create the pipeline. The natural next step beyond this session would be exploring the CodeDeploy side in more depth and what is needed there.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Alright everyone, let's kick off the session with a brief introduction to what AWS CodePipeline is, the advantages it offers, some of the key components involved, and what a sample pipeline looks like. Then we can jump into our demo session, go to the AWS console, and see how to create a simple pipeline.

First things first, let's talk a little bit about the background of AWS CodePipeline, why it's advantageous to use it, and some of the stages involved. AWS CodePipeline is a fully managed continuous integration and continuous delivery service. It's a cloud-native solution, fully managed by AWS and highly configurable by users, and it helps automate release pipelines for fast and reliable application and infrastructure updates. Traditionally, a large number of organizations use third-party tools like Jenkins for software delivery, so CodePipeline offers a native alternative for organizations that are already heavily invested in the AWS ecosystem.

Coming to the advantages of CodePipeline, first and foremost is the integration with AWS services. It offers native, out-of-the-box integration with many AWS services such as Lambda, EC2, S3, and CloudFormation. I'll be comparing it with one popular third-party tool, Jenkins, throughout this session. If I were to contrast it with Jenkins, the main difference is that while integration with cloud services is possible there too, it usually requires third-party plugins and additional setup, potentially introducing more points of failure or compatibility issues.

The next crucial aspect is scalability. Since CodePipeline is part of the AWS suite, it natively scales according to the demands of the deployment pipeline, so there is no need for manual intervention, which ensures consistent performance even during peak loads. Jenkins, on the other side, requires adjustments such as adding agent nodes or reallocating resources, which is both time consuming and resource intensive since it needs dedicated personnel to take care of it.

Continuing with the other advantages, let's touch on maintenance, security, and pricing and long-term value. Maintenance-wise, since CodePipeline is a managed service, AWS handles all the updates, patches, and backups. This ensures the latest features and security patches are always in place without us having to intervene manually. Jenkins, on the other hand, requires periodic manual updates, backups, and patching, which can introduce compatibility issues or security vulnerabilities and demands regular monitoring and adjustment.

Coming to the security aspect of deployment pipelines, one of the advantages of the AWS solution is its comprehensive security model: we can leverage features like IAM roles, Secrets Manager, and other fine-grained controls such as service roles. All of these can be natively tied to the pipeline itself, which helps maintain robust security standards alongside your other tooling. On the Jenkins side, achieving a similar security level requires additional configuration, plugins, and tools, which can again introduce more vulnerabilities and unnecessary complexity.
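To make the security point concrete, here is a minimal boto3 sketch of what wiring up a dedicated service role for CodePipeline might look like. The role name, policy name, and the broad "Resource": "*" statements are illustrative placeholders rather than values from the demo; a real pipeline role should be scoped down to the specific bucket and CodeDeploy application it uses.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the CodePipeline service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "codepipeline.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="sample-pipeline-service-role",            # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Service role assumed by AWS CodePipeline",
)

# Minimal inline permissions: read the source artifact from S3 and hand
# deployments off to CodeDeploy. Tighten the Resource ARNs in real use.
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": [
                "codedeploy:CreateDeployment",
                "codedeploy:GetDeployment",
                "codedeploy:GetDeploymentConfig",
                "codedeploy:GetApplicationRevision",
                "codedeploy:RegisterApplicationRevision",
            ],
            "Resource": "*",
        },
    ],
}

iam.put_role_policy(
    RoleName="sample-pipeline-service-role",
    PolicyName="sample-pipeline-permissions",
    PolicyDocument=json.dumps(permissions),
)

print(role["Role"]["Arn"])
```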
On pricing and long-term value: since AWS operates on a pay-as-you-go model, you only pay for what you use, which can be cost effective, especially with variable workloads. On the Jenkins side, the software itself is open source; however, maintaining the Jenkins infrastructure, accounting for all the patching, and keeping it up to date can add up over a long period when you consider the time and resources invested in it.

That wraps up the key advantages of AWS CodePipeline. Now we can briefly discuss how CodePipeline works and then jump into the demo session. The key components involved in a pipeline are conceptually similar to any other deployment tool. You have a source stage, where you supply your application artifact or the deployment assets that are going to be deployed. Then you have a build stage, which is an optional stage for compilation and generating the artifact that will eventually be deployed; AWS CodeBuild is the example here. And then you have the deploy stage, the main stage where the artifact you supplied, or the output of the build stage, actually gets deployed. AWS offers multiple tools that can be plugged into this stage; CodeDeploy is one such tool, and it's the one we are going to look at today for the demo. CodeDeploy and ECS blue/green deployments are examples of actions that can be plugged into this particular stage.

This is how a sample pipeline looks; we'll get a closer look at it in the demo. The bottom part is what makes up the pipeline: a CodeCommit source action pulls the code repository, which is then deployed to Amazon ECS, the Elastic Container Service, via a CodeDeploy ECS blue/green action.

Now let's jump to the AWS console and see how we can create a simple pipeline. This is the AWS console home, where you have the list of all these services. Let's go to AWS CodePipeline. On the left side you can see the multiple substages associated with CodePipeline, like the source, build, and deploy stages, and it gives us the ability to configure those stages from here. The first thing, like we discussed, is setting up the source repository or the deployment artifact. I already have an S3 bucket with an object in it, so we'll be using that. For the purpose of this demo we are going to skip the build stage, since we essentially don't have any compilation or artifact generation, but the CodeDeploy stage is necessary. So let's spend a couple of minutes building the CodeDeploy pieces, which we'll later plug into the pipeline.

For CodeDeploy, we first need to create an application configuration. Let's go to Applications; right now we don't have any, so we can create a sample application, say a Java application, with EC2 as the compute platform. The application configuration is created, and now, within the application, we need to create a deployment group. The deployment group essentially configures which EC2 instances you want to deploy to. You can add more settings, like how many EC2 instances to deploy to at a time and what the failure behavior should look like.
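For readers who would rather script these CodeDeploy steps than click through the console, here is a rough boto3 equivalent. The application name, deployment group name, tag key/value, and service role ARN are hypothetical stand-ins for whatever exists in your own account.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Application on the EC2/on-premises compute platform ("Server" maps to EC2).
codedeploy.create_application(
    applicationName="JavaApplication",    # hypothetical name
    computePlatform="Server",
)

# Deployment group that targets EC2 instances tagged Env=prod and rolls
# out to one instance at a time.
codedeploy.create_deployment_group(
    applicationName="JavaApplication",
    deploymentGroupName="sample-deployment-group",
    serviceRoleArn="arn:aws:iam::123456789012:role/codedeploy-service-role",  # placeholder ARN
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    ec2TagFilters=[
        {"Key": "Env", "Value": "prod", "Type": "KEY_AND_VALUE"},
    ],
)
```

The EC2 instances selected by the tag filter also need the CodeDeploy agent installed and an instance profile that can read the revision from S3, which this walkthrough assumes is already in place.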
Those are more advanced settings, which you can explore on the CodeDeploy resource pages. Let's enter a sample deployment group name. The service role essentially needs to have CodeDeploy access; I already created one, but you can fine-tune it based on the level of security you want for your deployments. The environment configuration is what filters the EC2 instances belonging to the deployment group, and it's usually done using the tags associated with the EC2 instances. For example, if I want to deploy to all the prod hosts, a filter like Env is prod selects all such instances, and at the end of the day the pipeline deploys to that group. You can leave the default values for the remaining settings if you don't have any changes to make, and then we create the deployment group.

Alright, the deployment group is created for our sample application, so now we can go and create the pipeline itself. Right now we don't have anything created in this section, so we are going to create a brand new pipeline, say "sample pipeline"; again, you can leave the default options. It's going to create a service role by default if you don't provide one. It looks like the service role already exists, so let's delete it so that there's no conflict. If you go to Roles, this is the IAM page where you have all your service roles and policies; I'm just going to remove this one for now, since it's probably left over from a previous run. Alright, we deleted that, and now we should be able to create the sample pipeline.

The source is going to be S3, and the bucket is going to be this one; we just need to enter the object inside the bucket. We can quickly take a look at what that object is: if you go to S3 and pull up the bucket, you can see the object, so take that key and plug it into the object key field. The build stage we are going to skip, and for the deploy stage we just configured CodeDeploy, so we plug in the Java application and the deployment group we just created, and that's pretty much it. Once we hit create pipeline, it takes a couple of minutes and, voila, the pipeline has been created. It triggered the source stage, and this is how the pipeline looks: if you have a valid source artifact, it triggers the pipeline and then, based on the deploy configuration, goes and deploys it. The natural next step beyond this session would be exploring the CodeDeploy side in more depth and what is needed there.
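To round out the walkthrough, here is a hedged boto3 sketch of roughly the pipeline the console wizard creates: a two-stage pipeline with an S3 source action and a CodeDeploy deploy action, with the build stage omitted just as in the demo. The pipeline name, role ARN, bucket names, and object key are placeholders; note that an S3 source bucket must have versioning enabled for CodePipeline to track revisions.

```python
import boto3

codepipeline = boto3.client("codepipeline")

pipeline_definition = {
    "name": "sample-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/sample-pipeline-service-role",  # placeholder
    "artifactStore": {"type": "S3", "location": "sample-artifact-bucket"},      # placeholder bucket
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "S3Source",
                "actionTypeId": {
                    "category": "Source",
                    "owner": "AWS",
                    "provider": "S3",
                    "version": "1",
                },
                "configuration": {
                    "S3Bucket": "sample-source-bucket",   # placeholder source bucket
                    "S3ObjectKey": "sample-app.zip",      # placeholder object key
                },
                "outputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
        # The build stage is skipped, as in the demo, since nothing needs compiling.
        {
            "name": "Deploy",
            "actions": [{
                "name": "DeployToEC2",
                "actionTypeId": {
                    "category": "Deploy",
                    "owner": "AWS",
                    "provider": "CodeDeploy",
                    "version": "1",
                },
                "configuration": {
                    "ApplicationName": "JavaApplication",
                    "DeploymentGroupName": "sample-deployment-group",
                },
                "inputArtifacts": [{"name": "SourceOutput"}],
            }],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline_definition)
```

As in the console demo, creating the pipeline immediately triggers the source stage against the current object in the bucket.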
...

Prithvish Kovelamudi

Software Engineer @ Marqeta

Prithvish Kovelamudi's LinkedIn account


