Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone.
Thank you for being here.
I'm Urmila Raju.
I'm a senior solutions architect with AWS.
Today we are going to talk about the AWS Serverless Application
Model, or AWS SAM.
This is going to help you with your development and deployment of modern
serverless and cloud native applications.
Let's get started.
AWS SAM is an open source developer tool that simplifies and improves the
experience of building and running serverless applications on AWS.
It's going to streamline your serverless development cycle and
help you quickly and efficiently take an idea into production.
So first of all, SAM is for modern cloud native apps.
How do you define a modern cloud native app?
Through modern architectural patterns, like event driven architecture
and microservices, and by using serverless services wherever possible to
increase operational efficiency.
And by having developer agility, that is, the ability to deploy fast in an
automated way, with abstractions wherever possible to remove the
undifferentiated, redundant work.
And of course, on top of that, have the necessary guardrails and use standardized
services, so that you still have your governance in place along with speed.
And then there is another one, which is to use purpose built
databases and data sources that are right for your application.
So these are common principles of a modern cloud native app.
And today's focus is mainly going to be in these three areas.
That is serverless, developer agility, and how you maintain governance.
That is DevOps plus serverless.
And we'll see how we can build and deploy serverless apps with speed.
So before even thinking about which tool is right for you,
the main thing to know is your cloud operating model, that is,
how do you manage and run cloud today?
Is it ready for your modern apps?
I want to show you some operational model frameworks.
We'll start from the left, which is traditional operations.
Have a look and see where your organization fits in this operating model.
For a traditional monolith application, with a front end, middleware
and backend database, usually there will be different teams, like application
engineering and application operations.
In the same way, if you are deploying it in the cloud, there will be cloud platform
engineering and operations teams.
And then we have distributed DevOps and decentralized DevOps.
Both of these are DevOps models; what differs is the level of autonomy you give to
your application engineering team to build and run their own components,
and how much control and governance you keep with the cloud platform engineering team.
Usually, organizations start with distributed DevOps and move
towards decentralized DevOps.
And for building modern applications, using these kinds of DevOps
frameworks is really needed.
So you need frameworks to deploy code and infrastructure to your
AWS accounts, and to support application runtime operations and optimization.
So when you come to serverless apps, how do you do that?
And what are the challenges?
The thing is, code and infrastructure: though there is no real infrastructure
to manage, you still have to create your serverless services, right?
And they are tightly coupled with the code.
Examples being these services, and there can be many more,
but the key ones here: you create a Lambda function, and the
runtime and the code are within that.
When you use EventBridge, you provision an event bus, and your
business rules, of which sources come into the bus and which
targets you reach out to, stay within the EventBridge rules.
In a similar way, all the business function workflow
remains within Step Functions.
So in that case, who takes the responsibility of what, and how do you
choose the right infrastructure as code tool that is going to help you with
building such a serverless, modern application?
So I'm going to show you some IaC tools, because I'm not just going to say
that it can all be done in AWS SAM; it depends on various factors
of what is right for your organization.
So we have AWS native options like AWS CloudFormation, which is in the YAML
format, and constructs for all the features and services to deploy
them are available within AWS CloudFormation.
And AWS SAM, which is the focus of today, is built on CloudFormation,
but it has more powerful features and constructs, which will
help you specifically with serverless resources and applications.
And a higher level tool is the AWS Cloud Development Kit (CDK).
This is based on writing the infrastructure as code
in programming languages.
For example, remember when we discussed distributed
or decentralized DevOps, where the application engineering team has most
of the ownership of the code.
In that case, let's say your organization
is fully on a Python stack.
Then you can write your infrastructure as code in Python and use the
Cloud Development Kit for that.
Another option is open source tools like the Serverless
Framework or Terraform.
These tools are platform agnostic, so you can use them on premises,
on AWS, or on other cloud providers as well.
If you have been using such tools, let's say Terraform, and
you want to continue using them with AWS, you can very well do that.
But one thing to highlight here: whenever new features
and services come in, they get available in CloudFormation first.
And since SAM is built on CloudFormation, you can immediately
adopt them in the SAM framework.
But it will take some time for those new features
and services to be available and supported by these open source tools.
So that is the caveat here.
So with that, I'm going to highlight where SAM is going to be useful,
because SAM, as I mentioned, is not just IaC; there are many more
aspects of how you use SAM that we are going to see.
We will focus on that for the rest of the session.
So first what I want to show you is a practical guide to using SAM,
using a serverless GenAI app example.
First I will show you a demo of that application, so that we
know what we are working on.
Then we will see its architecture, and then see how you build that
application in parts using AWS SAM.
So why have I chosen a GenAI application? When we say serverless,
it's not just the usual services like API Gateway, Lambda, DynamoDB, etc.
I want to showcase how you can use the rest of the serverless ecosystem
within AWS, bring it into SAM, and build applications very quickly.
So let's get into that.
So this is a very simple GenAI application, built for an event,
for the participants or attendees who are coming in.
Here I've taken Conf42 Cloud Native 2025 as an example.
After attending a session, attendees will give their name,
select any of the sessions that they attended, and submit it.
When they submit, they will get a session summary about the
session that they attended, and they will also be allowed to give
some feedback on that session.
So let's do that.
I'm giving my name here.
Let's say in this drop down you have the full list of all the sessions;
I've just given four examples, and I'm going to choose my session
as the example and submit it.
Once it submits, it goes to the back end; we will see the architecture next,
but for now let's just see the demo.
So it's waiting on a response.
It is actually going back to a GenAI model, feeding it the session name,
and getting a session summary based on the abstract of the session.
If you look at the Conf42 page, there is a session abstract.
I have loaded that into a table, so the system will fetch that abstract
and create the summary based on that abstract.
So that's the first part; we are using GenAI for that.
The next part is we will ask them to rate the session.
It's my session, so I'm just going to say it's an excellent session
and give some text feedback, like "useful session to start with
serverless apps", something like that, and submit the feedback.
So now it's again waiting on a response.
Based on that feedback, we are going to create a
thank-you note for that participant.
It says, "Dear Urmila, thank you for attending the session",
picking up the session name here.
And it does a sentiment analysis on the feedback that you have given as well.
It says we are glad you found the session helpful and we are delighted
to invite you to future events, something like that.
So this one is also GenAI generated.
So you see two GenAI text generation examples here.
And the application works in such a way that if you want to generate
new content again for another session, you can click on this button and do
that, or just finish to complete it.
So that's a quick demo of this application.
Now we will see the actual architecture behind it.
Just one thing to note here: what's happening is you are
filling in some details and sending them to the backend.
And from the GenAI application perspective, whatever you give here, that is the
name of the session and your feedback, are all user prompts
that you give to the GenAI system behind it.
And using that, there is a generated response that you receive back.
So that's the demo.
So now we will go back and see the architecture.
So this is the full architecture of this app.
Don't worry.
We are not going to build the whole thing using SAM within this session.
I'll take you through it part by part.
Though this is one full application, you can build it in a very modular
way, and in an actual organization, there could be multiple teams who
collaborate with each other to build such an application.
For example, there could be an integration services team whose responsibility is
to bring the data from your front end to your middleware,
and at the same time take the data from your back end back to your front end.
So in this case, what's happening here is you have an API Gateway; if you
remember the demo, whatever you type in the front end comes to the API
Gateway, goes to a Lambda, and then gets published as events in EventBridge.
This is a fully event driven architecture; whatever response you see back
in the front end is not coming through the same API Gateway call.
The API Gateway call is just to publish the events.
If you see in the backend, the results are again published into EventBridge,
which goes back to the front end through asynchronous messaging.
I'm not going to explain each of the services here, it's not in
the scope of the session, but you can use services like AWS IoT Core to send
the asynchronous messages to the front end.
So that's what is used in this design.
And let's say team A takes the responsibility of
doing this type of integration, receiving the events and also sending
the events back to the front end.
And then there can be a business application team who manages the actual
business logic of this application.
In this demo, you saw that there were two parts.
First it generated a summary, and then it created a thank-you note.
Both of these events get evaluated by EventBridge rules.
Actually, there are two rules; I have just put one rule in this diagram,
and then it goes to a Step Functions state machine.
I'll park what's happening inside the step function for now.
The reason I have shown this other flow is that, though this application
just showed a text summary, there could be other new features which
can be added in a very modular way.
A sketch, say, can be treated as another event that goes through another
rule to another generative model, to create an image maybe, in that case.
We are not going to see that part in the session, but this is
just to show you how an event driven architecture can grow modularly.
And then there can be a GenAI team who actually suggests which model is
best for you and provides the API information to access it, and all that.
And maybe there can be a front end team as well who manages the front end code.
So now let's look at how to use SAM.
We will start in a very simple way, focusing on just this
integration services team A, and look at these three parts alone:
receiving the events from the front end through API Gateway and Lambda,
and then publishing them into EventBridge.
And how do we use SAM for that?
So SAM has two parts.
One is the SAM template specification, which is similar to a CloudFormation
template, the YAML template which is used as the infrastructure as code.
The other is the SAM CLI, which is used for local development and
testing, building, and packaging; and from there you can also directly deploy,
create pipelines, and lots more.
We will see an example of all of this across the development cycle, using
this GenAI application as an example.
So as I said, it's built on top of CloudFormation.
It's still YAML; the SAM template as well is YAML.
But look at this Transform statement at the top: when you create
a SAM template, it gets converted into a CloudFormation template internally
when you build it, and then you deploy it.
So finally, when you are deploying it, it is going to get deployed as a
CloudFormation stack behind the scenes.
This is just an example of what a SAM template looks like.
The notation for the SAM constructs is AWS::Serverless,
and if it is Lambda, it's called Function.
We have nearly nine resource types, which we will see in the next slide.
Also pay attention to this IAM role: you can associate managed policies,
and that is another advantage.
Here this Lambda function has to read a DynamoDB table,
so you just add the policy and give the table name as the reference.
All these Ref usages are the usual CloudFormation
notations that you would use.
I will be using them all along in the session, assuming that you have some
idea of how CloudFormation works.
The advantage of using this is that it helps
you define fine grained policies.
That is, the permission of this Lambda is restricted to
just this table, and just for the read.
That is built in automatically when you use this kind of managed policy.
And then when you use Events, it is going to attach an API Gateway.
And this one is for the DynamoDB table.
So just with these few lines of code, you are able to create a Lambda
function, a DynamoDB table, an API Gateway, and also the IAM role for the
Lambda function to access the DynamoDB table.
That's the power of these kinds of high level serverless constructs.
We have this available for nine resource types, which you can use
to reduce the number of lines of code that you actually
write for infrastructure as code.
SAM takes care of expanding it into the bigger CloudFormation template.
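To make that concrete, here is a minimal sketch of such a SAM template; the resource names, table, and path are hypothetical, not the exact ones from the demo:

    Transform: AWS::Serverless-2016-10-31
    Resources:
      SessionFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: app.lambda_handler
          Runtime: python3.9
          Policies:
            - DynamoDBReadPolicy:          # SAM managed policy, scoped to one table, read-only
                TableName: !Ref SessionTable
          Events:
            SubmitSession:
              Type: Api                    # implicitly creates an API Gateway endpoint
              Properties:
                Path: /sessions
                Method: post
      SessionTable:
        Type: AWS::Serverless::SimpleTable # one of the nine SAM resource types

At build time, SAM expands these few lines into the full set of CloudFormation resources mentioned above.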
And yeah, these are the IAM managed policies that I mentioned;
there are 75 plus policies available.
This is the link, so have a look at that before defining your own
policies, and see if any of these can be leveraged without
you having to write in-depth policies yourself.
And yeah, here, as Events: how is Lambda invoked?
It's invoked only by an event.
The event can be API Gateway, as in this example, but there can be other
events which trigger Lambda.
In SAM, there are 19-plus function event sources supported.
For example, a file arriving in S3 could be a trigger for a Lambda function,
or a DynamoDB stream event, that is, a record being updated, deleted, et cetera.
All of this can be leveraged in SAM.
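As a hedged illustration of one of those other event sources, an S3 trigger in a SAM template might look like this; the bucket and function names are made up for the example:

    Resources:
      UploadBucket:
        Type: AWS::S3::Bucket
      ProcessFileFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: app.lambda_handler
          Runtime: python3.9
          Events:
            FileArrived:
              Type: S3                         # one of the 19-plus event source types
              Properties:
                Bucket: !Ref UploadBucket      # the bucket must be defined in the same template
                Events: s3:ObjectCreated:*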
So I kept mentioning that SAM can be used across the development
cycle and is not just for building infrastructure, right?
As you can see, these are the various SAM CLI commands that you can use,
and they are spread across the development cycle.
Whether your persona is an application developer or a DevOps person,
as we go along you can see where you can use each of these CLI commands
for your own persona, the type of job you do, and making the application live.
So let's first start with sam init, and then go through each of them.
For sam init, we are going to look at just this area, which is the API
Gateway and Lambda, and go from there.
This is the IDE that I use.
You can use any IDE of your own, like VS Code or anything that you prefer,
which has code, a terminal, and the ability to access your local files.
In here, there are already some demo applications, but I'm going
to start a new one by saying sam init.
In this IDE, I've installed the AWS CLI, SAM CLI, and Docker as prerequisites.
Why Docker? We'll see.
When we do sam init, it's going to ask you to choose between QuickStart
templates and custom templates.
If you already have a template, you can bring it in,
but we'll use a QuickStart template.
It has all these options, like Serverless API, standalone function,
and so on; we'll just do this Hello World example for now.
And what runtime to use? I'm going to give N and not use Python 3.13,
to show you all the other options that are available.
You can choose anything that you need; my IDE has Python 3.9,
so I'm just going to use that.
And we are not doing the image package type; we are doing the normal zip.
All this tracing and monitoring, I'm not going to enable for now.
And then you give a project name for this.
If you remember, we are doing teams A, B and C, right?
So this is for team A, and I will name it team A app 1.
So now if you see team A app 1, it has created a folder structure
with various subfolders.
And the main thing is this template.yaml file, which is the SAM template itself.
I will use the terminal below from now on; we'll close this
and go into our team A app 1 folder.
And if you see, we have already got the resources: the function, which is
the Lambda function with the runtime that you chose, and the Events;
it is triggered by an API.
and this out output section is to give some, idea on the resources created
when your start formation stack.
It's, deployed.
so when you finally have the resources deployed, when you go to the output
sections of the cloud formation stack, you will see the, API endpoint,
function in point and, function.
Im role, for now.
We are going to like, just comment it out and, and focus on here.
If you see, we have the hello world folder and app.lambda_handler.
So that is the template, and then for each function that you create
there is a subfolder, in which there is a requirements.txt
file and the actual app code.
We chose Python, so the code is in Python.
There is some example code in there.
I have the code that is needed for sending the necessary information for us,
so I'm going to copy-paste that code into this function, and then
we'll see what the code has got.
So this code, if you see, what does it say?
The indentation is not proper.
Yeah, okay.
So this is a simple Lambda function which is going to get the
event from the API Gateway and publish it into EventBridge.
That's what it is doing, with the input parameters that you have here.
Once you put it in Lambda, boto3 is automatically available,
but for local testing you can put it in your requirements file.
This was the default one.
We don't need the requests package; we need boto3.
So I'll put that here, save it, and close this.
And for the rest of the details, if you see, it is taking the event and
writing it out in a different format.
It is giving a Source, a DetailType, and the actual Detail
that goes into EventBridge; we will see what that detail is.
And then it is giving an event bus name.
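As a rough sketch of what such a handler can look like, assuming a hypothetical bus name and event fields (this is not the exact demo code):

    import json
    import boto3  # available in the Lambda runtime; add to requirements.txt for local testing

    events_client = boto3.client("events")

    def lambda_handler(event, context):
        # With the aws-proxy integration, the front-end payload arrives as a JSON string in "body"
        body = json.loads(event["body"]) if isinstance(event.get("body"), str) else event
        entry = {
            "Source": "myapp.frontend",                 # hypothetical source name
            "DetailType": body.get("eventType", "summary"),
            "Detail": json.dumps(body),
            "EventBusName": "my-event-bus",             # replace with your event bus name
        }
        events_client.put_events(Entries=[entry])
        # Returning the entry in the response is just to see what was published
        return {"statusCode": 200, "body": json.dumps(entry)}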
I have an event bus already created in my AWS account.
In the interest of time, I'm not going to show the IaC
template for the event bus as well.
I used this for a previous event; that's why the name is Cloud AI Dublin.
This is the event bus, if you look in the console, and within the event
bus we have various rules to send events to various targets
based on what the event type is.
We will look into those rules in a while.
Coming back to the console: you have your code ready now.
So the next step: before you go and directly deploy it, SAM provides you
the ability to do a local invoke of this Lambda function, to see if the
code is right and builds, before getting into deployment.
That is what we will see as the next step.
Okay.
So the next piece that we are going to see is SAM local,
that is, how you locally test your serverless resources before
getting them into deployment.
For Lambda, as I mentioned, you need to have the events
from a particular service.
You can simulate events using SAM local from each of the services that you see,
and then use them to locally invoke Lambda.
As I mentioned, all Lambdas are invoked by events.
So first of all, let's generate an event for that.
Here, the source service which is going to generate
the event is API Gateway.
So when you give this command, it is going to give you all of the
services that can generate an event.
We will choose API Gateway from here and include that as
a parameter to see the options.
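For reference, the commands look roughly like this (the output file name is just an example):

    sam local generate-event                              # lists all supported services
    sam local generate-event apigateway                   # lists the API Gateway event options
    sam local generate-event apigateway aws-proxy > events/event.json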
Here there are four options, because API Gateway can be used to integrate
with Lambda as a Lambda authorizer, which is the first one, or a Lambda
proxy integration, or an HTTP API proxy integration, and so on.
The way we have defined it here in the template above
is a normal HTTP API proxy.
But I want to show you how a REST API works.
For which we need the aws-proxy event.
Usually, an API Gateway event looks something like this:
it will have the actual body, which has the message from the
front end, and then the resource parameters, headers and so on.
But when you use it as an AWS proxy integration, when it goes through to
Lambda, it just sends it as a proxy; that is, the top body
information is what gets to the Lambda.
So instead of using this full event for testing purposes, since we are
using an AWS proxy integration, we will use this type of event.
If you see, there's this other file, the sessions event.
This is what is in here.
If you go back to my demo, remember we initially
created a summary of the session.
The input from the front end was the session name and the event
type, which is generate summary.
So the event type is summary, and this is what is going to go to the function.
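A minimal sketch of that event file, with hypothetical field names, could be:

    {
      "sessionName": "Building and Deploying Serverless Apps with AWS SAM",
      "eventType": "summary"
    }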
So now we do a sam local invoke with -e.
If you give -e, that means you are going to give the event as the input.
I'm going to give the sessions event as the input to see how
the Lambda function gets invoked.
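Assuming the function's logical ID is still HelloWorldFunction and the event file is named sessions-event.json, the command would be something like:

    sam local invoke HelloWorldFunction -e sessions-event.json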
So now it is invoked.
And yeah, one thing to highlight here: when you are invoking this function,
do you see this? It looks for a local image and then gets a public image from ECR.
What is happening here is that the SAM CLI is pulling a Lambda runtime
image and spinning up a local container.
Remember I mentioned the Docker prerequisite; this is why we need Docker
here, so that it can create a local Lambda image, run your code in it,
and provide you the answer.
So you've got the response here now: a response code 200, and what is being
returned is the actual event that will be passed on to EventBridge.
It goes and publishes an event in EventBridge, but I put that in the
response object just to see what is being sent.
So now we have successfully tested the Lambda, but we have tested
it through the existing simple configuration of a normal HTTP API.
Now, as I mentioned, let's see how it can be changed into a REST API
with even more complex parameters.
So if I show you an example API in the console, let's take this one.
This is a REST API which has a POST integration to a Lambda function.
And you have various other parameters configured as part of this API
configuration, such as, from the Lambda response, what response code
is being returned, what response headers are returned and allowed,
the response body type, et cetera.
So if you want a proper configuration, everything done through SAM and without
touching the console, then you need to code each of those parameters.
An easier way this can be achieved: if you have an example API in the
console, you can deploy the stage, go to the stage actions, and do an
export in the OpenAPI format.
And since we are going to use it in SAM, choose the YAML format, and
also export with the API Gateway extensions so that you get the full
information from AWS API Gateway.
And then export this API.
When you export it, you will have something like this one.
So I've exported it and put it into a YAML file in our project root folder.
Yeah, it's here; it will look something like this.
You are going to have the regular OpenAPI format,
if you're familiar with the OpenAPI format.
So even if you are exporting an external API from some other system and want
to put it behind AWS API Gateway, this is an easier way of doing it:
you just get the OpenAPI specification and use it in your SAM template.
So this is an external file: we have our template file separately,
and we have the api.yaml file.
You can use all the information here as is.
If you see, this is the one that I mentioned: all the API Gateway
configurations, like the allowed headers and methods, response codes,
everything gets copied into this as is.
The main thing that you need to change here, or customize for the new
deployment, is these credentials.
The credentials is the role that API Gateway will use to
access the integration points.
In this case, the integration service is Lambda.
Do not hardcode it the way I have done here; use a parameterized
attribute, because the role itself you might be creating in your SAM
template, as part of the same template, or fetching from another template.
So use the parameterized attribute rather than hardcoding.
And the next one is the URI.
URI is the actual, end point which you will call from API gateway.
So here the endpoint is lambda.
So it is, the syntax is in a particular way.
So have a look at it in the documentation and, and then you have
to have the actual lambda name here.
Again, do not hard code it, but get the actual name of the
lambda from your template and parameterize it and put it here.
So once you have done this, now you have to use this api.yaml
in your actual template.yaml.
Let's see how we do that.
So if I go here, just a minute.
What we are going to do here is introduce another resource.
So far we didn't have a separate resource type for API Gateway;
now, as a REST API, we are going to add this.
I've given it the name RestApi, and the type is this one;
this is the serverless construct used for a REST API.
And I'm giving a stage name, and in the DefinitionUri,
I'm giving the api.yaml file.
We need not define anything else in the template; we are just
using the OpenAPI YAML as is here.
So for that.
I'm, going to change this part.
So what I have done is, so I've given API event type and, as the registry API.
So I've done a reference of this API in here and I've given a path
and the method is going to be post.
that's all we need to define here and I'm going to add a few more things,
but not go into the details of it.
Those are the IAM role and IAM policy that are required
by the API Gateway and the Lambda.
I'm just putting them here, but not going into the details, because this
Lambda further needs permission to write to EventBridge, if you remember.
We are giving the events PutEvents permission to the Lambda.
All of that I am adding here.
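Putting those pieces together, a hedged sketch of the changed template; the logical IDs, path, and roles are illustrative, not the exact demo values:

    Resources:
      RestApi:
        Type: AWS::Serverless::Api
        Properties:
          StageName: dev
          DefinitionUri: api.yaml        # the exported OpenAPI file
      ApiProxyFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: app.lambda_handler
          Runtime: python3.9
          Events:
            PostEvent:
              Type: Api
              Properties:
                RestApiId: !Ref RestApi  # attach the function to the REST API above
                Path: /sessions
                Method: post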
And then I've saved this.
Just to show you how it is working, I'm going to do a local
build and deployment.
I'll explain what happens in build and deploy in the coming slides.
So if we do a sam build now, SAM will package all of your code and the
dependencies that you had inside this hello world folder,
and it will create a package.
Then, just to check if your package has created the template
correctly, you can do a sam validate.
This will not do a deployment; it just checks if the template is right.
Let's see what the error is.
So API proxy function, okay.
So changed everything, but we had this name as still as hello world
function, but, everywhere I think the roles that I have defined is
using it as API proxy function.
So I'm going to change this function name as API proxy function.
And do a build again and do a Sam Valley data game.
Now the template is valid.
Now we will do a deploy to see if everything gets deployed correctly.
When you're doing it for the first time, use the guided deployment,
so that you'll see all the steps.
Stack name: you can give a name, or if you leave it blank,
it'll just take the defaults.
it'll just take the defaults.
confirm changes I give as, And role creation, give it as yes, because
you want to, you want SAM to allow to create a role in order to do this
deployment for rest of the things.
So this one give yes, because we haven't defined any
authentication for the, API gateway.
All this was default.
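So the local loop we just ran, roughly, is:

    sam build              # package code and dependencies
    sam validate           # check the template without deploying
    sam deploy --guided    # interactive first deployment; answers are saved for next time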
So now it is waiting for the change set to be created.
It will define all of the resources; if I show you here, this template
is going to define quite a few resources.
Here you can see the resource types: the IAM policy, the Lambda permission,
the actual function, the API Gateway deployment, and everything.
Now the create is in progress.
If we go to CloudFormation, this is your stack that is in progress.
You can see the various events happening, and once it is complete,
you will see it is create complete.
So all of the resources are now deployed.
And then, just to test your API: let's find out where that new API is.
Actually, when I did the recording, you can see the date on my computer,
it is 28th February, so this is the latest one that has been created.
I go into this, and it has created a dev stage.
So if I go into this, this is the invoke URL that you have
to use to call this API.
Let's try a POST to this URL, and we pass the data as the events file
that we were using so far in our testing.
You can see that it is deployed correctly and you are getting the
response from the Lambda function.
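The test call is along these lines, with the invoke URL and path being placeholders for your own deployment:

    curl -X POST \
      -d @sessions-event.json \
      https://<api-id>.execute-api.<region>.amazonaws.com/dev/sessions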
All right, now let's see how build works.
What build does is look at each of your resources, mainly the Lambda
functions, and package your dependencies and code.
There is even a concept of Lambda layers; in the interest of time,
I'm not going into that.
But if you have different layers, you finally have to create a package
of them, and that's what sam build is going to help you with.
And after that, we are going to see sam sync.
sam sync is something where you have your initial deployment, and then you
sync just your code, a piece of it, into your resources
without doing an actual deployment.
This is fine for a development environment, but do not do it in production;
use proper pipelines and deployment methods when
you're doing it in production.
But when you are testing, and you want to keep changing the code and
pushing it into a test or a development environment,
then sam sync can be useful.
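A sketch of the command, with a made-up stack name:

    sam sync --stack-name team-b-app --watch   # watches local changes and syncs code without a full deploy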
Now, to see how sam sync actually works and where it is really useful,
I want to shift the focus to the second team, which is the business
application team; from the front end, we just checked how the Lambda
function emits the events into EventBridge.
From EventBridge, there are various rules, as I mentioned, and the summary
event, which we have been discussing so far, actually goes
into a Step Functions state machine.
Step Functions has various tasks, and its workflow needs to be defined.
So how can we easily deploy the step function? sam sync can actually be
very useful in that; as I mentioned, it creates a temporary layer and
syncs your code into your resources, without an actual deployment
or stack update going on.
Let's see how we can make use of that, mainly in Step Functions and also
in the OpenAPI specification that we just saw.
So let's say you have an initial step function or API Gateway deployment;
we will just talk about Step Functions now.
Say you have just one state in your workflow and you have deployed it,
and then you create the further steps and want to push them
and see how they update in your console.
If that's the aim, then sam sync can be very useful.
Okay, looking into the Step Functions now.
So far we have been talking about how local development helps you a lot.
But Step Functions is that one service which is easier to get started with
by building within the console, because it has a very nice visual experience
where you can drag and drop services into your workflow and
configure your parameters.
So this is our step function in the demo that we are using.
If you see, it is getting the session abstract using the
title from a DynamoDB table.
And it is invoking Bedrock and passing the output of the Bedrock
model again into EventBridge.
And then we have the same thing for the thank-you note generation.
And then there is a process where it can wait for the user input and
repeat the same thing; that's why you see the looping process in here.
So once you do this visually and get this workflow created, then
it is easier to take that code into your SAM template rather than
building everything from scratch.
So this is the design view of the Workflow Studio.
There's a code part here.
This is called the Amazon States Language, or ASL, which is,
yeah, JSON, not YAML, sorry, if you are noticing.
You can copy this whole thing and put it into a file.
When I put it into a file, it looks something like this;
I have it in my project folder.
If you notice, I am now coming to a different folder, which is a team B
project space, and which has a separate template.yaml file as well.
So this is the full ASL of the step function.
But let's start with a smaller one, with just one Pass state,
just to see how it gets deployed.
So this is our ASL JSON file.
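A minimal single-state ASL file might look like this (the state name is arbitrary, not the demo's actual file):

    {
      "Comment": "Starter workflow with a single Pass state",
      "StartAt": "Start",
      "States": {
        "Start": {
          "Type": "Pass",
          "End": true
        }
      }
    }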
And this is the template file.
So in that template file, I have defined the resource.
This is the construct for defining the step function, or state machine.
And similar to the API Gateway definition, you'll have a DefinitionUri,
where you give the location of your ASL file.
And then you are giving the role of the state machine and the policies.
The policies, as you had seen: it needs permission to call Bedrock models,
the DynamoDB table, and all of that is defined here.
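As a hedged sketch, the resource might be defined like this; the logical IDs, file path, and policy details are illustrative:

    Resources:
      SummaryStateMachine:
        Type: AWS::Serverless::StateMachine
        Properties:
          DefinitionUri: asl.json          # location of the ASL workflow definition
          Policies:
            - DynamoDBReadPolicy:          # read the session abstracts table
                TableName: !Ref SessionTable
            - Statement:
                - Effect: Allow
                  Action: bedrock:InvokeModel
                  Resource: "*"            # scope down to your model ARN in practice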
So I'm going to build this template and then deploy it.
The build is complete; now I'm going to deploy.
Again, it will create a CloudFormation stack and it will
get deployed in your account.
And I have run this previously, so it is saying it is going to modify the
state machine as per the changes that I have done.
So right now this is the one step that we are going to deploy,
and it is getting successfully deployed.
So now if I go to the state machine, I exit this one,
yeah, let's see, yeah, I think this is the one.
If I edit it, it should have only one state.
And this file has got the full step function with all the states.
So I will put the name of this JSON in here.
Or let's do it the other way around: we copy this one,
put it here, and do the same thing.
So what has happened is it has completed a build on its own,
and it has synced the code.
So if I go to the Step Functions now, and edit, we should now see
the full step function, updated immediately from that one state.
In this way, once you have the full workflow from the studio,
you can sync up the code and get it deployed in a very easy way.
And now, to show you how the step function is actually invoked,
let's do a sam local invoke.
I'm back in team A app one, and we are doing that API invocation to the
Lambda which publishes the event into EventBridge.
So back to that: if we do that once more, it gives an output.
What it is actually doing is publishing an event into EventBridge.
So if I go into EventBridge, you can see these are the EventBridge rules
and this is the event bus; we have various rules set up, and the rule
which matches this event is the summary input.
The event pattern is something like this, where the event type is summary.
So if I open this state machine now, you can see that has been invoked by
the event bridge and you have this one execution in running status.
So if I open that, whatever in green is what is the completed states.
So it gets the input from the event bridge.
And, it fetches the, abstract from the Rhine Modi table using the session title.
so you can see the input and output in here.
So the input is the actual, JSON, which is having the data,
summary, and your, session title.
So then it creates a prompt in the next step, to be, passed to the Bedrock model.
So it is passed to the, Bedrock Anthropic Class 3 model.
So the prompt is this, write a short and concise summary of the session
titled so and you get a model response as a session focuses on optimizers
and this is what is displayed as the final summary in the front end screen.
So the execution is in pause now as you can see because
it is waiting for user input.
Remember the demo, we did this one summary and then we had a second
step for creating a thank you note.
So you are giving some feedback.
So the step function is designed in such a way that it awaits and resumes.
When, the next set of input comes in.
So I'm not going to the detail of those, design, but it's a
very interesting one to, look at.
So we have seen sam sync and also sam deploy,
and the differences between them.
So far we have seen the local environments and how they can help
developers and testers progress the serverless application with speed.
Though you can use sam deploy to put your resources in any environment,
there needs to be some governance and control over who deploys what and where.
When you have various teams doing various parts of the application,
you need to have oversight of how this can be managed.
That's where the proper continuous integration and continuous deployment,
the CI/CD methods, come in handy.
And also, we were talking about various teams maintaining
their own templates, right?
One good framework or model could be using nested stacks: each
team has their own template and deploys it in their own local
environments or AWS accounts.
And then, when it comes to a higher environment like integration testing,
there may be a central team who takes control, modularizes these templates
into a main template, and then deploys it into one environment.
How we do those kinds of deployments in a very streamlined
way is where we use SAM Pipelines.
So SAM Pipelines is an abstraction over your CI/CD process.
It has the ability to integrate with CI/CD systems like AWS CodePipeline
and also third party systems like GitHub Actions, GitLab, Jenkins and Bitbucket.
So let's take GitHub as an example, which is what we will
see in the demonstration as well.
If the developers are pushing their code into a GitHub repo, how do
you set up a workflow from the GitHub repo for the resources
to be deployed into your AWS accounts?
There can be a staging account, and there can be a different production account.
You can set up your pipeline in such a way that it first goes into the
staging account, and maybe you set up a flag for a manual approval before
it goes into production, or you can have a fully automated pipeline as well.
So this is a normal CI/CD process, but how SAM makes it easier is,
first of all, with this bootstrap step.
For the usual CI/CD process that you use, what you need are the necessary
permissions and roles to allow the tool to go and
do the deployment in your AWS account.
SAM pipeline bootstrap takes care of that: the buckets into which the
code has to go, and the necessary CloudFormation and pipeline execution
roles in each of your deployment accounts, will be automatically
created by the SAM pipeline bootstrap.
And after that, it will also create the workflow file, which has
all the information to pull the changes from your repo, like GitHub or GitLab,
and push those changes into staging and production, or however many
other stages you have in your pipeline.
So this is the concept.
We will see a quick demonstration of this before closing the session for today.
Let's do a sam pipeline init with the bootstrap option, so that we can set
up the pipeline in an interactive way.
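The command is:

    sam pipeline init --bootstrap   # interactively generates pipeline config and bootstraps stage resources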
So we choose a pipeline template; as I mentioned, there are five
different CI/CD systems supported.
We are going to choose GitHub Actions as an example.
Do you want to set it up now? If you choose no, it just generates
reference pipeline files instead.
We will set it up.
So give the stage name; we're giving it as dev.
And then we choose the credentials that will be used to go and deploy
the resources into the AWS accounts.
I'm using the default option here because I have a profile in this IDE
which has the ability to connect to the account,
and I'm going to deploy both stages, development and
production, in the same account.
production in the same account.
But when you're setting this up, if you give two different, roles,
then, or two different profiles.
That will be used by the pipeline to go and assume that role and deploy
the resources in each of the account.
And enter the region; the default is IAM credentials.
All of this, the IAM user, you need not create; it will be
automatically created by the pipeline bootstrap, along with the pipeline
execution role, the CloudFormation execution role, and the artifact bucket.
And this one you have to give N, because we are not doing
an image-type Lambda function.
And it will ask you to confirm, summarizing whether you are okay
with creating all this. Yes.
So after it has set up the development stage, it is going to ask the same
set of parameters for the next stage.
You can do more stages if you want; in this one, we are showing
two stages, development and production.
And if you see, it has created the access key and secret access key.
This is an important parameter which has to be set up in your GitHub
repository for this workflow to work, so you have
to save this in a safe place.
And now we go into the second stage.
For the stage name, we give prod, choose all the default role creations,
no for image type, and confirm the above values.
Should we proceed with creation? Yes.
Again, this will take some time, so while it is progressing,
let's go to the GitHub repo.
I have created a repo called conf42cloud, and here we go to Settings,
then Secrets and variables, then Actions.
Create a new repository secret; for this, you will create the access
key ID, and the same way, do it for the secret key and its value.
So it's now added.
Now go back to the terminal.
It asks for the stack names to complete the pipeline files.
So give a stack name: we are giving team A dev pipeline,
so that will be the stack name for the development pipeline.
And for the production one, we will give it as team A prod pipeline.
So we have created a .github workflows pipeline file now.
If you go here, you see the workflow and the pipeline file created here.
I have done an initial push from the IDE to this GitHub repo,
the team A app one, so you can find all of the resources
from the app one folder in here.
Okay.
Now, if I go into the IDE and make some changes here, let's say,
and then add that change, do a commit, and do a push.
So now, not only do the changes get pushed in here; if you go to Actions,
you will see a pipeline in here getting started.
It starts in the test stage, then it'll go into build and package,
then deploy to testing with an integration test, and then
go into deploy to prod.
getting, created in the cloud formation.
So something like this app one, and, Sorry, like this, team A development
pipeline and team A production pipeline.
After each of these is completed, you will get the stacks deployed.
And this part, where you see build and deploy feature, is for when you
submit any changes into a feature branch; that will get deployed
as a separate pipeline.
And when you do a pull request and merge from the feature branch into
your main branch, then it will get deployed into your actual testing
environment and production environment.
So now the build is complete, and the deploy for testing has started.
You should ideally be seeing a stack in here soon.
While that is happening, let's go back and do a feature branch commit.
Let me check out a branch called feature 1, and I will do another change,
testing feature branch, yeah, and add this, "Feature 1", commit this,
and push it into the feature 1 branch.
So when you do this one, this pipeline is already deploying,
but there should be another testing-feature-branch run created here,
which will get deployed as a separate pipeline.
So now if I go into the CloudFormation stacks, you can see that the
team A development pipeline is completed and
the prod pipeline is in progress.
The reason you are seeing it as an update is that I was testing it out
since yesterday, on and on, so it is going and updating the same stack.
Otherwise you would have seen it as a create in progress.
Okay.
So let's go back and see the testing feature branch.
I have this done in another repo; I will show you that.
In the interest of time, let's not wait for the other one.
So if you see, when you are testing a feature branch, it will go into
this path and get deployed.
And after that, when you do a pull request, it will pull the changes
from the feature branch into the main branch and get deployed.
So we can do the pull request after the feature branch run is completed.
So as you see, the feature one is now an update in progress.
There is a separate set of resources created for each of the pipelines.
Since we are doing it in the same account, there will be duplicates.
But when you do it in a real setup, you'll be doing it with different
roles and in different AWS accounts, and maybe also with checkpoints,
so as not to go directly into production: you might have some testing,
automated or manual, followed by approval, and then going into
production, the usual CI/CD process.
So now the feature one is completed.
And now if I go here and do a pull request: create a new pull request.
I'm doing this for feature one; this is the change that I have made.
Okay, create pull request, confirm, create pull request, okay.
And you are now merging.
So once you merge, automatically in Actions you should see another
pipeline triggered, where it is going to start the test, do this
flow this time, and put it in production.
I'm not going to wait for this to complete, but I hope you get the idea of
how these pipelines can be created with sam pipeline, and that this has been
useful to you in seeing how you maintain your SAM environments and templates
for different teams independently.
And use SAM not only to deploy infrastructure code, but also as a CLI
for your local building and testing, and for testing your deployments.
And then finally, if you are in a DevOps role, use pipelines
to deploy them together.
So this really speeds up the way that you build a serverless application.
And as you can see, the serverless landscape is huge; there
are so many other services in the AWS landscape that you can use.
You can bring everything under the SAM template and
speed up your deployment.
And when you are bringing it into the pipeline, you're just
bringing everything into the deployment phase very easily.
And also, the reason I had chosen this architecture is to demonstrate
the fully event driven mechanism.
One part we didn't see is the image generation; that is maybe an extra
feature set that your application can have and deploy.
I want to end by giving you some useful resources.
The Serverless Land patterns contain a lot of starter packs, with SAM
templates in different runtimes, in Python, TypeScript, Java, et cetera.
You can just clone those GitHub repos; for example, if you want EventBridge
triggering Step Functions, there is a pattern for that which you can use.
The next one is a YouTube series from one of our senior developer
advocates; his name is Eric Johnson.
This is a series of YouTube videos, each one going in depth and hands-on
to explain each SAM feature.
So if you want to really get started with SAM and know about
each of its features in detail, this is a very good YouTube series.
Thank you so much for listening to me today.
Connect with me on LinkedIn and keep watching the space on my LinkedIn feed.
I'm planning to put the app code and the SAM code into a GitHub repo,
so that you can try it on your own.
Thank you.