Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello and welcome to one more session at Conf42 Cloud Native 2025.
I'm excited to welcome you again for the second year in a row.
Today I have a very special session for all of you fellow cloud enthusiasts, for the cloud integrators, cloud developers, cloud engineers, and for anyone who wants to power their company with the power of communication, which can be provided by Amazon Simple Email Service.
It's a netrunner's guide to AWS SES: hacking communication barriers.
I'm excited to have you here, but first of all, servus from Vienna!
And I would like to quickly introduce myself.
I'm Dmitry Hladenko.
I am a cloud engineer at APA-IT and, as my colleagues like to say, Mr. Amazon.
I'm an AWS Community Builder and a user group leader in Vienna.
I'm also a speaker at community days and local user groups.
The AWS community is my favorite part of all this, and that's why I'm giving this talk for you again.
I have a master's in telecommunications and a bachelor's in business management, and I have tried lots of different things in IT. I'm so happy that I had the opportunity to land here in the cloud and do all these exciting things: to power business logic, to achieve success together, to share this knowledge, and to build really massive things with social impact.
I started with games, then tried networks and backend development, lots of interesting things, but I've been with AWS since 2020. At APA-IT, I run and operate our AWS environment, assist with software development using cloud technologies, plan infrastructure and architectures, perform maintenance, and advocate for the cloud in the company. That's why this is so important to me: it's really a thing that I love to do, and by now it's a part of my professional life.
Currently I hold six AWS certificates, plus the Terraform one and others, and hopefully by the next Conf42 I will be sitting here in the golden jacket.
As for my private life: yes, I really like theaters, but I'm also a big motorsport fan and a photography hobbyist.
And a big IT geek and an IT engineer at heart.
Quickly about APA-IT: we are one of the biggest and best-known Austrian media companies, because we are the backbone. We run the media production for the biggest Austrian media, and not only Austrian, but also for partners across the DACH region. We also provide IT services and software development. Basically, we do everything to empower our journalists and communication colleagues to keep going and to deliver great business value.
And we do lots of different interesting things.
We have critical systems and different engineering approaches.
So this is a very non-standard topic, and I'm very thankful to APA-IT for all the support and opportunity, and for a good reason to present it here.
And yeah, so let's go straight to the point.
I would like to ask you: what do you think, is success achievable if we are not communicating?
Definitely not.
So in this presentation, I would like to build around the philosophy, around the idea, that communication doesn't just send, it delivers.
It delivers your meaning, your opinion, your knowledge, your brand, your solution, your business offering.
You can deliver words to someone you love, but how do you do this from the cloud perspective, and what must you take into account if you are working with Amazon SES?
I would like to touch on the most important things when you are starting to work with Amazon SES, because this is the Simple Email Service, but some of these things can be overlooked, or only found out after hours of work.
As I mentioned on the previous slide, we currently send over 500,000 emails daily, for different systems, for different customers, for different purposes: marketing, transactional, all kinds of emails.
I would like to sharpen your attention on the security triangle, which is important for configuration.
We will have a look at the metrics that make it possible to get the maximum out of AWS SES usage.
And most important: how you, as a developer, as a cloud architect, can approach the most efficient sending that you can get out of Amazon SES.
So let's quickly have a look, all together, at email sending. Email itself seems to be easy, but it's not. First of all, let's look from the business perspective: if you have efficient communication, you are a step ahead of your competitors. You are able to push your product more efficiently.
You are able to stay in communication.
You know what is happening.
And actually, that's also what we do at APA: we monitor your company and whatever is related to it.
We can push your things.
For us, the opportunity to deliver things is a core value of our entire business structure, from the bottom to the top.
And the first picture shows a normal situation in the DACH countries.
Every time I receive paper mail in Germany, especially as an expat like me, it's a huge stress to deal with that letter later.
Is it efficient or is it not?
I think not all of us like the bureaucracy, and since we are in the business of making money, it's important to have an efficient business.
And thankfully, Amazon SES is the best solution that can empower you to do this and have your message delivered.
What else is important?
Sadly, not all the people in our world are fair.
That's why we have the very famous spam folder.
Here I will walk you through the essential things you have to do to get your message delivered.
Otherwise, you will end up in spam.
Personally, I don't want you to be in this situation.
What is also important: I will give you an example of how you can convert the statistics into information that is representative for your manager, for your product owner, and for people who don't touch cloud topics at all.
And most important here, I love the word "developers".
They are all my favorite people.
And here we will cover how you, as a cloud architect, can make their life easier.
So, let's quickly proceed to a general overview of SES.
We use it in the core applications that we run at APA-IT.
We use it for APA-OTS, for our brand-new picture service APA-Pix, in our analytics platform, for our colleagues in Germany with PressMonitor, and for lots of other things.
That's why we already have, I would say, four or five years of SES in production.
And what I admire a lot about SES is that it's a very reliable thing, if you know how to approach it.
When it comes to integration, SES is very flexible, because you can treat it like a normal email-sending backend that you just use over SMTP. That's actually what we did: for example, our media contacts solution is fully built around this single service.
The core of the whole business platform is Amazon SES, and everything else is just layers on top to form the final product. And what is important: with SES you are not just performing blind sending. You have events, you can form all kinds of metrics out of them, and you just need to come and take these metrics. Then you have the full opportunity to interpret this data however your business desires.
And regarding robust integration: SES is actually a very good example if you are new to the cloud and you would like to build event-based systems, or try message brokering, message streaming, message processing, or any kind of serverless.
SES can be a good starting point for your pet project.
And what I really like is that you can work through different examples, and it's a really well-known use case, so you don't have to dig into databases to see what is happening. You are working with a set of concepts that sit at a pretty basic level of understanding.
But the biggest challenge that comes with Amazon SES, the Simple Email Service, is that it is just too simple.
Here you have the full responsibility, as a cloud architect, as a cloud engineer, to make use of what the service provides. As I already mentioned in my previous presentations, whether you are familiar with them or not, I always consider AWS a Swiss Army knife. It gives you many things, like Lego blocks, that you can combine to achieve your business value.
And SES provides the bare minimum.
It does the sending, and it gives you an opportunity to interact with it.
Later I will show you a few examples of interaction.
And yeah, as I mentioned, starting to send does not require much.
You can be ready in probably half an hour or an hour, and you can just start sending. But you need to understand the metrics, you have to make sure you're compliant to perform the sending, and you don't get much out of the box. Recently Amazon provided the Virtual Deliverability Manager, introduced a tool to verify your configuration, and also made SES able to receive messages, but this presentation is mostly focused on sending information from you.
And yeah, as I mentioned before, SES integrates with different systems. We even integrated solutions which run on premises, third-party applications, and custom solutions.
So SES is able to act at any layer of your architecture.
For us, rate limiting, which is normal for the APIs that Amazon provides, is not a stress anymore, and I will share that as well.
It took a very long search over time, and there are a few approaches that I'm happy to share with you. We can see the status of our sendings, we control the bounces, and for us the most important thing is to see them as "message delivered". SES gives you an opportunity to analyze customer satisfaction, and since we are able to monitor the activity, we respond very fast to delivery issues.
We can manage the suppression list, and applications are even able to check the status of a sending. So what is provided by the "simple" service is not simple.
It's great, and it gives you lots of opportunities.
So, let's start with the sending.
What I would like to draw your attention to: you must understand what you will be sending.
If you are working with a business or integrating SES into your application, it's important to consider whether you will verify a single email address or verify the whole domain.
My personal recommendation is to verify the domain.
First of all, you have more opportunities to ensure somewhat better deliverability, it's more professional, and you can use basically all the subdomains and all the names you like. You're flexible, so you don't have to keep coming back to the console.
You have to think about how much you send, where you send, and how you are sending.
Because it's very important to work with the basic metrics that Amazon provides.
Amazon gives you a sending limit.
Amazon provides you statistics for your complaint rate and bounce rate, and it's very important to comply with them.
Regarding the sandbox: when you are starting with SES, you are in the sandbox.
Our practice is to keep the test environments in the sandbox.
Production, of course, gets production access.
But it's very important, when you come to AWS support, to save your time and the time of your fellow colleagues: please come with a text about your use case.
Please express what you are going to do.
Why do you need SES?
Please provide an example of the mail.
Please provide some information about your business.
Providing all these details is important.
Another important point, I think, is that your email body contains an unsubscribe link, so that you follow the fair usage of SES; then you can get pretty stable and reliable sending out of it. If you want to move out of the sandbox, you must fully express what your case is, and don't try to fool them, because that hurts the account reputation.
It hurts everything. Please be transparent, express the business value, and give as much information as you can; it's very important. Then you can get out of the sandbox.
And then, if you have good statistics, you can get a higher sending rate and higher sending limits. To be honest, even the higher sending limits require some effort in support communication.
For example, for one big customer of ours, I had to convince support a few times that they needed the bump.
So yeah, please take it seriously.
And actually, why am I talking about this?
This is a real-life screenshot of my Outlook spam folder that I took yesterday, when I was preparing and finalizing the slides.
We can see Narrative, which is a cool startup; they have an amazing solution for photo selection.
Bono is a store where I buy some things.
Erste Bank, thank you very much for still keeping my money.
And we have Luminar Neo, fellow colleagues who also have an amazing photo solution, because I like photography. And it's very sad: all this information is sitting in the spam folder, and if I didn't check the spam, I would forget about Narrative, I would forget about Luminar Neo. Those people come to you saying: hey, we have these features, we want your feedback, we want to show you that our product is developing. But you cannot hear it. Why?
This is a crucial point about any SES application, and I will tell you now: when you are coming to SES, you first of all must get everything ready: what you're sending, how you're sending, from what, and from where. I really would like you to avoid situations like this one.
This is why we have the next slides.
So what is amazing here: SES is pretty flexible, and you can generate normal SMTP credentials that you can use in Thunderbird. Actually, you can use Thunderbird for testing the SMTP credentials before you put them into the actual application.
There is a big manual in the Amazon documentation that tells you how to create these SMTP credentials by clicking through the console, and we don't like ClickOps. Actually, my user group colleague Linda Mohammed has a whole talk about going from ClickOps to DevOps, and I know all the fellow colleagues there prefer automation: CI/CD, IaC, SDKs, whatever you like.
And I really would like to share one thing.
If you scan this QR code, you will get a CloudFormation template, which you can easily convert to Terraform if you like.
I know you can do it.
The template provides the infrastructure for SES SMTP credentials, so you don't have to click through the console.
It is, for example, integrated with Secrets Manager: you can perform rotation, and you can allow the applications you're running on AWS to fetch these credentials.
So it's pretty secure, fully automated, and you are mitigating the risk of making a mistake in the console.
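As background for why a template can produce SMTP credentials at all: AWS documents a fixed derivation that turns an IAM secret access key into the SES SMTP password. A small Python sketch of that documented algorithm (the key and region below are placeholders):

```python
import base64
import hashlib
import hmac

def ses_smtp_password(secret_access_key: str, region: str) -> str:
    """Derive the SES SMTP password from an IAM secret access key
    (algorithm as documented by AWS; the SMTP user name is the
    access key ID unchanged)."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    # Fixed inputs defined by the SES SMTP credential algorithm.
    sig = sign(("AWS4" + secret_access_key).encode("utf-8"), "11111111")
    sig = sign(sig, region)
    sig = sign(sig, "ses")
    sig = sign(sig, "aws4_request")
    sig = sign(sig, "SendRawEmail")

    # A version byte 0x04 is prepended before base64 encoding.
    return base64.b64encode(bytes([0x04]) + sig).decode("utf-8")

# Placeholder secret key, never a real one.
print(ses_smtp_password("wJalrFakeSecretKeyEXAMPLE", "eu-central-1"))
```

Because the password depends on the region, the same IAM user gets a different SMTP password per region, which is exactly what an automated template or rotation Lambda has to regenerate.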
But actually, SMTP is a minor option; what you really should do is use the AWS API, because SES gives you the opportunity to read the metrics.
If you work with the message IDs, you can get information about your sending, and you can use everything that SES provides on top of it.
So you can basically perform a complete integration of Amazon SES, like a complete backend layer inside your application.
It's also good to use IAM user separation with least privileges, as it has to be done, and then you are opening the full potential. If you use SES only over SMTP, you are limiting yourself. But SMTP is still an option in some situations, for example if you have to keep a copy of the emails that you are sending. In our case, we use a shadow copy in BCC: we basically send each message back to another address of ours.
Then we store a copy of the message on S3, and together with the logs we keep from SES, this allows us to analyze any kind of situation.
So in any case, you should consider both approaches, but your plan should be to use the API.
So what is important?
Now you have your domain and your application, and everything in SES starts with validation.
You cannot even start getting out of the sandbox before it, because you need to maintain your sending and try it out to see how it behaves. And you have only 72 hours. Why?
Because when you create the identity, you have 72 hours for validation.
After that, the DKIM keys will expire.
So let's go through what Amazon gives you.
When you create the identity, Amazon provides you with DKIM keys for the domain, which validate that this domain really belongs to you.
What is also important is to use a custom MAIL FROM, so that you separate the sending to a special part of your domain, which is fully intended to be verified for usage with Amazon SES.
And, sorry, I jumped ahead; actually, regarding DKIM: you have the option to use Easy DKIM, where Amazon generates the keys just for you, and you have the option to bring your own keys. But since we work with hundreds of customers, and in some weeks I'm doing these verifications daily, Easy DKIM is the way to go, because Amazon manages the keys.
You don't have to rotate them.
You get a 2048-bit key, published as three DNS records, and they are rotated for you.
So just use the easy keys.
Coming back to the custom MAIL FROM: it's a very crucial point, and you should use it, because enterprises typically have either Gmail or Microsoft 365, and it's one of the important signals for verification, for the alignment with SES. You are saying: OK, this is Amazon SES, I know it, I gave it access to my domain, and it may send emails out of it. And here is something Amazon does not really disclose: when you do just the normal verification by email address, you should still perform this setup on your domain, assuming the address belongs to your own domain and not to Gmail, Outlook, Yahoo, Hotmail, GMX, ukr.net, whatever you use.
Yeah, this is crucial. Also important is to provide the SPF record: it performs the alignment, and you're giving permission to Amazon SES to perform the sending.
One of the practices for domain validation that we follow is to use subdomains, because it allows you to comply with, for example, some enterprise security policies.
You keep the SES configuration separate and isolated, just for SES, and it does not have any influence on the main domain, which is used for the normal sending, for example by the employees who have Outlook on this domain.
And one of the last parameters, which is the most interesting, is DMARC. Regarding DMARC, it is extremely important to have it, because of what DMARC does: for example, I'm Dmitry, and there is another guy who looks very familiar to me, and he claims to be me.
In reality, DMARC says: well, this guy has a kind of signature proving he is Dmitry; that other guy cannot be Dmitry.
DMARC validates that every email coming from me really comes from me, with my permission.
And DMARC can be approached in a few different ways.
You can apply DMARC to some percentage of the emails, which is a good practice when you are just starting with SES: when you are debugging it, trying out how stable your delivery rate is, checking whether you are satisfied or not. You can then make it more strict. What is important, when you're validating SES on a subdomain, is that by using the keys as you see on this slide, you are able to segregate the configuration from the main domain.
You can also define different parameters and feedback reporting, and if someone tries to spoof you, you get notified. DMARC, together with the custom MAIL FROM,
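To make the triangle of records concrete, this is roughly what the DNS entries could look like for a hypothetical sending subdomain of `example.com`. All names and values here are placeholders; Easy DKIM generates the real CNAME tokens for you, and the MAIL FROM MX host depends on your region:

```text
; SPF on the custom MAIL FROM subdomain, authorizing SES to send
mail.example.com.                TXT    "v=spf1 include:amazonses.com ~all"

; MX so that bounce feedback to the MAIL FROM domain reaches SES
mail.example.com.                MX     10 feedback-smtp.eu-central-1.amazonses.com.

; DMARC: start with a relaxed policy on a percentage, watch the reports, then tighten
_dmarc.example.com.              TXT    "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"

; Easy DKIM: one of the three CNAME tokens SES generates (placeholder token)
token1._domainkey.example.com.   CNAME  token1.dkim.amazonses.com.
```

The `pct=25` tag is what the percentage-based rollout mentioned above refers to: only a quarter of failing mail is quarantined while you are still debugging alignment.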
OK, it's very nice, we have done all this, but how can we ensure that it's configured correctly?
I would like to express my kudos to the owners of mail-tester.com.
It's a truly amazing tool, one of the best of all we have tried.
So: you have your SES in place, your domain validated (you made it within the 72 hours), and you have access to your DNS, because you have to put in the DNS records, either yourself in your domain, or, if you use Route 53, it's even easier.
Mail-tester gives you an email address, you perform the sending, and then you see whether your headers are aligned and whether your template is fine. You can also use templates with SES and just fill in the data with JSON inside your application. You can debug your whole sending and expose potential validation issues before you start the real-life backend production sending.
Since I'm more oriented toward developers today, there is one crucial point I would like to tell you about: out of the box, SES is not compatible with Apple.
Apple actually does great things.
I really like them, but there is an amazing feature called Apple Private Relay, where you can hide your real email address from shady applications and shady services. And by default, if you just send to their relay, you will get a bounce.
You don't want a bounce, because you might still need to communicate with your customer.
And the address will land on your suppression list.
This is a list that stores all the bounced or unreachable identities, and we take care of it; I will tell you how later.
It basically means that SES will not deliver any messages to those addresses.
What is important: after you have set up your domain and custom MAIL FROM, go to the Apple Developer Console and give your application permission for Private Relay; please refer to the Apple documentation, because this is not an Apple conference.
In some cases, if you missed this, you can write a Lambda to clean up the suppression list, for example using boto3, searching for any emails on the Private Relay domain.
You cannot do this quickly through the console: there is no batch selection.
That's why you have to do the Lambda.
But it is very important to tell Apple, in your developer console, that you are using SES, and to validate that information.
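A cleanup Lambda of the kind described could be sketched like this. The function and variable names are my own; the boto3 calls are from the SES v2 API, and the loop simply deletes every suppressed address on Apple's relay domain:

```python
# Sketch of a suppression-list cleanup Lambda for Apple Private Relay
# addresses. Assumes the account-level suppression list of SES v2.

PRIVATE_RELAY_DOMAIN = "privaterelay.appleid.com"

def is_private_relay(address: str) -> bool:
    """True if the address belongs to Apple's Private Relay domain."""
    return address.lower().endswith("@" + PRIVATE_RELAY_DOMAIN)

def handler(event, context):
    import boto3  # imported lazily so the module loads without AWS dependencies
    ses = boto3.client("sesv2")
    removed = 0
    paginator = ses.get_paginator("list_suppressed_destinations")
    for page in paginator.paginate():
        for entry in page["SuppressedDestinationSummaries"]:
            if is_private_relay(entry["EmailAddress"]):
                ses.delete_suppressed_destination(EmailAddress=entry["EmailAddress"])
                removed += 1
    return {"removed": removed}
```

Scheduled, for example, via an EventBridge rule, this keeps relay addresses from staying blocked after you have fixed the Private Relay configuration.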
So: the domain, the custom MAIL FROM, and also the SPF.
Going forward: when you work with SES, you might see while browsing the console that you already have some prebuilt metrics.
They are not enough.
That's why it's important to work with the events: set up logging, build CloudWatch dashboards, form your own metrics.
Out of the box, SES gives you metrics like bounce rate, complaint rate, and delivery rate.
Delivery rate, I think, is very easy to maintain.
It's mostly the bounce rate that is tricky to handle.
If you are sending out of a database, for example, some of the emails can already be outdated, or something happens.
You can refer to the logs and kick those addresses out of the suppression list.
So the main strategy for complaint and bounce rate is that you analyze your logs, you alert yourself, and you maintain this level.
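As a small illustration of forming your own metrics from events: SES event publishing delivers JSON records with an `eventType` field, which you can tally into the rates discussed above. The sample payloads here are abbreviated and hypothetical; the counting logic is my own sketch:

```python
import json
from collections import Counter

def tally_events(raw_events):
    """Count SES sending events by type (format per SES event publishing)."""
    counts = Counter()
    for raw in raw_events:
        event = json.loads(raw)
        counts[event["eventType"]] += 1
    return counts

# Abbreviated sample events, as they might arrive via SNS or Firehose.
sample = [
    json.dumps({"eventType": "Delivery", "mail": {"messageId": "0001"}}),
    json.dumps({"eventType": "Bounce",
                "bounce": {"bounceType": "Permanent"},
                "mail": {"messageId": "0002"}}),
    json.dumps({"eventType": "Delivery", "mail": {"messageId": "0003"}}),
]
counts = tally_events(sample)
bounce_rate = counts["Bounce"] / sum(counts.values())
print(dict(counts), f"bounce rate {bounce_rate:.0%}")
```

Feeding such counts into CloudWatch custom metrics is one way to get the alerting described above, instead of relying only on the prebuilt console graphs.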
Please be fair, because out of the box, AWS SES sends from a shared IP pool.
The shared IP pools are pools that Amazon has in Ireland, in the United States, in Frankfurt, in all the regions, which share the sending capacity.
And you are also sharing the reputation with the other users of SES.
Yes, you can have a dedicated IP, but dedicated IPs are really an option for the big enterprises.
I would say that for your startup, your private usage, your application, if you're below 100,000 emails daily, the shared pool is simply fine.
You can see some nasty things: for example, Google throttles SES emails from time to time. We noticed it because the copy to our own server arrives immediately, while the Gmail copy arrives 15 minutes later.
Yes, it's very nasty, but you cannot predict it, and every time it hits different SES identities.
But if you're an enterprise, you can have a dedicated IP that you manage yourself, or an IP managed by Amazon.
The price difference is only about $15 versus $20 monthly.
I would opt for the managed IP, but you still have to maintain consistent sending.
So you're an established organization, you know what you are doing, and you have complete control over your reputation.
But you must use third-party tools to monitor your presence on the blacklists.
As you know, the IPv4 pool was exhausted ages ago, so the IPs you get have a history.
You have to check for yourself, because sometimes, for some stupid reason, you can end up on a blacklist, and then your sending is taken away from you, because the enterprise scanners, Ikarus, Symantec, or whatever else is on the market, take those blacklists into account. You have to monitor it.
Amazon does not provide this, but there are enough tools.
You can even script it yourself.
We have our own, because we have been at this for a while.
When you're starting with a dedicated IP, another important thing is to pre-warm the IP.
You do not jump on this IP like a lion, sending a million emails to the whole world saying "I love you" or whatever you like to say.
You start slowly.
You send 2,000 emails weekly, then 5,000, 10,000, 15,000, 20,000.
For example, for one of our customers, we separated their sending across different SES accounts in their organization.
We use the shared pool for the smaller branches in other countries, and for the Austrian branches we started in a hybrid mode: sending some parts with a dedicated IP and some parts with the shared pool.
So please have a look at the slide. You have to start slowly, but it's a good option if you are an established enterprise.
Now to my favorite part, which is actually the core of why I wanted to make this presentation.
Please don't forget that Amazon is a cloud platform, and it's intended for professionals.
If you just need to easily send emails, you can use Mailchimp, but Mailchimp is not a professional solution; it's not as flexible as SES.
If you are working with an AWS SDK, for Java, Go, Kotlin, Python, whatever you like, and there are lots of languages, it doesn't matter: please don't go directly to the send-email API.
You could ask me: Dmitry, why?
There is a function, just call it, it will send, and everything is good.
No, it's not good, because it's a plain call without any protection, without anything.
Yes, you make the call, and if you comply with your rate limit and all the service limits provided for your account, you will have luck, it will work. But after some time, when you grow, you will notice: I'm having delivery issues.
Emails are gone.
Emails are missing.
Why?
Because you're hitting the API throttling.
So the core idea of the next few slides is how we approach the rate limit. Rate limiting is especially important because this is the amount of power that Amazon gives you in SES: without a rate limit you cannot send anything.
So just don't do it like this; I will tell you how to do it.
Some more details about the send limit, because it's a bit confusing. I got different answers from my account manager at Amazon, different answers from AWS premium support, we also recorded and observed different behavior, and Claude, or whatever AI you prefer, differs too. So the truth is this: you send one email object, and your rate limit is counted by the number of recipients within one second.
So if your limit is 32, for example, you can send in one second one email that contains 30 recipients and a second email that has 2.
If you go over, there is a small burst allowance on the send limit, but no one knows how big it is, not even the AWS documentation reflects it, and then you will hit the API limit.
You must take into account that this is a crucial value: it's the capacity that you can plan for sending in your application.
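The recipient-weighted counting above can be planned ahead of time. A minimal sketch (the limit value mirrors the example from the talk; the function name is my own):

```python
def plan_batches(recipients, per_second_limit):
    """Split a recipient list into batches that each fit one second's quota,
    since SES counts every recipient toward the send rate, not every call."""
    batches = []
    for i in range(0, len(recipients), per_second_limit):
        batches.append(recipients[i:i + per_second_limit])
    return batches

# 32 recipients against a rate limit of 30/s: one full batch plus a batch of 2.
recipients = [f"user{i}@example.com" for i in range(32)]
batches = plan_batches(recipients, 30)
print([len(b) for b in batches])  # [30, 2]
```

Each batch is then sent in its own one-second window, which is exactly the capacity planning the slide is about.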
If you look at rate limiting, I see two perspectives.
Either we perform a logical separation of the limit, like spreading butter on bread, or we grow the capacity itself, the way you build a relationship with a partner: you come, and you work on it.
If we talk about capacity slicing, this is what I mentioned before about our customer.
This is actually one of the approaches.
We have a service that works for different regions with different identities.
For example, we say: for Austria and Germany, we send from this identity in this account, 30 emails per second. For Ukraine, we send 15 from another account; from the United States one, the United Kingdom 10, Switzerland 20.
So this is a logical separation of the shared pool. For a higher rate you would need to use four or five dedicated IPs, and that costs you money: it could already be $150 to $200.
And for redundancy, you always have to go with at least two.
Sorry, I forgot to mention that.
And, Yeah, so and approaching yours limit is about our work of the cloud
architects with the cloud also together with the fellow programmers and Yeah,
we have tried different approaches Lots lots of safe on the internet, but here I
would like to combine two main pathways So this case it's the most easy it some
of them people call it leaky bucket that you are Defining the capacity
with who you are throwing the emails.
The problem is that it's not flexible approach.
It's So it's basically shrinks you in this limit and you cannot get more of
the sense that you could potentially say it on the send on the unit of the time.
Yeah.
And for example, what we are doing: this is for the applications
that work with one single recipient.
So for example, we have a rate limit of 30 and the application
is working in OpenShift, EKS, whatever you like, with two pods, two workers
for the application, and we are hard-coding the limit 15 by 15.
And we are chunking all the recipients inside the application:
for example, we have to send 20 emails, and we have only 15 per pod,
so the pod creates chunks, for example 15 and 5, and using precision
timers like Resilience4j for Java, or a leaky bucket for
Python, you are doing the sending.
And the good point is that you are basically decorating the standard
SES API calls while you are sending.
The problem is that, for example, in auto-scaling systems this approach
would require some expansion to take the limits into account, which is additional work.
And there is still a chance that you can hit the throttling, because if
something goes wrong, for example if the connection with AWS is broken, or
something else happens, you are still losing your email.
It's a good approach when you have a repeatable pattern of sending, and it's
a very good approach when you have just one scenario and it's pretty stable.
So you can do it.
It actually already must be the baseline: it's the basic usage of SES,
where you are saving yourself from the potential API throttling.
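The per-pod rate limiting and chunking described above can be sketched in a few lines of Python. This is a minimal sketch, not the speaker's actual code; all names are hypothetical, and in real use `acquire()` would decorate the boto3 `ses.send_email` call the same way Resilience4j decorates it on the Java side.

```python
import time

class TokenBucket:
    """Allows at most `rate` sends per second (this pod's share of the SES limit)."""
    def __init__(self, rate: float):
        self.rate = rate
        self.tokens = rate
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until a send token is available."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at the rate.
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

def chunk(recipients: list[str], per_pod_limit: int) -> list[list[str]]:
    """Split recipients into batches no larger than this pod's per-second share."""
    return [recipients[i:i + per_pod_limit]
            for i in range(0, len(recipients), per_pod_limit)]
```

With a rate limit of 30 and two pods, each pod gets a `TokenBucket(15)`, and 20 queued recipients would be split into chunks of 15 and 5 sent across two ticks.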
So this is the scenario one.
Then I started to look around for some approaches. Yeah, as I said,
this one is in honor of our community members who are working with serverless:
we are moving the sending logic to the cloud. It's also good from
the perspective that, for example, you're having a unified sending
platform and you don't have to re-implement the same things a few times in
the code or maintain some custom library.
You are just abstracting the sending to the cloud level,
away from your application level.
And here, we already have the SQS queue with the dead-letter queue.
The actual sending is performed from the Lambda.
Here it's called heartbeat, but it's not the heartbeat;
it actually just runs the sending code.
Yeah, and here you can see it also extends to the event processing
piece; I'll mention it a bit later.
So, yeah.
And the good thing with this approach is that you already have a chance to do
resending if something happens, and you are throttle-safe, because you can
ask Lambda to consume the messages at a rate which
will comply with your current rate limit.
And Lambdas can also scale.
And before, it was not a thing.
And yeah, with this approach you already have some baseline reliability.
The drawback is that you have to re-implement the actual SES
logic in your Lambda, rather than going directly with the library.
But basically you are just migrating the code from your application to the Lambda.
And, yeah.
So it's one of the things.
And it was very nice, and it was already working, because
it additionally improves the reliability of the sending.
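A minimal sketch of such a queue-consuming sender Lambda could look like this. The `send_via_ses` stub is hypothetical and stands in for the real boto3 SES call; the key property is that a raised exception makes SQS redeliver the message and, after the maximum receive count, route it to the dead-letter queue, which is where the baseline reliability comes from.

```python
import json

def send_via_ses(job: dict) -> str:
    """Hypothetical stub: real code would call
    boto3.client('ses').send_email(...) and return its MessageId."""
    return "stub-message-id"

def handler(event: dict, context=None) -> dict:
    """SQS-triggered Lambda: each record body is one email job as JSON.
    Any exception here lets SQS retry and eventually use the DLQ."""
    sent = []
    for record in event["Records"]:
        job = json.loads(record["body"])
        sent.append(send_via_ses(job))
    return {"sent": sent}
```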
And then I started to think: okay, we are sending with the Lambda, and Lambda
could run a few different, I don't know, 20, a few hundred Lambdas, for example,
if we have a sending rate of 100 but we have one Lambda working in one second.
It's still not utilizing the full capacity of our sending limit.
And what is also important is that, yeah, there is still a chance to go
over the limit if something goes wrong, because you miss some fine-tuning
or you have some unexpected behavior.
So I started to think. First of all, I thought about the timing. Thankfully,
a good fellow colleague from AWS, Dennis Traub (actually he's on the left,
on the picture, third from the left, and I'm also in this picture, in the
white t-shirt) wrote a very nice article on how you can use Step Functions
for precise triggering of the Lambda functions.
That's the first point.
Then, Mark Richman and Guy Lovely expressed a few additional
approaches for the sending.
And then I hit a work from Michael Haar, who also pointed out
a very creative approach to using the features of the AWS services.
What he did: he used the AWS Lambda concurrency, and basically he's feeding
the emails to the SQS queue, and Lambdas are launching as many as the limit
allows; for example, for the sending limit 50, he can approach 50 Lambdas.
But there is another problem.
It goes back to the first solution that I told you about: you
are working only with one recipient.
And you have to ensure this.
We have different cases, and I started to think: okay, looking at
Dennis, looking at Mark, at Michael and Guy, what can I do?
And let's have a look.
So, I took all the best from those approaches, and we are creating the
orchestrated pipeline for the sending with AWS Step Functions
at the core, because this is the pipeline that we intend to
run every second, and EventBridge precisely triggers it every second.
That's why I called it a heartbeat.
What is important is that our users are sending, sending,
sending, and the application collects it.
So it goes to the email queue.
We are not taking those emails for sending directly.
We are taking them, we are reading the recipients, and we are
dynamically, with the heartbeat, changing the Lambda concurrency.
So we are delegating, as I mentioned before, for example, 20 here, 5 here,
5 here, balancing our limit.
So I think you got the idea: at any given second we have a
dynamic situation that is adjusting based on the email flow that we have.
We cannot exceed the rate limit, and we are filling the capacity.
We are mitigating the latency, which could actually be very noticeable,
especially in the first and second cases.
And, yeah, so we are keeping the rhythm of the sending,
like in a Brazilian dance or whatever you prefer.
And it's dynamic, based on what the service is sending, and you
don't have to perform the chunking.
You're basically just handing things over for the sending,
and it takes care of it for you.
It can be complex.
It may require some additional programming, additional setup.
But this is the idea that we recently started to use.
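The heartbeat's concurrency arithmetic can be sketched as a pure function. This is a sketch with hypothetical names, not the actual pipeline; in real use, the Step Functions heartbeat would apply the result every second via the Lambda `put_function_concurrency` API.

```python
def allocate_concurrency(queue_depths: dict[str, int], rate_limit: int) -> dict[str, int]:
    """Split the account's per-second send rate across sender Lambdas in
    proportion to how many emails each identity's queue currently holds.
    Floor division guarantees the allocations never sum above rate_limit."""
    total = sum(queue_depths.values())
    if total == 0:
        return {name: 0 for name in queue_depths}
    # Never allocate more workers than an identity has queued emails.
    return {name: min(depth, depth * rate_limit // total)
            for name, depth in queue_depths.items()}
```

With a rate limit of 30 and queues holding 20, 10 and 10 emails, one tick would set concurrencies of 15, 7 and 7: full capacity is approached, the limit is never exceeded, and the split re-adjusts on the next heartbeat.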
And we are excited that, for example, before, we were sending
about 900 emails to the customers in some special case;
it's our broadcast list, basically saying, yeah, there is some info for you.
And it was taking a few minutes.
With this approach, we can do it in a few seconds, because we are just filling
the full capacity with the copies of the emails with lots of recipients.
And going back to the solution from Michael Haar: please Google it.
If you know that you are sending one email, you can go with the rate limiting
in your code, or with his solution.
If you're having more advanced cases, please build something like this.
You will really like it, because you don't have to stress; you
don't have to come to your manager and say: we lost the emails.
Sorry, sorry, sorry.
And yeah, this way you have a second chance to send it.
You are utilizing the capacity, and you are minimizing all the risks
that you have by default from the throttling of SES, yeah,
just to call this email sending.
So, yeah, this is the most important part, and let's make some conclusions.
Chunking and rate limiting: one user or one recipient, stable sending,
stable pattern, no need to do cloud architecting.
SQS plus DLQ is also for this scenario, but it gives you more confidence
that you don't lose the email.
Lambda concurrency: when you want to maximize the output and limit the latency.
And you're creating the heartbeat state machine when you need the dark horse.
It's actually what I mentioned in the beginning: you are working
with it like a service integration.
And as I have underlined in my previous presentations, always consider
your AWS architectures, especially the serverless ones, like a waterfall.
It's a series of events, it's all flowing, and this is how you are doing
it. Basically, yeah, here you're flowing the emails, you're processing
them, you're using the capacity, and you're delivering good business value.
So everyone is happy.
Yeah.
So, another very important thing I would give you.
We are approaching the end, sadly.
I hope you still enjoy it.
So SES, as I mentioned, has the events.
Events are crucial.
And the most important thing you have to know
about your email is the message ID.
The message ID is basically a stamp, your unique identifier for
your email, and you can use it like an identity, like a passport;
I don't have one to show you, but you know it.
So you're working with this specific email, because then you
can retrieve the related events.
And if you're activating the events in your configuration set,
you can control the delivery, you can control bounces, complaints,
you can control openings with tracking pixels, control the
links, control the timing statistics.
So it's actually very crucial information for you from the technical
side, and also for having an understanding of the reputation management.
That's why SES is way cooler than any kind of mail server that just throws
out the emails. And yeah, so you must work with events, and the first thing
you do: you're setting up the SNS topic, you're giving the permissions,
and SES is then throwing this to the Lambda. And just, please, five lines
of code, I ask you a lot to do it if you don't know how: it will save your
events to CloudWatch, and then you have an opportunity to create
alarms, use metric filters, use the Logs Insights, and you have the
overview of what is actually happening with your sending.
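Those "five lines" can look roughly like this sketch (hypothetical, not the speaker's code). The SNS record's `Message` field carries the JSON SES event, and printing from a Lambda lands in CloudWatch Logs, where metric filters and alarms pick it up.

```python
import json

def handler(event: dict, context=None) -> list[dict]:
    """SNS-triggered Lambda: log each SES event to CloudWatch Logs."""
    summaries = []
    for record in event["Records"]:
        ses_event = json.loads(record["Sns"]["Message"])
        summary = {
            "eventType": ses_event.get("eventType"),
            "messageId": ses_event.get("mail", {}).get("messageId"),
        }
        print(json.dumps(summary))  # stdout from a Lambda goes to CloudWatch Logs
        summaries.append(summary)
    return summaries
```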
What we are also doing is that applications are keeping those message IDs
in the database, and then they are tracking the statuses of the emails.
We use this data to actually reflect it in the functioning of
one of our applications.
So just a single fetch of the JSON event opens you lots of opportunities:
from the basic logging, to the observability, to having a better
understanding of the business logic, and probably also to create data sets.
And with the data that is coming out of SES, you can also improve your
user data: for example, create the reports for customer success or customer
care, and they can communicate with your customers. And together
with the proper configuration, for example the sending approaches, you will
maximize the output of what you're doing: you are sending the
information, it gets delivered, and it's what you must do.
So, and yeah, I always say that AWS is a Swiss Army knife.
It gives you services, it gives you tools, it gives you lots of things
that you should actually use.
So you can construct whatever you need; you just need to couple the services.
And basically, if you know CloudWatch, if you know SNS, if
you know SQS, DynamoDB, and if you know Lambda and you can program,
for example, in Python, you can already do like 80% of the business
cases that require your AWS logic.
And this is the case with SES.
So, some architectural inspiration.
This is an example where we are actually sending through
SES, and it records the events.
What is actually very good is that, yeah, we put those events to
CloudWatch, then we are building the shared dashboard, which allows us to
control the sendings for the identities, the sending rate, and have a
visualization of the failed recipients.
So even to the people who are not familiar with AWS you can
say: hey, please come here, I have prepared information for you.
You are working with the event;
it converts to readable information that delivers the value.
And also, for example, there is a Lambda; what I do here is
an EventBridge trigger, once a week. It was a big solution for
our non-technical colleagues.
So it goes to the logging bucket and collects, for example, all the bounces.
In the events you also have technical information about
the sent mail, but not the body itself.
Yeah, it's sad; that's why we do the copies somewhere, as I said
in the beginning.
And yeah, so you're sending it, and we have a report, for
example, with the customers who are having failing deliveries, and since for
this company the delivery of what people purchase is at the core, it's very important.
And it's a quick solution.
Then the clever solution was that we worked
together with the developers.
Actually, what we were doing: we were processing, with a special
separate Lambda, those kinds of events that are coming from the SES.
And, for example, I'm sending to the email, I don't know,
ilovedarjohnny@gmail.com.
So yeah.
And, for example, I don't know, my email is blocked, but I have this email
in my user profile of this application provided by this
company in their product, and SES tells the Lambda: hey, we have this
situation, and Lambda makes a report to the DynamoDB.
DynamoDB is amazing for these cases.
You don't have to run Aurora.
You just create the table, use the TTL, and you can even have a global
table if you care about the latency, but it's not the case here.
Then through the AppSync, because you can have GraphQL, you don't
have to build the whole API Gateway.
Here we used AppSync because it's just easier.
And this is the crucial thing about AWS: you combine things.
And yeah, so we go to the application saying:
yeah, the email is wrong, because we know the SMTP codes, to which CloudWatch
reacts, and then by the metric filter we trigger the Lambda that does the
processing, and then the user gets a banner:
hey, please send us your information.
That's cool.
And yeah, we make the recording in CloudWatch.
We visualize it in QuickSight, and the customer team comes
here, sees what the situation is, and they can respond to the cases which
are not processed by the customers themselves. And it's amazing.
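The core transform of such a bounce-processing Lambda might look like this sketch (field names on the output item are hypothetical; the SES bounce event shape is standard). The real code would hand the item to DynamoDB `put_item`, and the `ttl` attribute lets DynamoDB expire stale entries on its own.

```python
import time

def bounce_to_item(ses_event: dict, ttl_days: int = 90) -> dict:
    """Turn an SES bounce event into a DynamoDB-style item keyed by the
    bounced recipient, with a TTL so old records expire automatically."""
    recipients = [r["emailAddress"]
                  for r in ses_event["bounce"]["bouncedRecipients"]]
    return {
        "email": recipients[0],
        "messageId": ses_event["mail"]["messageId"],
        "bounceType": ses_event["bounce"].get("bounceType"),
        "ttl": int(time.time()) + ttl_days * 86400,  # epoch seconds for DynamoDB TTL
    }
```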
So another thing that was here: it was for one of our things, and it's
also an example of the queue usage.
So when you are sending the email, you always have recordings
of the different stages: sent, delivered or bounced, complaint, opened.
And we wanted to use this information about the email.
So what we did: we have a Lambda, we are reprocessing it to
the format that we need, and we send it to the queue. And then, for example,
the application comes; it knows that it has information with such a
message ID, and it says: yeah, this message ID status
is updated, and this is how we reflect this information inside the application.
Or, for example, here is also another of our cases where we are performing
the sending; we have, for example, an object of the sending, emails that are
spread to the customers, because in one of our applications you can send, for
example, your press release or your media information, using our databases,
to the partners, to the companies, to the journalists in the media
contacts; plus, there are lots of them.
It's very simplified here, just not to overload you.
And SES is already preparing the information, which is used by the application,
which runs in our case on ECS.
And, uh, yeah.
So you don't even have to implement this logic on the application level.
You can already prepare the data, both for the real humans and for your
application, based on the statuses that you can fetch from SES.
You can track your limits, build business statistics presented for your
customers, for your owners, and so on, yeah.
And the last point here is about the non-technical people.
One of the amazing services is Amazon QuickSight, and Amazon
QuickSight is the best point of contact with the
cloud for the non-technical people.
You prepare the visualizations; you can process your data also with the
ETL processes, with the Glue, or with Kinesis, whatever you have.
So you just create the dashboards, and people are coming like,
I don't know, into Microsoft Excel: they are having filters, statistics,
information, especially together with Amazon Q.
They can work with the data that you're taking out of SES, and
it's very crucial for the customer.
Training time is minimized, and QuickSight is actually one of the best
things that Amazon is offering.
And if you want to connect non-technical people with the work that
you are doing, one of the best ways is to just ship it to QuickSight.
You will really like it.
So, sadly, we are approaching the end.
So, yeah, if you are working with SES, the configuration is the success.
Take care about the DMARC.
Take care about the SPF alignment.
Use the custom MAIL FROM, and please separate the SES configuration from the
configuration of your main domain.
It's actually a requirement of the big enterprises.
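As a sketch, separating SES onto its own subdomain with a custom MAIL FROM usually comes down to a few DNS records like these (the domain names are placeholders, and the MX target depends on your SES region):

```
; SPF for the custom MAIL FROM subdomain
bounce.mail.example.com.  TXT  "v=spf1 include:amazonses.com ~all"
; MX so SES can receive bounce feedback on the MAIL FROM domain
bounce.mail.example.com.  MX   10 feedback-smtp.eu-central-1.amazonses.com
; DMARC policy for the sending domain
_dmarc.example.com.       TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```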
So, the current spam defenses are very aggressive,
and you have to take everything into account
not to destroy the value of your sending reputation. This is important.
If you are using the shared IP pool, then please be fair; don't break
the reputation of the IPs. For example, some of the servers used in DDoS
attacks are just VPSs by some providers, but they're already blacklisted.
So, please, use fairly what you get.
The sending rate is much more important than you think, and please approach
it in some of the ways that I described; and if you know a better
way, please let me know.
I am always open for any discussions, and yeah, like with any service:
just come, architect, be creative, use the knife, combine the services, build
event-based systems, build serverless systems, integrate it into your legacy
applications, build the bridges, reuse the API Gateways, AppSync; you have
lots of opportunities to get the maximum fit, and this is our work to do.
So, going to the end.
I would like to thank you who is already here.
So it's a pleasure.
I want to thank everyone.
So AppIT, Conf42 for this amazing opportunity.
I really hope it's useful for you and will give you some inspiration
of what you can do, what you can achieve.
Please contact me on LinkedIn.
Please write me emails.
Please just write to me.
I'm always happy.
And we have a few opportunities to meet in person.
Please, first of all, join me at AWS Community Day Italy
on the 2nd of April in Milan.
And please also come to the AWS Community Day that I'm organizing together with
our Ford of Orion team; we have 30 speakers, two keynote speakers, lots
of amazing sponsors, and it's a good opportunity to talk about SES in
person. But, in general, that's it.
I'm wishing you all the best in your cloud adventures.
Please do the architecting, don't hesitate to do the great things, you can do it,
and we will meet either online or on-site, but in the clouds.
Thank you very much and we are going forward.