Conf42 Cloud Native 2023 - Online

Your Lambdas, In Rust!

Abstract

Rust is taking the software engineering world by storm, but how does it affect serverless? In AWS it's not even a supported runtime, so how can we even use it… and should we even try? Spoiler: yes, we should, and it's actually quite easy to get started with it!

Summary

  • Luciano is an AWS Serverless Hero, a certified solutions architect and also a Microsoft MVP. He works for a company called fourTheorem, a consulting company especially focused on serverless on AWS. If any of this interests you, feel free to connect with him or send an email.
  • Serverless is a way of running applications in the cloud. Developers only pay for the actual usage of their application, not for the infrastructure. In Luciano's opinion, serverless increases team agility. But still, it doesn't scale indefinitely.
  • AWS Lambda is the FaaS (function as a service) offering that you get in AWS. It's basically a unit of compute that looks like a function. Lambdas can be triggered automatically by AWS when specific events occur, and you can also invoke a Lambda manually.
  • The cost model is generally a function of allocated memory multiplied by time. Running a Lambda for 15 minutes will cost you less than a cent. What about CPU? CPU gets configured automatically, in proportion to the amount of memory that you allocate.
  • When you write a Lambda function, you generally write what is called a handler function. This is the function that represents your business logic that needs to respond to an event. Many languages are supported, and you can even write your own custom runtimes.
  • You can use Rust by building a custom runtime written in Rust, and in this runtime you can also embed your own functions. This has pros and cons. The con is that you don't get security updates for free; as a consequence, though, you also get better performance.
  • Rust is a relatively new language; the first version was released in 2015. It is focused on performance and memory safety. Cargo, its package manager, can be extended with third-party commands. There is a great ecosystem, and the maturity of the toolchain is quite impressive.
  • Rust combines strong performance with very efficient memory usage, which are the two dimensions on which Lambda cost is calculated. Cargo Lambda can cross-compile for Linux. There are no null types and there is a great system for dealing with errors. With Rust you might save a lot of money.
  • Cargo Lambda allows you to build serverless applications in an easier way. SAM is a YAML-based infrastructure-as-code tool. It gives you a slightly simpler abstraction and shortcuts, and takes care of generating the CloudFormation code.
  • AWS SAM works with Cargo Lambda; SAM is trying to support the Rust ecosystem. Luciano recently built an application that tells him if there are earthquakes close to his family. Check out the repository if you want to try it out.
  • With Lambda and Rust, we put a little more effort into learning a new language and writing more optimized code. Where do we find the sweet spot? It's not something you can define in advance. The best way of finding the sweet spot is by trying different configurations for your Lambda.
  • Learning Rust is not easy, but is it worth it? My personal answer is yes. I've personally been on this journey of learning Rust for probably the last three to three and a half years. Learning Rust on its own is a good investment.
  • So that's all I have for today. If you want to grab the slides, there is the link again. Check out my book and give me some feedback. Reach out to me and let me know what you think.

Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hello everyone and welcome to this talk called Your Lambdas, In Rust! Before getting started, I'd like to share the slides with you, because I'm going to be showing you some code examples and giving you a bunch of different links, so if there is something that interests you, you will have the slides ready for later. While you scan the QR code or go to the link you see there, I'm going to introduce myself. So hello again everyone, my name is Luciano and I am an AWS Serverless Hero, a certified solutions architect and also a Microsoft MVP. I work as a senior architect for a company called fourTheorem; more on that in a minute. You might have seen me in the Node.js world, because I'm the co-author of a book called Node.js Design Patterns. If you have read this book (not relevant for this talk), I'd be really curious to know what you think about it, so definitely feel free to connect with me and let's chat after this talk. I'd love to talk with all of you. So, about fourTheorem: we are a consulting company, especially focused on serverless on AWS. We are also AWS consulting partners and we spend a lot of time helping customers do cloud migrations. We train their teams, we help them get started with the cloud, we have been building a few interesting high performance serverless applications, and we have also helped a few companies cut down the cost of their cloud expenditure. If all of this is something that interests you, feel free to connect with me or send an email to the address you see there. We are also hiring, so if that's something you want to discuss, again, let's have a quick chat; I'd love to know more about you. We are very committed to content creation. We create a lot of articles, and we are also very keen on publishing weekly episodes of our podcast; this is just one of our episodes, so if you are into AWS, you might want to check out our podcast and let us know what you think about it. So, enough presentation, let's get into the talk. The agenda for today: I'm going to try to explain what serverless is and what its main benefits are, in my opinion. We're going to go through what AWS Lambda is and its pricing model. We are going to discuss why Rust, why it's so cool, and we're going to look at a specific tool called Cargo Lambda, with some examples. Then we are going to discuss how to use Cargo Lambda with another tool called SAM. And finally, we're going to look at some Lambda tuning and some closing notes and questions that you might have. So let's start with serverless. What is serverless? It's always very, very hard to describe what serverless is. So I took a different approach, a more modern approach, I dare to say, and I asked ChatGPT: what the hell is serverless? And I was actually surprised that it gave me quite a good answer. It's a bit long, so I'm going to try to focus on the main points. One of the main points is that it says it's a cloud computing model where the cloud provider manages the infrastructure and automatically allocates computing resources to execute code in response to events or requests. Another good point is that there are no servers to manage or provision. Another one is that developers only pay for the actual usage of their application, not for the infrastructure. And finally, examples of serverless services include AWS Lambda, which is the most common serverless service.
Okay, if we want to summarize all of that in a nutshell: serverless is a way of running applications in the cloud. And of course there are servers, we just don't need to manage them as developers, and you pay for what you use. Finally, you have small units of compute, which are generally called functions, or functions as a service, and those are triggered by events. So there is an event driven model that is kind of the default way of building serverless applications. Now, serverless brings some benefits, in my opinion. The first one, and probably the most important one, is that because you don't have to think so much about servers (how to provision them, how to size them, how to keep them up to date, how to install security patches and so on), you can focus a lot more on the business logic. I think serverless today is not yet perfect, meaning that there is still a little bit of a learning curve, there is still a little bit of infrastructure as code that you need to write, and you don't get to focus as much on the business logic as I wish serverless would let me. But I think this is the general principle of serverless, and it's going to keep getting better and better. So, as a general concept, we can say that the ideal serverless implementation will let you focus more and more on business logic rather than everything else around the infrastructure. In my opinion, serverless increases team agility. By that I mean that there is a learning curve initially, but once you get over that learning curve, teams generally can be much more independent. They can ship with a very good frequency, and you can easily change things around because the units of compute are smaller. So it's easier to swap one function and rewrite it entirely, or change the language, or try different things. Or maybe you want to test a new idea: you can just ship a new serverless project, you don't need to rely on existing infrastructure. So in general, the serverless approach gives you many, many opportunities to ship things fast and try new things. And if things don't work out, you can try new things again without having too much at stake that you cannot change anymore. The other point is that you get automatic scalability, and this is kind of true, meaning that serverless scales quite well. You don't really need to think too much about how to scale your servers if you have a sudden spike of usage; all of that happens automatically. But there is a caveat, because there are still some boundaries that you need to understand. Different cloud providers will have different limits or quotas, and it's important to understand how these limits and quotas work because they will affect how much you can actually scale. There is a quite good default level of scalability that works out of the box, and you don't have to think too much about it. But if you really are a heavy user of the cloud, and you might have thousands or hundreds of thousands of invocations, there will be limits that you need to understand and figure out how to overcome, and it might not be trivial. So in a way, serverless scales much more easily than more traditional deployments. But still, it doesn't scale indefinitely, as we might like to think. Overall, I will say that serverless is not a universal solution. It is a great solution and it can work well in many situations.
But you need to appreciate the trade-offs, the pros and cons, and make a judgment call about whether you want to use serverless or not. For the sake of this talk, we're going to assume that serverless is a great use case, so we're going to see more about how it works, how we can use it with Rust, and what the advantages of all of that are. Now let's focus a little bit more on AWS Lambda. AWS Lambda is the FaaS, function as a service, offering that you get in AWS, and it's basically a unit of compute that looks like a function. You write a function with some inputs and some outputs, and this function can be triggered automatically by AWS when specific events occur; you have to define which events your Lambda is going to be triggered by. Some examples: it can be an HTTP request, for instance if you use an API Gateway. Or a new file in S3, so somebody is creating a new file, maybe a user is uploading something and you want to do some processing: you can trigger a Lambda starting from that new file appearing in an S3 bucket. It could be a job in a queue: you could implement a pool of workers just by using Lambdas, and they will keep pulling jobs from a queue and execute those jobs as soon as new jobs are available. Or you can create complicated flows by using a tool like Step Functions, where you are orchestrating an entire flow and calling different Lambdas, and every one of these Lambdas can be responsible for a particular step of that flow. Or you can even run a Lambda on a schedule. Maybe you want to trigger a backup, or you want to ping a specific web page to check if there are updates: you can create a schedule and trigger that Lambda, for instance, every hour, or every day, or every weekend. And finally, you can also invoke Lambdas manually. For instance, if you just want to create a workflow and trigger it by hand, you can create a Lambda and trigger it whenever you feel the need. Now let's look a little more at the cost model, the pricing model, that you get with AWS Lambda. The cost model is generally a function of allocated memory multiplied by time, which basically means that when you define a Lambda, you need to allocate a certain amount of memory, and when you execute the Lambda, it runs for a certain amount of time: you pay a value that is proportional to the memory multiplied by the number of milliseconds that your function has been running. Let's actually see an example to understand this a little better. If we allocate 512 megabytes of memory for a given function, one of the regions (I don't remember exactly which one) will have a price of about $0.0000000083 per millisecond. So if we execute this Lambda for 15 minutes, which, by the way, is the maximum amount of time that a Lambda can run for, the final cost is that per-millisecond price for that amount of RAM multiplied by the number of milliseconds in 15 minutes, and we end up with about $0.0075. And I have ruined the joke there, but you can see how running a Lambda for 15 minutes will cost you less than a cent.
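Spelling out that arithmetic (using AWS's published rate of roughly $0.0000166667 per GB-second; the exact rate varies slightly per region):

    memory   = 512 MB = 0.5 GB
    rate     = 0.5 GB x $0.0000166667 per GB-second
             = ~$0.0000000083 per millisecond
    duration = 15 minutes = 900,000 milliseconds
    cost     = 900,000 x $0.0000000083 = ~$0.0075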
Now, what about CPU? So far the only dimensions are memory and time. CPU is actually something we cannot directly configure: it gets configured automatically, in proportion to the amount of memory that we allocate. There is a table that explains this a little better. Basically, if you allocate an amount of memory between 128 megabytes and something around 3,000 megabytes, you get two virtual CPUs, and the more memory you allocate, the more virtual CPUs you get, up to six virtual CPUs. This is actually very important, and we will talk a little more about it later, because you can see how you don't directly control CPU, you control memory. So sometimes, if you want more CPU, you just need to increase memory; you cannot change the two dimensions independently. Now I'm going to show you a very quick example of what it looks like to write a Lambda in Node.js, just to understand the kind of interface that you have to deal with. When you write a Lambda, you generally write what is called a handler function, which is basically the function that gets invoked, the function that represents your business logic that needs to respond to an event. This function is just a function, in this case in Node.js, that accepts an argument called event, which is effectively an object that describes the event that triggered the execution. There is also a context object, which gives you more information about the Lambda (how much memory is available, how long this Lambda has been running), and it's something that you can use if you need to know more about the context of execution. Generally, inside this kind of function you will do a number of different things, but most likely you will want to fetch some data from the event and use it as input to your execution. Then you will have some kind of business logic. And finally, when you complete your business logic, you will have some kind of result that you might want to return to the caller of this Lambda function. To see a more specific example, let's say that in this particular event we have a field called url. We can take this field and do a fetch request to that URL, so we actually do an HTTP request, we get the response of that HTTP request, and then the result of our Lambda function is effectively the status code. So in a way, we might have implemented here a very simple health check that can be used to see the status of an HTTP endpoint. Now, what kind of languages are supported? I showed you an example in Node.js, and that's one of the runtimes that are supported. But there are a lot more: there is Python, there is Java, .NET, Go, Ruby, and you can even write your own custom runtimes. So if you want to write a runtime for Lisp or Erlang or Elixir, you can definitely do that. It takes a little bit of effort, but you can do it. But you might have noticed that there is no Rust there. So what's going on with Rust? I'm here to talk about writing Lambdas in Rust, and there is no Rust runtime. Now, of course you can still use Rust, and that happens through a custom runtime written in Rust. And here there is a topic worth discussing: when you use Node.js or Python or Java, you can select specific versions of Node.js, Python or Java, but AWS is responsible for patching security issues and making sure that you are always using a runtime that is secure enough for that particular version. When you write your own custom runtime, that's something that doesn't really happen: you are in charge of the whole runtime. So there is an argument that I wish AWS would provide a better way to write Rust Lambdas. But as of today, you are actually not alone.
AWS gives you a lot of tooling, and there is a very good library called aws-lambda-rust-runtime. This is basically how you create a custom runtime, totally written in Rust, and in this runtime you can also embed your own function. So in a way you are packaging together a runtime that also contains your function code, and that can have different pros and cons. We already discussed the con, which is basically that you don't really get security updates for free: it's on you to recompile the entire runtime and ship a new version of that Lambda. But as a consequence of that, you also get better performance, because in one process you have both the runtime and your Lambda code, so there is less message passing. Also, the team that is building this Rust runtime is building a lot of tooling, like middleware or the ability to run generic services as Lambdas; you can even embed web frameworks into your own Lambdas. Just because you have more direct control, and your code and the runtime are more tightly tied together, you can do a lot more manipulation of the environment than you can with other languages. So let's see what using this AWS Lambda Rust runtime looks like. You have to write some code that looks like this, a kind of hello world Lambda, and there are actually two main parts. The first part is the handler function: we get an event, we can do something with this event, and eventually we return a response. Then there is another part, the main function, and inside the main function we have the lambda runtime's run call. This is basically the part that uses the lambda runtime library that we saw before in the GitHub repository to initialize a Lambda runtime that is able to call your specific Lambda function code. So you can see how, in the same binary, at the end of the day, we will have the runtime but also the Lambda code. And of course, this comes with a little bit of extra boilerplate, because in the main function we need to initialize the code that runs the runtime and registers our Lambda function as the only function available in that runtime.
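To make that concrete, a minimal sketch of the handler-plus-main structure just described, using the lambda_runtime crate, might look like the following. This is an illustration rather than the exact code from the slides, and it assumes tokio (with the macros feature) and serde_json as dependencies:

    use lambda_runtime::{run, service_fn, Error, LambdaEvent};
    use serde_json::{json, Value};

    // The handler: receives the triggering event and returns a response.
    async fn function_handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
        let name = event.payload["name"].as_str().unwrap_or("world");
        Ok(json!({ "message": format!("Hello, {name}!") }))
    }

    // The main function boots the embedded runtime and registers our
    // handler as the only function available in it.
    #[tokio::main]
    async fn main() -> Result<(), Error> {
        run(service_fn(function_handler)).await
    }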
Okay, now the next question is: why Rust, though? Why is Rust so interesting in the context of AWS Lambda and serverless in a more general sense? So let's talk a little bit more about Rust as a language. It is a relatively new language; the first version was released in 2015. It is a compiled language, so that kind of puts it in the same bucket as Go or C or C++, where you don't just interpret scripts: you actually have to compile your code before you can execute it. It is a language that is focused on performance and memory safety; these are the two main qualities that people will refer to when they think about Rust. But it's also a language that, interestingly enough, and this is a little bit in contrast with C or C++, tries to give you very good high level constructs. For instance, you have things like iterators built into the language, and generally you will have a lot of high level functions, while at the same time it is a compiled language and a systems programming language: when you want to do very low level stuff, you can drill down and get to the level where you can do very low level things. So in a way, it is a great general purpose language, because you can write high level software, for instance a web server, but you can also write operating systems with it. So you get a very good range of possibilities just by adopting Rust as a programming language. Another thing that I really like is that there is a very modern toolchain. For instance, we'll be talking more about cargo, which is a tool that allows you to install dependencies, and it is built into the language, which is something that doesn't happen so often with lower level programming languages or systems programming languages. And in general, there is a great ecosystem. For a language that is still relatively new, I was surprised, trying to code a bunch of different applications, that every time I needed a library I was able to find even more than one library. And every one of those libraries generally has very good documentation, good examples, good testing, and is generally well maintained. So I have to say that I'm quite impressed by the level of maturity of the toolchain and the ecosystem as a whole, considering that the language is still relatively new and not yet so widely adopted by companies. And the final point, last but not least: Rust has an awesome mascot, and I'm glad to have it here as well. Okay, moving on, let's talk about cargo. I mentioned already that cargo is a built-in package manager. In a way it's like npm, but for Rust. You can use it to, say, cargo add a specific library, and it will make sure to download the library and make it available in your project. But it also does a lot more than that, because it has a lot of subcommands that can be used for scaffolding a new project or a library, for running tests, for running benchmarks. And it is also something that can be extended with third-party commands. For instance, there are third-party commands that allow you to do snapshot testing, fuzz testing, all sorts of different things that you might need for specific projects. Now, the next point is: why do we care about Rust in the context of Lambda? We already saw that it's not as easy as with other languages to do Lambdas in Rust. So why should we go through that trouble? Is it really worth it? What are the benefits? The main benefit is that Rust combines strong performance characteristics with the ability to be very efficient memory-wise, and those are pretty much the two dimensions on which we calculate the cost of executing Lambda functions. So if we can be very efficient with performance, and we can also be very efficient in terms of memory, we can probably save a lot of money, as opposed to running the same kind of application in Lambda with languages that might not be as performant and might consume a lot more memory. The other thing is that it is a language that focuses a lot on multi-thread safety, so we could write very good multi-threaded versions of the code that we have in a Lambda written in another language, which can allow us to be even more performant and therefore save even more cost.
And finally, and this is more of a personal take: there are no null types, and there is a great system to deal with errors. I think that once you start to learn how to use Rust as a language and appreciate the concepts and the libraries that Rust gives you to deal with the eventuality that you might have a value or not have it, and its way of dealing with errors, you will naturally end up writing code that is generally better structured, better tested, and covers more edge cases. So you might end up with fewer bugs than you would have with other languages. And one last point: I have seen, and this is something I don't have strong evidence for, just some measurements with my own Lambdas, that you generally get very good cold start times, and your Lambdas will run very quickly. So this is another point reinforcing the fact that with Rust you might be saving a lot of money. Especially if you have Lambdas that get executed thousands or hundreds of thousands of times per day, it might really be worth considering rewriting those particular Lambdas in Rust. Now let's start to look at the tooling. We have this tool called Cargo Lambda, which is relatively new, and it's built by AWS, or by somebody at AWS. What it does is act as a third-party command for cargo: it extends the set of tools that are available by default in cargo, just to make it easier for you to author, test and deploy Lambdas in Rust. And one of the most interesting features is that it can cross-compile for Linux ARM. Lambdas run on Linux, but you have a choice between x86 and ARM, and generally going for ARM is cheaper and it might even be faster. So again, another reason to pick it if you care about cost. But it is always a little bit annoying to compile for Linux ARM if you work with other systems or with other processors, and with Cargo Lambda you get this cross-compilation built in. So you can just use Windows, Mac or Linux on whatever architecture, and you should be able to compile for Linux ARM without problems. So far I've been using it on a Mac and I haven't had any compilation problems, so it definitely works well on Mac. Now, what are the main commands that Cargo Lambda gives you? It gives you cargo lambda new, which you can use to scaffold a new project. Then it gives you cargo lambda watch, a command that you can run to keep watching your code for new changes and keep a development environment hot. It basically creates a kind of Lambda emulator which allows you to test your Lambda, and that emulator is automatically restarted every time you make a code change, so you don't need to worry about stopping, recompiling and rerunning the Lambda after every change. There is another command called cargo lambda invoke, which allows you to simulate an event coming into the Lambda and triggering it, so you can see the effect of a specific event and test your code that way. And finally, you have cargo lambda build to build the Lambda for production, and cargo lambda deploy to release it to AWS, as summarized below.
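For reference, those commands look roughly like this on the command line (the project name is illustrative, and the flags assume a recent cargo-lambda release):

    cargo lambda new my-function                  # scaffold a new Rust Lambda project
    cargo lambda watch                            # local emulator that rebuilds on changes
    cargo lambda invoke --data-file event.json    # send a test event to the emulator
    cargo lambda build --release                  # build an optimized binary for Lambda
    cargo lambda deploy                           # upload the function to your AWS account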
Now I want to show you a very quick demo of Cargo Lambda. I have the environment ready, so what we can do is cargo lambda new conf42. I actually spelled it conf43, but I think we'll go with that. Is this function an HTTP function? I'm going to go for no, because I want to show you that you can pick different kinds of events, and the runtime actually gives you types for all the different kinds of events that you have in AWS. So this tool is able to scaffold your code with the event type that you need already selected for you. For the sake of this example, I want to go with SQS, just because I think it is a common enough use case, and one of the use cases where you might benefit the most from writing your Lambda in Rust. And it did something we don't really see yet; we just know that a new folder was created, called conf43 (sorry, typo, but whatever). What we can do is open Visual Studio Code and see what kind of code we have in there. Let me make this a little bit bigger. Basically, it created a Cargo.toml, which is like the package.json of Rust, and this already includes all the dependencies that we might need to write a Lambda. It contains lambda-events, which is a library that gives us the types for all the different kinds of events; it contains the lambda runtime; and then it contains Tokio, which is needed by the lambda runtime (it is an async runtime for Rust), and some tracing utilities, which are very convenient for logging and tracing. Now, if we look at the code that was created for us, you can see that we have all the boilerplate already done. This boilerplate is doing some initialization of the tracing stack for us, basically setting up a logger that we can use, and then it creates the skeleton of the Lambda function itself: the Lambda handler with the type of event that we selected. So let's write some simple code to make it do something useful. Remember that this is a Lambda that is going to be invoked when SQS events happen. You generally have a queue, you're going to be submitting a bunch of jobs to this queue, and then you probably have an integration where new jobs appearing in this queue will trigger this Lambda, which is then responsible for processing those jobs. The first thing that you generally do is separate the event from the context, and the way you do that is by calling event.into_parts(). You can see that now we have direct access to the SQS event and the context. We are not going to need the context, so I'm just going to use an underscore to avoid warnings. Now, in SQS you might have multiple records per event, so what we can do is write: for record in records. And I like what Copilot is giving me, so I'm going to go with it. For every record in the list of records available in this event, we want to write some tracing information. What tracing allows us to do is specify a bunch of values that will be logged as structured logs. So here we can define an event name, something like "job started", and log the record's message_id, and we can also read the body: body = record.body. Now, this is an Option<String>, so we might want to call unwrap_or_default just to get the clean value.
That call is one of the utilities I mentioned before. Because the message id might be there or might not be there (it cannot be null), we are forced to deal with this Option, which basically means "this value can exist or not", and Rust is forcing us to decide what to do when the value is not there. What unwrap_or_default does is: if there is a value, take it; otherwise use the default value for strings, which is just the empty string. So what we are doing here is printing a log line for every single job, and this log line contains the job id and the content of that job, the body of that message in SQS. Now, we can also return something here. Right now the return type is just the unit type, which means we have nothing to return; it's just an empty tuple. But let's say that we want to return a string for now. We could say: let message_count = event.records.len(), and return something like "<count> messages processed". Now, this function is probably good enough for us to see if something works. So how do we test it? We can test it locally with cargo lambda watch, which is the command that starts the development server. What we can do now is use cargo lambda invoke, but we will need some message. The first thing we do is just invoke it with an empty message, and this is probably going to fail, because it will try to deserialize that message into an SQS type of event. So let's see what happens. It is compiling first... okay, it did fail, because it wasn't able to deserialize the message into the specific type that we were expecting. So what do we do? I have prepared a piece of JSON that represents an SQS event, so what I can do is pbpaste it into event.json. At this point we can run cargo lambda invoke --data-file event.json. What did I do wrong? I probably have very bad JSON there, so let me have a quick look at this event... oh yes, I have bogus quotes because I copy-pasted this from the web and it didn't like it. Okay, is it good JSON now? Looks like it is. So let's try again: cargo lambda invoke --data-file event.json. Okay, it says "2 messages processed". You can see here that this is the response we received from the Lambda. And if we look at the emulator, we can see that the Lambda did indeed log two lines: event name "job started" with a job id, twice, and the bodies are "test message 1" and "test message 2". So this works. And you can see how, interestingly enough, the tracing utility also gives us a request id, and it integrates with X-Ray and gives us tracing. So this is actually a really powerful default, which is something that you might want to have for production applications.
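Putting the pieces of the demo together, the handler might look roughly like this. It is a sketch rather than the literal demo code, assuming the aws_lambda_events, lambda_runtime, tokio, tracing and tracing-subscriber crates that the scaffold pulls in:

    use aws_lambda_events::event::sqs::SqsEvent;
    use lambda_runtime::{run, service_fn, Error, LambdaEvent};

    async fn function_handler(event: LambdaEvent<SqsEvent>) -> Result<String, Error> {
        // Separate the SQS event from the invocation context (unused here).
        let (event, _context) = event.into_parts();

        for record in &event.records {
            // message_id and body are Option<String>: Rust forces us to decide
            // what to do when they are absent, so default to an empty string.
            let message_id = record.message_id.clone().unwrap_or_default();
            let body = record.body.clone().unwrap_or_default();
            tracing::info!(message_id = %message_id, body = %body, "job started");
        }

        Ok(format!("{} messages processed", event.records.len()))
    }

    #[tokio::main]
    async fn main() -> Result<(), Error> {
        // Tracing setup similar to what cargo lambda scaffolds for you.
        tracing_subscriber::fmt().with_target(false).init();
        run(service_fn(function_handler)).await
    }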
Now, the next thing that we want to do is cargo lambda build --release. This is going to build a release version, a more optimized and more stripped-down version of our Lambda code, which is going to be ready for deployment. Once this finishes building, what we can do is cargo lambda deploy, and then we are going to see that we can run the same function on AWS itself. What this is doing is zipping our Lambda code and calling the specific APIs in AWS that will make this Lambda available in my account. And it's also doing a bunch of other things, because a Lambda will also need a role and log groups, so it's doing all of those things for us. Here is the ARN, the unique identifier of our new Lambda. So if we go to my Lambda console, which hopefully you can read, and refresh, we should see that we have a new function called conf43. While this loads, I'm going to copy that JSON, and we can invoke this Lambda manually with this particular JSON. If we go to Test, we can input this example JSON, and when we test, the Lambda is invoked in my account and we see the same result: "2 messages processed" is the output of the Lambda, while here we can see all the logs, the same two lines we saw when testing locally. Now, back to the slides. We saw so far that we were able to create a Lambda and ship it. So what's next? You probably don't want to deal only with Lambdas: when you're creating a project you will have a lot more infrastructure. So there is a specific tool that you can use there, called SAM, the Serverless Application Model. SAM is basically a YAML-based infrastructure-as-code tool that allows you to build serverless applications in an easier way, and it's great when you have to go beyond just one Lambda. To give you some examples: you might have a complex project that requires you to provision multiple Lambdas, not just one, and you might also need to provision other pieces of infrastructure like S3 buckets, permissions, DynamoDB tables and so on. SAM supports everything that is natively supported by CloudFormation, which is the default tool in AWS for infrastructure as code, but it gives you a slightly simpler abstraction (slightly more concise code, with a lot of shortcuts you can use), and SAM will take care of generating the equivalent CloudFormation code. The deployments actually happen through CloudFormation: SAM produces CloudFormation code, and then that CloudFormation code is deployed. So you get all the same benefits that you get with CloudFormation, for instance rollbacks and change sets. And if you want to see what a template looks like, it's pretty much like a CloudFormation template with just a few differences. For instance, you have that Transform: AWS::Serverless line, which is basically telling AWS that when this template is deployed, it needs to be transformed, to apply all the shortcuts that we have with SAM and convert it to a complete CloudFormation template. In this example, we define a function and we also define an S3 bucket, and then we reference that S3 bucket to trigger the Lambda function every time there is a new file in that bucket. In this case SAM, or CloudFormation, will understand that there is a dependency between that bucket and the Lambda trigger event: the bucket is going to be created before the Lambda trigger event is created, while the Lambda itself and the bucket can be created in parallel. That's the beauty of infrastructure as code: you don't have to think exactly about the order of things or what the dependencies are. You can just express those dependencies in a declarative way, and then the tool is going to take care of deploying the new infrastructure, and any future changes you might make to it, making sure you end up with the state you are describing in your template.
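A template along those lines might look roughly like this. It is a sketch rather than the exact slide: the resource names are illustrative, and the BuildMethod line is the experimental SAM integration with Cargo Lambda discussed next:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31

    Resources:
      ProcessorFunction:
        Type: AWS::Serverless::Function
        Metadata:
          BuildMethod: rust-cargolambda   # experimental: sam build delegates to cargo lambda
        Properties:
          Handler: bootstrap              # custom runtime: the compiled Rust binary
          Runtime: provided.al2
          CodeUri: .
          Events:
            NewFile:
              Type: S3
              Properties:
                Bucket: !Ref UploadBucket
                Events: s3:ObjectCreated:*

      UploadBucket:
        Type: AWS::S3::Bucket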
Okay, then why do we like AWS SAM? SAM is another tool by AWS and, no surprise, it works with Cargo Lambda. It is still experimental: Cargo Lambda is still very new, the Rust runtime is still relatively new, and SAM, which has been around for a few years, is always adding more and more features, so it only makes sense that SAM is trying to support the Rust ecosystem. As of today, if you use the latest version of SAM and you enable the experimental features, you are able to use SAM with Cargo Lambda. And the advantage is that you can do infrastructure as code with the full power of SAM, but you can build and run your Rust Lambdas with Cargo Lambda, which gives you that nice emulator, and can also build and cross-compile Lambdas for Linux ARM. Another note, and this is something I haven't tried yet, is that it looks like Cargo Lambda also works with CDK. So if you prefer to use CDK to define your infrastructure as code, you should be able to use CDK together with Cargo Lambda; check out that repository if you want to try it out. And if you're curious to see a more complete example that involves SAM and Cargo Lambda, I recently built this particular application, which allows me to see if there are earthquakes close to my family. I am originally from Sicily, close to Mount Etna, and it's an area where there are often earthquakes, sometimes even violent ones. So with this tool I can be notified as soon as a significant enough earthquake happens, and I can immediately reach out to my family to see if they are okay. The way I built this tool is basically a Lambda that is triggered every hour. This Lambda will call a specific API that gives me information about earthquakes in Italy and in other areas around Italy, and then this Lambda applies a number of filters that basically make sure that, if there are recent earthquakes, they are happening close enough to the area of interest that I care about, and that the magnitude of the earthquake goes above a certain threshold. If all of that happens, an event is created on EventBridge, and there is a rule in EventBridge that logs this particular event but also sends a message to an SNS topic, so I can use an email subscription to also receive an email. Again, this is just an example, but it's working and quite complete, so you can check out the repository and see how I implemented it. Of course, feel free to submit PRs, or take this and change it and deploy it to your own account. The hourly trigger I mentioned, by the way, is just a schedule event, as sketched below.
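In SAM syntax, that hourly schedule is a fragment like the following under the function's properties (the event name is illustrative):

    Events:
      HourlyCheck:
        Type: Schedule
        Properties:
          Schedule: rate(1 hour)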
And now I have another couple of points before we finish this talk. With Lambda and Rust, we are putting a little bit more effort into learning a new language and writing code that is more optimized. So what else can we do to optimize that Lambda and make sure that we are exploiting all the potential we can? There is a very interesting thing to think about here. Remember when I told you that you don't pick the CPU, you just pick the memory, and the CPU grows proportionally to the amount of memory? One idea you might have is that if I get more CPU, maybe I can be significantly faster: so much faster that, even if I am paying more because I'm allocating more memory, it still comes out cheaper to go with more memory and therefore more CPU, rather than trying to save everything on memory and then maybe taking a long time to run your Lambda. So basically the question is: where do we find the sweet spot? And it is actually a non-trivial problem. It's not something you can really define in advance. Generally, the best way of finding the sweet spot is by trying different configurations for your Lambda, actually seeing the numbers, and then deciding, looking at the numbers, which configuration is the best. There is a really good tool called AWS Lambda Power Tuning that you can use for this. It's basically a Step Functions state machine that can trigger your Lambda using a bunch of different memory configurations; it then collects a bunch of data and shows you graphs that can help you decide which configuration is the best. For instance, in this graph you can see that this Lambda was executed with different amounts of memory: 128, 256 and so on. And we can see here that about 1024 is probably the sweet spot, because the execution time (the red line) has gone down dramatically, while the execution cost (the blue line) is starting to pick up. So we are seeing that, for essentially the same execution cost, we can greatly reduce the execution time; that is probably the sweet spot there. You can check out the repository, where there are more examples explaining how to interpret this particular diagram, and in any case you can test different functions and figure out which configuration performs best for your particular function. Now, I want to answer a couple of questions that you might have at this point, just to finish this talk. How easy is it to learn Rust if you don't know Rust today? Is it worth going through the trouble of learning Rust just to optimize a few Lambdas? Well, I'm not going to say that learning Rust is easy. I think there is a bit of a learning curve there that you definitely need to go through, and it might take a little bit of effort, especially if you're coming from languages such as JavaScript or Python, which are higher level, interpreted programming languages where you don't have to worry too much about memory or memory safety, because the language will take care of all that stuff for you. In Rust, you need to learn a bunch of new concepts that you generally don't have to worry about when you program in higher level interpreted languages. So definitely it's not easy, but is it worth it? My personal answer is yes. I've personally been on this journey of learning Rust for probably the last three to three and a half years. Of course, I've done that in my own spare time, so if you invest more of your own time into learning it, you can be proficient in much less time. Also, today there are a lot of very good resources to get started, and therefore you might be able to become proficient with Rust much quicker than I did. And I would say that it's definitely worth it, because now, coming from higher level programming languages, I understand a lot more about how programming languages in general work, how memory works, and how I can optimize things better, and I think this is also affecting the way I write code in Python or Node.js.
So I think learning Rust on its own is a good investment, and I think we'll be seeing Rust more and more in the future, so it can also be a good investment for your career in general. As a summary before we wrap this up: I think serverless is just a great way of building applications, and Rust is also a great language and a great ecosystem to build applications with. So if you combine the two, I think we have a powerful combination there, and you can use SAM and Cargo Lambda together to get a great developer experience. SAM gives you a lot of control on the infrastructure side, while Cargo Lambda gives you a lot of control over Rust compilation, testing and execution. So combining the two together, you can deploy infrastructure that leverages Rust when it comes to compute. And finally, learning Rust is not necessarily easy, but in my opinion it's definitely worth it. So try it a little bit, spend a little bit of time, find people to learn it with, and I think it's going to become a fun journey that will eventually give you a lot of satisfaction and a lot of good opportunities. So that's all I have for today. Thank you very much for listening to this talk. If you want to grab the slides, there is the link again, and if you want to check out my book and give me some feedback, please do; there is a link there. Reach out to me and let me know what you think. Thank you very much.
...

Luciano Mammino

Senior Architect @ fourTheorem



