All the newest AWS Lambda announcements
In the first week of re:Invent, there were a few interesting announcements for Lambda that we should take a moment to look at. A few of these have the potential to save some of us quite a lot of money, and the others help to build on the service as a whole.
One cool new addition gives some users a new way to interact with AWS that they previously might have been locked out of, based on their architectural setup. And in total we have four new noteworthy additions that we can investigate, so let’s just dive in!
Lambda duration billing — actually being charged for what you use!
I know I’m not opening up with the fireworks here, but the headline is simple: Lambda is now billed in 1ms increments instead of being rounded up to the nearest 100ms.
While this might not seem earth-shattering, it actually helps a huge number of people who have been leaving a lot of money on the table because of the old billing duration minimum. Let’s say, for example, that you have a simple data lookup-and-calculation function that takes 10ms to run.
This function is integral to your serverless architecture and is called millions of times per day. Previously, with the 100ms billing minimum, you were charged for 10x the processing time you actually used. Unless you could somehow batch these requests and send them out together, which would add complexity to the solution, you were just losing value. Losing value really hurts the soul.
Now, with the granularity to bill by the millisecond, the function in our example costs a tenth of what it used to!
Overall this is a great update to Lambda that I’m surprised didn’t come way sooner. And I think it’s pretty safe to say that most workloads will take more than 1ms to run, so I wouldn’t expect this billing duration to drop any lower.
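To make the savings concrete, here is a back-of-the-envelope sketch of duration cost under the two granularities. The per-GB-second rate is my assumption based on the published us-east-1 price at the time (roughly $0.0000166667, ignoring the separate per-request fee); check current pricing before relying on the numbers.

```python
# Rough duration-cost comparison: 100ms rounding vs. 1ms billing.
# Assumes ~$0.0000166667 per GB-second (us-east-1, request fees ignored).
PRICE_PER_GB_SECOND = 0.0000166667

def duration_cost(duration_ms, memory_mb, invocations, granularity_ms=1):
    """Duration cost when each invocation rounds up to the billing granularity."""
    billed_ms = -(-duration_ms // granularity_ms) * granularity_ms  # ceiling
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND

# A 10ms, 128MB function invoked 100 million times a month:
old = duration_cost(10, 128, 100_000_000, granularity_ms=100)
new = duration_cost(10, 128, 100_000_000, granularity_ms=1)
print(f"100ms rounding: ${old:.2f}/month, 1ms billing: ${new:.2f}/month")
```

For this hypothetical workload the duration charge drops by exactly the 10x the article describes, from about $20.83 to about $2.08 a month.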
Amazon CloudWatch Lambda Insights (I was blind but now I can finally see!)
I think one of the most painful parts about using Lambda is learning how to optimize and make things more efficient.
Having the billing granularity lowered is a real boon to many people, but if you are not currently using the service the right way, you are leaving even more money on the table.
Creating and maintaining efficient code is the best way to save money and utilize Lambda to its fullest. With that in mind, the ability to see and understand how your functions are performing was dreadfully lacking.
Now, with CloudWatch Lambda Insights, we have the ability to keep an eye on our Lambda functions. Insights gives us the visibility to monitor and troubleshoot through automatically created dashboards that summarize the performance and health of our functions. This can help diagnose memory leaks or catch performance changes when trying out new versions of your code.
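For an existing function, enabling Insights is roughly a matter of attaching the AWS-provided extension layer and granting the function’s execution role the managed policy. A sketch, where the function name, role name, region, and layer version are all placeholders to substitute with your own (the per-region layer ARNs are listed in the CloudWatch docs):

```
# Attach the Lambda Insights extension layer (ARN and version vary by region).
aws lambda update-function-configuration \
  --function-name my-function \
  --layers "arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:14"

# Allow the extension to publish metrics and logs to CloudWatch.
aws iam attach-role-policy \
  --role-name my-function-role \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy
```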
I can stretch out my legs!
That’s a lot of room for activities. With 10GB of memory to play with, Lambda can actually start doing some real computational problem-solving. With the previous limit of 3GB, there was not a whole lot of headroom for memory-intensive workloads.
If you think about it, the original implementation of Lambda addressed small functions that could burst as needed. With the update, we can now work on larger, more substantial workloads.
This means we now have the ability to do batch processing, ETL jobs, and a number of media workloads. Lambda can even be used for serverless rendering and processing of large video files.
These are things that, traditionally, I would have wanted to put on an EC2 instance to process. I’m very comfortable with VMs, but managing a fleet of EC2 instances is like herding cattle.
Early in the days of AWS, their training referenced cloud resources as “cattle, not pets.” At the time, the corporate message was that we should not get emotionally attached to our resources. Today, the analogy has evolved, I think, to be more fitting. Now, I’m having to manage the herd, and it is a chore.
Even with the advantage of having up to 6 vCPUs to play with — which AWS describes as “a thread of either an Intel Xeon core or an AMD EPYC core” — I could see raw compute becoming the bottleneck for these large-memory workloads. And we are still constrained by the 50MB zipped package size when building our Lambda solutions. Just stuff to think about, I guess.
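One nuance worth noticing: since vCPU allocation scales with configured memory, a CPU-bound job that parallelizes well can finish proportionally faster at a bigger size, so the bill may barely change. A rough sketch with an invented workload (the runtimes are hypothetical, and the per-GB-second rate is my assumption from published us-east-1 pricing):

```python
# Duration cost scales linearly with configured memory, but so does CPU,
# so a faster run at a bigger size can cost about the same.
# Assumes ~$0.0000166667 per GB-second (us-east-1); verify current pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb, duration_s):
    """Duration cost of one invocation at a given memory size."""
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND

# Hypothetical CPU-bound job: 60s at the old ~3GB ceiling,
# vs. ~18s at the new 10GB ceiling if it uses the extra cores well.
small = invocation_cost(3_008, 60)
large = invocation_cost(10_240, 18)
print(f"3GB x 60s: ${small:.4f}   10GB x 18s: ${large:.4f}")
```

Under those assumptions the two invocations cost within a few percent of each other, while the big one finishes more than three times sooner.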
For that special person who married containers… and wants to use Lambda.
Alrighty, well this one has a few angles we can take a look at.
The first one I see is that it helps address the issues I stated above about Lambda. This functionality allows you to package up dependencies that are larger than 50MB, up to 10GB! This is a huge increase that gives Lambda some much-needed reach. The challenge is that it does require you to become familiar with container development and learn how to use Docker, but at least there is a path now!
The next avenue is for people who are so fully invested in containers, their upkeep, and their infrastructure, that Lambda was untenable before. This gives that group of customers a way to use the serverless technology in a hilariously server-filled way.
Amazon is providing base images for each supported Lambda runtime (Python, Node.js, Java, .NET, Go, Ruby), and you can add your code and dependencies from there. They are also releasing an open-source Lambda Runtime Interface Emulator that lets you run local tests of your container image to check that it will deploy correctly. Neat!
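Packaging looks something like the following sketch, using the AWS-provided Python base image. The file names, the handler name, and the runtime tag are all placeholders; `LAMBDA_TASK_ROOT` is the environment variable the base images define for where function code belongs.

```
# Start from an AWS-provided base image for your runtime.
FROM public.ecr.aws/lambda/python:3.8

# Copy function code to where the runtime expects it.
COPY app.py ${LAMBDA_TASK_ROOT}

# Install dependencies alongside the function code.
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Handler in "<file>.<function>" form.
CMD ["app.handler"]
```

From there you build and push the image to ECR like any other container, and point the function at the image.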
Overall these are some nice improvements to the service that I think many of us will find handy. While most of them are not super groundbreaking, they are going to allow us to start using the service in new ways.
I think the larger memory footprint of 10GB is the most impressive update, as it will allow people to really start moving more complex workloads into the serverless cloud, as it were.
However, a close runner-up in my book is the billing duration being lowered to 1ms. I truly think this will save some people a ton of money, or at least provide some amount of headache reduction.
Having greater logging and visibility in Lambda is a welcome update, although it is something I have wanted all along. Beggars can’t be choosers. (I suppose.)
Finally, having the ability to use Lambda within a container is pretty nifty, but personally, it doesn’t grab my attention that much. Maybe it’s because I lack imagination in that area. I’ll keep an eye out and see what people start to do with the technology and come back to let you know.