When is serverless more expensive than containers? | by Alan Helton | November 2022

There is a certain scale of traffic beyond which serverless becomes the more expensive bill at the end of the month

Image by user18526052 on freepik

I talk to a lot of people about serverless. Shocking, I know.

When discussing the viability of serverless for production applications, I’m often given the same two arguments:

“Cold start is so bad that we can’t use serverless” and “Aren’t you worried about the massive cost?”

We have already covered why we should stop talking about cold start. But the question about cost at scale is one we haven’t covered yet.

It’s not even a black and white argument.

Serverless applications offer significant total cost of ownership (TCO) advantages compared to containers. You don’t need to spend time (which is money) on server maintenance, installing patches, rebooting services in an invalid state, managing load balancers, etc.


But for the sake of argument, let’s compare the total bill alone and set the other contributing factors aside. When you haven’t sold your audience on the concept yet, those factors can be hard to prove objectively.

That’s why the intention here is to strictly compare dollars and cents. It’s difficult, but not impossible, to compare a provisioned service like EC2 with a pay-as-you-use service like Lambda.

I’ve often heard that serverless is expensive “at scale”, but no one can effectively tell me how much traffic “at scale” really is.

I had a great discussion with Jeremy Daly and Yan Cui on this topic a few weeks ago. We all concluded that some workloads are significantly more expensive to run serverless, but the tipping point is usually difficult to pin down. The best thing we can do is analyze our spending with a heuristic-based approach.

Let’s dive into some numbers.

Disclaimer – I am aware that actual implementations vary greatly, and what I calculate will not match everyone’s use cases. This is meant to be a generalized example to show the relative amount of scale where a serverless implementation results in a higher AWS bill.

The two applications we will compare are a load-balanced EC2 fleet sized specifically for compute, and a serverless app backed by Lambda functions.

We will ignore data transfer charges and CloudWatch charges, as both will be present in either app. The architecture we are comparing is as follows:

Architecture diagram of the calculations we are comparing

To emulate a production application, a fleet of EC2 instances will be deployed across multiple availability zones and routed through an Application Load Balancer. These instances will be general-purpose and optimized for microservice use, so I opted for the M6g instance family.

The cost of an M6g.xlarge instance is $0.154 per hour on-demand. In our scenario, let’s imagine we run an average of two instances in each of our two AZs, 24/7. The Application Load Balancer is $0.028125 per hour, plus LCU charges. Assuming 50 connections per second that each last 0.2 seconds and process 10 KB of data, our running cost is approximately:

($0.154 EC2 per hour x 24 hours x 30 days x 4 instances) + (($0.028125 ALB per hour + $0.01 x 3.3 LCUs) x 24 hours x 30 days) = $487.53
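If you want to poke at this math yourself, here's a rough Python sketch of the same calculation. The instance count, hours, and the 3.3 LCU estimate are all assumptions from this example, not figures pulled from a real bill.

```python
# Back-of-the-napkin EC2 + ALB monthly cost (assumptions from this example)
HOURS_PER_MONTH = 24 * 30

ec2_hourly = 0.154       # M6g.xlarge on-demand, per hour
instances = 4            # 2 instances in each of 2 availability zones
alb_hourly = 0.028125    # ALB fixed hourly charge
lcu_price = 0.01         # price per LCU-hour
lcus = 3.3               # estimated LCUs for 50 conns/sec, 0.2s, 10 KB each

ec2_cost = ec2_hourly * HOURS_PER_MONTH * instances
alb_cost = (alb_hourly + lcu_price * lcus) * HOURS_PER_MONTH

print(f"EC2 fleet: ${ec2_cost:,.2f}")             # ~$443.52
print(f"ALB:       ${alb_cost:,.2f}")             # ~$44.01
print(f"Total:     ${ec2_cost + alb_cost:,.2f}")  # ~$487.53
```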

So $487.53 per month for a load-balanced fleet of four general-purpose instances. Remember, this is for compute only. We haven’t looked at things like EBS volumes, data transfer, or caching.

When pricing the ALB, we assumed there would be 50 connections per second lasting 0.2 seconds each, so we’ll use those same numbers when calculating Lambda costs.

The cost of Lambda is charged per GB-second and per invocation. For our calculations, we will assume our functions are configured with 1,024 MB of memory.

50 requests per second (RPS) totals 129.6M requests per month, which we’ll use below.

$0.0000000133 x 200 ms x 129.6M invocations + (129.6M / 1M x $0.20) = $370.65
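Here's the same Lambda math as a rough Python sketch. The $0.0000000133 per-millisecond rate assumes a 1,024 MB function at arm64 pricing (to line up with the Graviton-based M6g fleet), and the 200 ms duration mirrors the 0.2-second connections we assumed for the ALB.

```python
# Back-of-the-napkin Lambda monthly cost (assumptions from this example)
def lambda_monthly_cost(rps, duration_ms=200,
                        price_per_ms=0.0000000133,          # 1,024 MB, arm64
                        price_per_million_requests=0.20):
    invocations = rps * 60 * 60 * 24 * 30                   # requests per month
    compute = price_per_ms * duration_ms * invocations
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

print(f"${lambda_monthly_cost(50):,.2f}")                   # ~$370.66 at 50 RPS
```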

So at 50 requests per second, serverless results in a smaller bill than our EC2 fleet. But using these calculations, we can easily find the point where the bill flips and EC2 becomes the “less expensive” option.

Using this little calculator, we can see that once you reach an average of 66 requests per second, serverless becomes more expensive.
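If you'd rather not click through the calculator, the tipping point falls out of the same numbers: the EC2 bill is flat at roughly $487.53 per month, while the Lambda bill grows linearly with traffic. A quick sketch, reusing the `lambda_monthly_cost` function from above:

```python
# Break-even point between the flat EC2 fleet cost and the linear Lambda cost
ec2_fleet_monthly = 487.53
lambda_cost_per_rps = lambda_monthly_cost(1)   # ~$7.41/month for each 1 RPS

print(f"{ec2_fleet_monthly / lambda_cost_per_rps:.1f} RPS")  # ~65.8, roughly 66 RPS
```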

AWS App Runner is a relatively new service that became generally available in May of 2021. It is a managed service that automatically builds, deploys, load balances, and scales containerized web apps and APIs.

Pricing is a bit simpler and less variable with App Runner than with EC2. You pay for the compute and memory resources your application consumes, similar to Lambda. You pay $0.064 per vCPU-hour and $0.007 per GB-hour.

To match the example app we were pricing above, we’ll configure our containers to run on two vCPUs and 4 GB of memory.

Each App Runner container instance can handle up to 80 concurrent requests. Our example app serves 50 requests per second, so we can estimate the cost with only one App Runner container.

(($0.064 x 2 vCPUs) + ($0.007 x 4 GB memory)) x 24 hours x 30 days x 1 container instance = $112.32
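The same back-of-the-napkin treatment for App Runner, again assuming a single always-on 2 vCPU / 4 GB instance, plus the break-even against the `lambda_monthly_cost` function from earlier:

```python
# Back-of-the-napkin App Runner monthly cost (assumptions from this example)
HOURS_PER_MONTH = 24 * 30
vcpu_hourly, gb_hourly = 0.064, 0.007
vcpus, memory_gb, containers = 2, 4, 1

app_runner_monthly = ((vcpu_hourly * vcpus) + (gb_hourly * memory_gb)) \
    * HOURS_PER_MONTH * containers
print(f"${app_runner_monthly:,.2f}")                           # ~$112.32

# Break-even against Lambda, reusing lambda_monthly_cost() from earlier
print(f"{app_runner_monthly / lambda_monthly_cost(1):.1f} RPS")  # ~15 RPS
```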

From our example above, we already know that Lambda is more expensive to operate at $370.65 per month. So, in this case, App Runner is less expensive to operate. Using our calculator from earlier, we can determine that applications serving 15 requests per second or less will be cheaper to run on serverless with this configuration.

The most important thing to remember is TCO. There’s a lot more to the cost than just the number at the bottom of your monthly bill. Ongoing maintenance, slower development times, complex networking, etc. all play a part in how much it actually costs to run an application.

The above examples are simple and intentionally vague. They are not meant to be fully detailed examples of a real production application; that would be ridiculously complicated and hard to follow in one article. The point of this post is to show you that there is a point where compute becomes more expensive when it's serverless. But sometimes, that’s okay!

Many applications will never see the amount of traffic required for the flip. In our EC2 example, we had to exceed 170.2 million requests per month to reach the tipping point. While this is certainly a number attainable by some, it may not be realistic for many. Early-stage startups will significantly reduce costs by starting serverless and switching to App Runner once they reach a scale where it makes sense.
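As a sanity check on that monthly figure, converting the break-even rate back into requests per month lands in the same ballpark:

```python
# Rough monthly request volume at the tipping point (figures from the sketches above)
seconds_per_month = 60 * 60 * 24 * 30       # 2,592,000
breakeven_rps = 487.53 / 7.41               # ~65.8 RPS, from the earlier break-even sketch

# Roughly 170 million requests per month
print(f"{breakeven_rps * seconds_per_month / 1e6:.1f}M requests per month")
```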

Speaking of App Runner, we saw that it costs about 25% of EC2 spend to support the same application. If you want to stick with containers (and there’s nothing wrong with that, I promise), consider App Runner instead of diving into the complexities of EC2.

If you want to do the calculations yourself, I encourage you to try out the calculator I made. Enter your current provisioned service expense, adjust the configuration for Lambda, then run the script. It will tell you the point at which serverless stops being the less expensive option.

There isn’t a whole lot of science to it; it won’t precisely estimate costs or match exactly how you’re charged by AWS. Its purpose is to give you a rough idea of what “at scale” means when people tell you that serverless is more expensive to scale. There are several other important factors to consider when performing a full cost analysis.

Serverless costs are linear with usage. The more you use it, the more expensive it gets, but until you reach that point, it often turns out to be really cheap! Running into a situation where serverless becomes too expensive sounds like a good problem to have. It means your app is gaining popularity and a new set of challenges is in play.

Happy coding!
