Cold starts exist in the serverless domain just like car crashes exist in the automobile industry. As worrisome as these accidents might be, they can be avoided with a little defensive “driving.”

The global serverless market was valued at $3.1 billion in 2017 and is projected to reach nearly $22 billion by 2025, a compound annual growth rate of 27.9 percent from 2018 to 2025, according to a Research and Markets report that forecasts where serverless architecture is headed by 2025.

These trends show the rising popularity of going serverless: As software development advances, the question is no longer whether to move to the cloud but how to do so. And the serverless approach looks like the answer, because it smooths over many of the challenges of developing in the cloud.

Serverless Architecture's Advantages Come in Three Forms:

  • Fully managed: You no longer have to worry about building and maintaining any underlying infrastructure and can delegate that responsibility to the cloud vendor.
  • Scalable: Gone are the days of forecasting resource usage to ensure we always had the right amount of capacity to host our applications amid ever-changing customer demand.
  • Pay-as-you-go: Only pay for the resources you actually use, in contrast to other cloud services such as the IaaS compute service AWS EC2, where you pay for the instance even when there is no traffic.

As with other cloud solutions, major advantages come with notable disadvantages. One specific drawback is the cold start, which can cause performance problems and presents a high barrier to adoption.

When you invoke a serverless function after a period of idleness, you incur extra latency. For the customer, that latency delays request processing, produces long loading times and, in general, makes it difficult to serve time-sensitive applications.

The issue may seem like a dealbreaker. However, there are ways to overcome the cold start barrier. In this article, we’ll dig into the root cause of the cold start problem in AWS Lambda functions, its effects across serverless applications and how those effects can be mitigated through a range of best practices and engineering solutions.

 

Cold Starts as Inherent Problems

Cold starts can be defined as the set-up time required to get a serverless application’s environment up and running when it is invoked for the first time within a defined period. Cold starts are somewhat of an inherent problem with the serverless model.

Serverless applications run on ephemeral containers, sometimes called worker nodes, and managing those nodes is the platform provider’s responsibility. That is where the wonderful features of auto-scalability and pay-as-you-go arise, since vendors such as AWS can match resources exactly to the requirements of the application you’re running.

The problem, though, is that there is latency in getting these ephemeral containers up and running for your first invocation. After all, the serverless principle is that you use resources only when required. When they are not required, those resources theoretically do not exist, or at least you don’t have to worry about them.
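You can see this initialization cost for yourself. Here is a minimal Python sketch, not tied to any particular application, that uses module-level state (which runs once per fresh container) to flag whether an invocation was a cold start:

```python
import time

# Module-level code runs once per new execution environment: the "cold" part.
COLD_START = True
INIT_TIME = time.time()

def handler(event, context):
    global COLD_START
    was_cold = COLD_START
    COLD_START = False  # Later invocations reusing this container are "warm."
    return {
        "cold_start": was_cold,
        "seconds_since_init": round(time.time() - INIT_TIME, 3),
    }
```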

 

Remedies for the Cold Start

Cold starts are undoubtedly problematic for users of serverless functions. Nevertheless, there are several solutions and best practices that you can employ to mitigate these pains and overcome the hurdles created by cold starts.

 

1. Avoid Enormous Functions

Larger serverless functions mean longer cold start durations and more set-up work for the vendor. Serverless encourages breaking large monolithic beasts into separate, focused functions. The question is: How granular should you go?

The answer depends on how you structure your architecture, and you should refine your decisions by continuously monitoring the performance of your serverless applications. Take advantage of serverless monitoring tools to get the relevant insights.
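To make the granularity question concrete, here is a rough Python sketch; the handler names and the routing field in the event are hypothetical. Rather than sending every action through one oversized function, each action gets its own small function that bundles and initializes only what it needs:

```python
# One oversized function that handles every action forces every invocation
# to pay for all of its dependencies during the cold start.
def monolithic_handler(event, context):
    action = event.get("action")  # hypothetical routing field
    if action == "create_order":
        return {"status": "order created"}
    if action == "generate_report":
        return {"status": "report generated"}
    return {"status": "unknown action"}

# Smaller, single-purpose functions keep each cold start short, because each
# deployment package contains only the code that one action actually needs.
def create_order_handler(event, context):
    return {"status": "order created"}

def generate_report_handler(event, context):
    return {"status": "report generated"}
```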

 

2. Reduce Set-Up Variables

Preparing a serverless container involves setting up static variables and the associated machinery of statically typed languages such as C# and Java. One of the easiest and quickest practices you can implement is to make sure there are no unnecessary static variables or classes. The impact this has on cold start duration depends on the initial function size and the complexity of your serverless architecture.

Do not expect significant performance gains simply from removing a handful of static variables. The impact of avoiding a single static variable is negligible, and the scale at which this practice must be applied to notably reduce durations goes well beyond the scope of the average Lambda. Nevertheless, it is the simplest solution, and it is worth doing.
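The same idea applies outside statically typed languages. As a rough illustration in Python (the needs_s3 event flag and the bucket-listing call are hypothetical), move optional, heavyweight initialization off the cold-start path so it only runs when an invocation actually needs it:

```python
import boto3

# Eagerly creating clients at module scope runs during every cold start,
# even for invocations that never use them:
# s3 = boto3.client("s3")

_s3 = None

def get_s3_client():
    # Create the client lazily, on first use, keeping it out of the cold start.
    global _s3
    if _s3 is None:
        _s3 = boto3.client("s3")
    return _s3

def handler(event, context):
    if event.get("needs_s3"):  # hypothetical flag for illustration
        return get_s3_client().list_buckets()
    return {"ok": True}
```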

 

3. Adjust Allocated Memory

When uploading your code to AWS Lambda, you can also adjust compute settings such as how much memory to dedicate to the function. The amount of memory you allocate also determines the proportional share of CPU cycles the function receives.

The effect varies because setting up the static environment requires CPU work to handle static properties and memory-heavy initialization. Ensuring that a .NET function, for example, gets adequate CPU power translates directly into lower cold start durations.
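If you manage function settings programmatically, a minimal boto3 sketch might look like the following; the function name is a placeholder, and the right memory value depends on your own measurements:

```python
import boto3

lambda_client = boto3.client("lambda")

# More memory also means a proportionally larger share of CPU, which can
# shorten the initialization work done during a cold start.
lambda_client.update_function_configuration(
    FunctionName="my-dotnet-function",  # placeholder name
    MemorySize=1024,                    # in MB; tune based on observed cold start times
)
```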

 

4. Keep Functions Warm

This method aims to reduce the cold start frequency, and in an ideal world, it eliminates cold starts completely. It involves periodically sending invocations to the serverless function to simulate perpetual activity.

Deciding how frequently to send these warming invocations requires monitoring and fine-tuning. You do not want to bombard your serverless function with too many empty invocations, because that leads to unnecessary costs under AWS Lambda’s pay-as-you-go model.
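On the function side, a warming invocation should exit as early as possible. Here is a minimal Python sketch, assuming a scheduled rule (for example, EventBridge firing every few minutes) sends an event containing a hypothetical warmup flag:

```python
def handler(event, context):
    # Short-circuit warming pings so they consume as little compute time as possible.
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    # ... normal request handling below ...
    return {"statusCode": 200, "body": "handled"}
```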

 

5. Use Provisioned Concurrency

AWS Lambda’s provisioned concurrency keeps a configured number of worker nodes initialized ahead of time. Invocations are routed to these provisioned worker nodes before on-demand ones, which avoids the cold start because no initialization is needed. It is wise to provision a higher number of worker nodes ahead of expected spikes in traffic.
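Provisioned concurrency can be set in the console, through infrastructure-as-code or via the API. A minimal boto3 sketch (the function name and alias are placeholders) might look like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 10 execution environments initialized for the "prod" alias so that
# invocations routed to them skip the cold start entirely.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",  # placeholder name
    Qualifier="prod",            # provisioned concurrency targets a version or alias
    ProvisionedConcurrentExecutions=10,
)
```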

The trade-off is that, to become fully independent of the cold start problem, your Lambda functions are no longer fully managed in the purest sense of a serverless service. The pay-as-you-go model also loses some of its shine, since you pay for provisioned capacity whether or not it is used.

Let’s think back to our car-crash analogy: Cold starts can be an accident waiting to happen, but they don’t have to be. Just as the automobile industry developed innovative safety solutions, serverless offers a treasure chest of solutions and best practices to help reduce the risk of catastrophe. After all, the exhilarating speed of our progress is not easily stifled by frigid walls, and I believe that all barriers can be overcome.
