If you’re writing software in 2021, you have undoubtedly heard the term “serverless function” before. You probably know the taglines as well: promises of pay-as-you-go pricing and elastic scalability should come to mind immediately, and possibly dreams of cloud-native integrations with other services on the same cloud platform.

In this article, I want to demystify and possibly debunk the three promises above, giving you a practical framework to evaluate the costs and benefits of using serverless functions as opposed to other tools. I’ll use a real-world example of a Microsoft Azure serverless function implementation in JavaScript to demonstrate how these abstract concepts translate into code. Before we get into the details, though, I want to take a look at the term “serverless” and explain how functions relate to it.

What Is a Serverless Function?

The term “serverless” doesn’t mean that your software isn’t executed by a server, as the name suggests. Rather, it denotes that it’s abstracted very far from the underlying server infrastructure, especially as it pertains to scaling and billing. Many of the troubleshooting and deployment tactics that you may need to employ as you build functions involve interacting with that underlying server via a command-line interface. As such, you should understand that when you write code that will be executed by a function runtime, it is ultimately being run by a cloud server, and that server is very real. 


What Are Functions?

Functions are stateless blocks of code that execute in response to an external trigger. During execution, they use various methods to connect to and interact with other cloud resources. Again, functions are executed on a cloud server that runs them using a function-specific runtime; in the case of Azure, that’s the Azure Functions runtime.

Functions are one of the smallest units of serverless functionality. They’re not the only serverless cloud resource, but rather one in a long list of tools that follow this underlying architecture.

 

Our Experimental Setup

To evaluate the promises of serverless, we will write a simple function in Microsoft Azure and evaluate whether or not Azure serverless functions can deliver on the three serverless promises.

  1. Pay-per-use: Does the Azure function appear to be more cost effective than a dedicated server resource, and at what point would we be better off switching away from the function?

  2. Elastically scalable: Does the Azure function allow for scaling up under high demand as defined by its ability to handle requests per second (RPS)?

  3. Cloud native: Does the Azure function give us the flexibility to interact relatively easily with other Azure resources? 

We’re going to build an API endpoint for a customer that will allow users of the API to retrieve information about a specific product from a cloud database, in this case, Cosmos DB. The users are familiar with JSON and can accept a response in that format. 

The function we write will respond to an HTTP request, connect to a cloud DB, retrieve a document and send that back to the caller as JSON. The Azure function code looks like this:

module.exports = async function (context, req) {
    try {
        context.log('JavaScript HTTP trigger function processed a request.');

        // Reference the found item from the data bindings (the item in the DB)
        const message = JSON.stringify(context.bindings.productFromDb);

        context.res = {
            // status: 200, /* Defaults to 200 */
            body: message,
            headers: { 'Content-Type': 'application/json' }
        };
    } catch (err) {
        context.res = {
            status: 500
        };
    }
};

 

Pay-as-You-Go (Confirmed)

The promise of paying only for what you need is fairly straightforward to investigate. We can take an example of an API built with Node.js on a dedicated server in DigitalOcean (a popular server provider) and compare that to the cost of an Azure function. An Ubuntu server that meets the requirements for a Node.js API is roughly $24 per month.

Looking at Azure functions pricing, we see that the regular consumption plan costs $0.20 per million executions plus $0.000016/GB-s. GB-s is a measure that combines the amount of memory needed as well as the execution time of the function. 

I’ve run our simple API function and determined that it uses 0.1 GB of memory and runs for 0.2 seconds (200 ms), which means that the GB-s for this function is 0.02. If I were to run my function a million times, therefore, the cost would be $0.32 in execution time (1M * 0.02 GB-s * $0.000016) plus the $0.20 charged for a million function executions, for a total of 52 cents.

We can therefore calculate that, to exceed the DigitalOcean price of $24 per month, we would need to run this function about 46,000,000 times in a month, or 1,500,000 times per day. Unless the API is under massive request load, then, it would be cheaper to run this API as a serverless function.
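The arithmetic above can be sketched as a small script. The prices are the published consumption-plan rates quoted in this article and may change; the $24 figure is the DigitalOcean comparison server.

```javascript
// Cost model for the consumption plan, using the rates quoted above.
const PRICE_PER_MILLION_EXECUTIONS = 0.20; // USD per 1M executions
const PRICE_PER_GB_SECOND = 0.000016;      // USD per GB-s
const DEDICATED_SERVER_MONTHLY = 24;       // USD, DigitalOcean droplet

// Our measured function profile: 0.1 GB of memory for 0.2 s per call.
const gbSecondsPerCall = 0.1 * 0.2; // 0.02 GB-s

function monthlyCost(executions) {
  const executionFee = (executions / 1_000_000) * PRICE_PER_MILLION_EXECUTIONS;
  const computeFee = executions * gbSecondsPerCall * PRICE_PER_GB_SECOND;
  return executionFee + computeFee;
}

// One million executions: $0.20 + $0.32 = $0.52
console.log(monthlyCost(1_000_000).toFixed(2)); // "0.52"

// Break-even against the $24/month dedicated server.
const breakEven = DEDICATED_SERVER_MONTHLY / monthlyCost(1);
console.log(Math.round(breakEven)); // roughly 46 million executions/month
```

Running this confirms the numbers in the text: a million executions costs about 52 cents, and the break-even point sits near 46 million executions per month.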

 

Elastically Scalable (Partially Debunked)

When it comes to scaling up under high RPS, serverless functions do deliver, in some categories. This YouTube video shows a Microsoft developer pushing his simple function up to 3,000 RPS, and this investigative article shows the consumption plan reaching 1,700 RPS during spikes. Functions do not scale up, however, to very high throughput scenarios or situations where consistently high throughput is necessary.

If we needed to scale this application beyond 3,000 RPS, though, Azure functions might not be able to meet our requirements. Plenty of API benchmarks in the Node.js world show the top ranges for dedicated servers. This benchmark by Fastify gives an estimated maximum of 15,000 RPS on a Node.js/Express application or 70,000 RPS on a Node.js server using Fastify. I haven’t seen Azure functions benchmarks that approach these numbers.

When it comes to consistent performance, Azure functions have also received mixed reviews. In fact, this benchmarking test shows Azure functions actually failing under a consistent peak load, even when running on a dedicated app service.

Another hidden component of elastic scaling is warmup time. Because functions in the consumption-plan pricing tier are expected to sit idle much of the time, they go into a “cold start” mode after periods of inactivity. The startup time for a cold-starting function may be more than two seconds on Linux, and up to 10 seconds on Windows. The only way to fully avoid this delay is to pay for dedicated resources for your function, which negates the original advantages of building on serverless architecture. Your function is therefore limited at both the high end and the low end of request volume.

In a low- to medium-load scenario with some short spikes, there is no question that our serverless function can handle scaling up easily, and we can expect this promise to be delivered upon when spikes range from 1 to 2,000 RPS. If our demands fall outside that range, though, it might make more sense to look at a dedicated server.

 

Cloud Native (Confirmed)

Most cloud providers clearly deliver on the cloud native promise. Functions are able to connect to a myriad of resources during their execution, creating events that trigger push notifications and emails, pushing and pulling data from cloud databases, or triggering other functions.

The method that Azure cloud resources use to connect to one another is called data bindings: a highly abstracted syntax that creates the connective tissue holding Azure resources together.

A data binding is a JSON object that defines the incoming and outgoing connections the function needs access to. The data bindings used in our example code look like this: 20 lines of configuration that give us access to the specific database item requested in an incoming HTTP request.

{
  "bindings": [
    {
      "authLevel": "anonymous",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "methods": [
        "get",
        "post"
      ],
      "route": "product/{partitionKeyValue}/{id}"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "type": "cosmosDB",
      "name": "productFromDb",
      "databaseName": "Products",
      "collectionName": "Items",
      "connectionStringSetting": "CosmosDBConnection",
      "direction": "in",
      "id": "{id}",
      "partitionKey": "{partitionKeyValue}"
    }
  ],
  "disabled": false
}

An almost limitless number of bindings and connections are available for Azure functions, so just about any connection that you can imagine is possible.
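With the bindings in place, calling the finished endpoint is plain HTTP. A sketch of a client using the route defined in the `httpTrigger` binding above; the host name and the `electronics`/`42` key values are placeholders for whatever your function app and data actually use.

```javascript
// Route shape comes from the binding: product/{partitionKeyValue}/{id}
const url =
  'https://my-function-app.azurewebsites.net/api/product/electronics/42';

async function getProduct() {
  const res = await fetch(url); // fetch is built into Node 18+
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // the product document the function serialized
}
```

Because the `authLevel` is `anonymous`, no function key is needed on the request; for `function` or `admin` levels you would append a `code` query parameter or key header.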

 

A Few More Things to Consider

Before closing out the article, I thought I would mention a few other features of serverless functions that could be relevant to your decision about whether or not to use functions in your next application:

  1. Language-agnostic: Functions allow you to write in the language that you are the most comfortable with and still maintain the ability to interact with other application components. This feature is thanks to the very well-defined protocols that all cloud providers have established for transferring data through cloud workloads.

  2. Inherent security: Using Azure functions, I can connect to a CosmosDB database in a single line of code. That connection will be secure, and I won’t need to write any code that references the connection string to my database. This is a major advantage.

  3. Inhibited workflow: Working with functions can be frustrating because the DevOps workflow around them is not as developed as it is for other software stacks. This means you will spend more time trying to deploy your application and troubleshooting in log files than you might if you had gone with a more traditional software stack.


Experiment With Serverless

Serverless is receiving boundless praise in the development community. Our experiments here show that, under low to medium usage with high amounts of variance, serverless functions can be a great option. We also uncovered a performance caveat (cold starts) that may impact the value of functions in some scenarios.

Serverless functions are best suited for workloads that are asynchronous, infrequent, in sporadic demand and highly integrated with other cloud infrastructure. Although this new model gives us a lot more flexibility when it comes to problem solving in the cloud, it is a tool bound by constraints and isn’t a solution for every problem.

I hope that this article helps you make realistic decisions around when and how to implement serverless functions in your applications!
