Tim Maliyil doesn’t characterize himself as an early adopter. But back in 2018 — when he was volunteering to build out a new donation website for his old high school in the Bronx — representatives from Google reached out to offer him a free Chromebook in exchange for a half-hour conversation about switching to serverless computing.
“I was like, ‘I’ll take a half hour for a free Chromebook, no problem,’” Maliyil said.
After the meeting, Maliyil was convinced. He switched providers to Google and tried out serverless computing for the first time, building his high school’s donation website to run on Google’s App Engine. The serverless platform was a great fit for his needs, and Maliyil switched his two companies, AlertBoot and PodFriends, over to it as well. He still seems surprised he made the switch.
“If you asked me a few years ago, I wouldn’t have touched it,” Maliyil said. “We tend to be final adopters.”
What Is Serverless Computing?
Amazon introduced Lambda in 2014, Microsoft released Azure Functions in 2016, and Google launched App Engine in 2008 and Cloud Functions in 2016. These platforms — and others, including offerings from IBM and Oracle — make up the serverless computing space, and their customers are companies that traditionally had to run code and host websites using machines they owned themselves.
Serverless computing refers to the customer’s ability to run code without being responsible for the infrastructure — that is, the computer servers — the code is running on. Although the term “serverless” encompasses a range of offerings, in most cases it refers to a service like Lambda or Cloud Functions, where the platform runs a snippet of customer code rather than an entire application, and code that isn’t currently in use is freed from memory. Customers who purchase serverless services write their code, move it to the serverless platform and let it run with very little maintenance.
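For readers who have never seen one, the sketch below shows what such a snippet might look like, following AWS Lambda’s Python convention of an (event, context) entry point. The function name and event fields are illustrative only, not drawn from any of the projects described in this story.

```python
import json


def lambda_handler(event, context):
    # The platform invokes this single function per request and handles the
    # servers, scaling and teardown behind the scenes.
    # The event fields below are hypothetical, for illustration only.
    name = event.get("donor_name", "anonymous")
    amount = event.get("amount", 0)

    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Thanks, {name}!", "amount": amount}),
    }
```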
Serverless Is Great for Small Development Teams
Gojko Adzic, who was named an AWS Serverless Hero in 2019 and is the author of a book on serverless computing, said that switching to serverless back in 2016 made sense because his company had a small team of developers.
“We would much rather spend time building business functionality rather than worrying about operations,” Adzic said. “If I can rent operations from Amazon or somebody else, I can then spend more time focusing on my own problems.”
Maliyil reached the same conclusion when he was building the donation website for his high school. He still had to pay his engineers for their time, and it made more sense for them to work on code for the website’s functionality than to figure out how to handle the infrastructure. The serverless platform also had the benefit of providing relatively hands-free maintenance, which was ideal for Maliyil’s project.
“It’s proved to be near-zero maintenance for us, outside of feature requests for the product.”
“We wanted to make sure it was a low-maintenance type of situation, so it doesn’t get expensive for me,” he said. “It’s proved to be near-zero maintenance for us, outside of feature requests for the product. But from an infrastructure perspective, it’s zero maintenance for us.”
The serverless platforms also automatically adjust the amount of server resources allocated to an application depending on how much traffic the application is receiving — a feature known as autoscaling. This means developers don’t have to worry about whether there’s enough infrastructure to handle the volume of requests, even when usage fluctuates or spikes.
The autoscaling feature also appealed to Adzic.
“We were looking at using something that would scale on demand, to not have to worry too much about reserving capacity,” Adzic said. “We could not really predict the load, and we didn’t want to spend too much money on it. At the same time, we wanted to make sure that if a million people came, a million people would be served.”
Serverless Computing Isn’t as New as It Seems
Serverless computing is the latest twist in a long history of decoupling code from the infrastructure it runs on, a shift that began with approaches such as cloud computing and containers. Despite its name, serverless computing still involves servers — it’s just that the servers are owned by companies like Amazon, Microsoft and Google, rather than the companies whose code is running on them.
“The cloud is still physical at the end of the day,” Maliyil said. “All this stuff is sitting in a physical data center, consuming electricity and being maintained and all that, so there’s still a physical aspect to all this.”
Compared with offering virtual machines, serverless computing is beneficial for providers because it allows them to serve more customers using the same infrastructure. Companies with code on a serverless platform don’t reserve a set amount of space on those servers, so if their code isn’t being actively used by a consumer, the provider can free it from memory to make room for another paying customer.
“For Google, it means they can squeeze more out of their infrastructure as well, because everyone is sharing resources,” Maliyil said. “With a virtual machine, you’re not going to achieve 100 percent efficiency — it’s impossible — but with serverless you are, essentially. You’re just paying for what you use.”
“For Google, it means they can squeeze more out of their infrastructure as well, because everyone is sharing resources.”
The serverless space is also changing fast. Just a couple of years ago, users complained about “cold starts,” the often-noticeable delay while the provider spins up code that hasn’t run recently. But Adzic said that, for Amazon at least, cold starts are no longer an issue.
“Since two years ago, I don’t think that’s a problem anymore,” Adzic said. “They improved response times significantly, and about a year and a half ago they also released a reserve capacity. So if you’re really in a situation where you need very low latency, you can just tell them to keep a certain number of Lambdas warm.”
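The “reserve capacity” Adzic mentions corresponds to what AWS calls provisioned concurrency. As a rough sketch of how it is configured with the boto3 SDK, assuming a hypothetical function and alias name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep five execution environments initialized and ready to respond,
# sidestepping cold starts for latency-sensitive traffic.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="donation-handler",    # hypothetical function name
    Qualifier="prod",                   # an alias or published version
    ProvisionedConcurrentExecutions=5,  # number of warm environments
)
```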
Adzic said that, in the last couple of years, Amazon has also made payment card industry (PCI) and HIPAA compliance guarantees for serverless computing, which it had not made before. This makes it possible for customers in regulated industries to make the switch to serverless.
Serverless Architecture Isn’t Right for Everyone
Still, a serverless approach is not without its restrictions. Every platform caps the amount of time a serverless process can run, so it’s not a good choice for certain long-running processes.
“Lambda is, at the moment, a hard stop at 15 minutes for a single task,” Adzic said. “It used to be five minutes, and now it’s 15.”
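That ceiling shows up as an ordinary configuration value: a Lambda function’s timeout is set in seconds and cannot exceed 900. A minimal boto3 sketch, with a hypothetical function name:

```python
import boto3

lambda_client = boto3.client("lambda")

# Raise the function's timeout to the platform maximum of 900 seconds
# (15 minutes); any larger value is rejected.
lambda_client.update_function_configuration(
    FunctionName="report-generator",  # hypothetical function name
    Timeout=900,
)
```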
Maliyil said setting the cap is a way for the platforms to protect themselves against code that could cause problems for everyone on the server.
“If something’s stuck there forever, it’s wasting their money too,” he said. “They’re trying to balance all these compute memory resources for all their customers, so they have to put in these emergency valves to keep bad code from ruining the project for everybody else.... If it’s not bad code and you need to have that flexibility, then the serverless option may not work out for you.”
“Serverless doesn’t mean it’s going to be a one-to-one comparison, feature-wise, to doing it yourself.”
Serverless platforms also offer a limited set of features and customization options.
“Serverless doesn’t mean it’s going to be a one-to-one comparison, feature-wise, to doing it yourself,” Maliyil said. “Doing it yourself — the sky’s the limit, if you have your own production machine. With serverless, they’re going to impose those restrictions on what features may or may not be available, the time restrictions for the code running, things like that. So you have to see if, on a technical level, it will straight out work for you.”
Fernando Medina Corey, a Pluralsight instructor who has taught several courses on Lambda, told Built In that one of the most important things to consider when deciding whether to go serverless, and which provider to use, is which languages the platform supports.
“It’s important to pick the ones that will best mesh with the skills and needs of your organization,” Medina Corey wrote. “You need to make sure they support your chosen language runtime and any resources and integrations you need.... These decisions are also very important to make with your team in mind. Do they know Python already? Make sure the tool you want to use has good developer tooling for Python!”
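On AWS, that runtime choice is an explicit parameter the moment a function is created. The boto3 sketch below is illustrative only; the function name, IAM role and deployment package are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Create a function whose code will run on a specific language runtime.
with open("function.zip", "rb") as package:
    lambda_client.create_function(
        FunctionName="donation-handler",                    # placeholder name
        Runtime="python3.8",                                # the chosen language runtime
        Role="arn:aws:iam::123456789012:role/lambda-exec",  # placeholder IAM role
        Handler="handler.lambda_handler",                   # module.function entry point
        Code={"ZipFile": package.read()},                   # zipped deployment package
    )
```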
Whether You Save With Serverless Depends on How You Use It
For the right applications, switching to serverless offers a wealth of advantages. One of the main advantages is the price.
“The reason Lambda is really interesting for us is that it is priced per request, not priced based on reserved capacity,” Adzic said. “If nobody came, we wouldn’t pay anything; if lots of people came, we’d pay money.... Essentially, the pricing is very much proportional to usage.”
The payment structure is especially advantageous for use cases where traffic is infrequent or experiences large fluctuations — Maliyil said his high school donation website only costs three cents per month to host. But it doesn’t make sense for every company.
“You have to do the cost-benefit analysis of how much this function is going to charge me,” Maliyil said.
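That analysis is usually back-of-the-envelope arithmetic along these lines. The per-request and per-GB-second rates below are illustrative placeholders rather than any provider’s current price list, and the sketch ignores free tiers:

```python
# Illustrative rates only; check your provider's current price list.
PRICE_PER_MILLION_REQUESTS = 0.20  # dollars
PRICE_PER_GB_SECOND = 0.0000167    # dollars

requests_per_month = 50_000
avg_duration_seconds = 0.2
memory_gb = 0.128  # 128 MB allocated to the function

request_cost = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
compute_cost = (
    requests_per_month * avg_duration_seconds * memory_gb * PRICE_PER_GB_SECOND
)

print(f"Estimated monthly bill: ${request_cost + compute_cost:.2f}")
```

For a low-traffic workload like the one sketched here, the estimate lands in the pennies; a high-traffic or long-running workload can come out very differently.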
Serverless Allows for More Versioning
Going serverless can also improve the DevOps process.
“The way that you can [have] multiple versions of the application is really interesting,” Adzic said. “Because Lambda is priced per request, if [users] use a single version of the application, or they’re using five different versions of the application, it costs exactly the same.”
Maliyil said having different versions was useful for rolling back mistakes and also for testing. Although version-control tools for code already exist, versioning on the platform itself makes production errors easier to handle.
“[If] we’ve goofed up — OK, fine, I click a button and the last known good version is up and running, and there’s nothing to it,” he said. “It creates a lower-stress environment, and we can deal with it on the developer level at a more controlled, non-panicked pace.”
It’s also easy to spin up environments for testing, or to do experiments like A/B testing.
“If you want to split the traffic between versions as part of your testing process, you could split the costs [and] evaluate how the traffic behaves and how the users behave in different versions of the product,” Maliyil said.
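Maliyil works on App Engine, which exposes per-version traffic controls; AWS exposes the same ideas through Lambda versions and aliases. A rough boto3 sketch of both the rollback and the weighted traffic split described above, with placeholder function, alias and version names:

```python
import boto3

lambda_client = boto3.client("lambda")

# Roll back: point the "prod" alias at the last known good version.
lambda_client.update_alias(
    FunctionName="donation-handler",  # placeholder name
    Name="prod",
    FunctionVersion="7",              # last known good version
)

# Split traffic: keep version 7 as the default but route 10 percent of
# requests to version 8 to see how users and errors behave.
lambda_client.update_alias(
    FunctionName="donation-handler",
    Name="prod",
    FunctionVersion="7",
    RoutingConfig={"AdditionalVersionWeights": {"8": 0.10}},
)
```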
The Limitations of Serverless Also Offer Advantages
Using a serverless platform can also offer security benefits. Known vulnerabilities are often patched through the platform without customers having to worry about them.
“In 2018, there were these vulnerabilities discovered in the Intel processors,” Adzic said. “I remember waking up one day and overnight, we got an email from a concerned client administrator who read about this news that just broke while I was sleeping.... I still had no idea what it was, I just woke up, and I remember typing the vulnerability into Google and the first result was that Amazon Lambda was already patched. For me, as a small business owner, the fact that somebody else dealt with that while I was sleeping is amazing.”
Maliyil said the platforms operate on a “shared security model,” where platforms take care of many security vulnerabilities, but customers are responsible for the security of their own code.
“As a victim of the limitations of App Engine, the good consequence is that it’s more secure.”
“They provide the security up to a certain level, but the last 50 percent is really up to you,” he said. “Customers could still make pretty fatal errors ... but they do guide you in terms of securing it, locking it down, best practices.”
The security gained through using serverless is itself a consequence of the limitations inherent in the platforms, all of which developers need to weigh when deciding whether to go serverless.
“If you do your own virtual machine, then yes — you’re more likely to make basic security mistakes that make you vulnerable, because you have more control,” Maliyil said. “As a victim of the limitations of App Engine, the good consequence is that it’s more secure, because they only let you do things a certain way.... The limitations lend itself to better security, kind of naturally.”