Three Services That Make the Cloud Worth It

RDS, CloudFront, and EC2 Time Synchronization Service are all managed cloud services that offer developers great bang for their buck.

Written by Ying Wang
Published on Jun. 13, 2020

One of my favorite YouTube shows is Worth It by BuzzFeed. On the show, three hosts try a certain cuisine made by three different restaurants, sold at three different price points. At the end of the tasting, the hosts judge which restaurant they liked the most and explain why. Besides blasting glorious footage of sushi rolls and steaks across your screen, the show also gives you a peek into how connoisseurs weigh judgments based on value and cost. The expensive dish, though almost always the best quality, doesn’t always come out on top. Many times, the hosts would rather take their significant others on a date night to get the medium-priced dish. Or they opt for the cheaply priced dish that reminds them of home-cooked food from mom and dad. Value comes in all shapes and sizes.

Cloud-native services are much the same. I’ve learned a great deal about a variety of AWS services over the past few years, and they all have different value propositions. I recently talked a bit about the cloud’s kernel of optionality: if you want the most control over your stack, you’d want to stick to a small number of extremely reliable and replaceable services. If you relax certain constraints, however, you can consider some managed services that I think offer great bang for your buck.

Related Reading: The Cloud’s Kernel of Optionality

 

RDS

AWS Relational Database Service (RDS) is a managed offering of several database engines, including MySQL and PostgreSQL. AWS RDS has some limitations that prevented me from leveraging it for the personal project I’m working on during my sabbatical, which led me to learn how to deploy a database on AWS myself. In doing so, I’ve come to appreciate just how much time RDS could save an engineer in production.

As just one example, database administrators (DBAs) typically configure a root username and password combination for administration purposes. Since mistakes in handling root privileges can destroy customer data, AWS removes the human from the loop in secrets creation: AWS Secrets Manager automatically creates a key/value pair that serves as the database’s username and password, optionally seeded by AWS Key Management Service. The longer a secret remains in active use, however, the more vulnerable and stale it becomes, since the credentials or the encryption protecting them may eventually be compromised. It’s therefore best practice to rotate your secrets every few months or so.

Every database has a different configuration file and may behave differently over its life cycle (e.g., different signal codes and messages on startup and shutdown). Rolling your own automated secrets rotation therefore means writing an AWS Lambda function, typically wired up as a custom resource in AWS CloudFormation, AWS’s infrastructure-as-code solution. Because Lambda runtimes track each language’s deprecation schedule, that function likely needs updating whenever the AWS Lambda team publishes and adopts a new major runtime version. Although AWS Lambda guarantees older functions keep running if left untouched, updating a function or creating a new one requires compatibility with one of Lambda’s current runtimes. Creating a new RDS instance implies creating a new automatic secrets rotation tool, which means writing and maintaining yet another function.
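
To give a sense of what that custom rotation function involves, here is a minimal sketch of the handler interface Secrets Manager expects. The four step names are part of the Secrets Manager rotation contract; everything inside each branch is a placeholder you would fill in for your particular database.

```python
import boto3

# You'd use the Secrets Manager client inside each branch to read and stage
# secret versions (AWSPENDING, AWSCURRENT).
secretsmanager = boto3.client("secretsmanager")


def lambda_handler(event, context):
    """Entry point Secrets Manager invokes once per rotation step."""
    secret_id = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]

    if step == "createSecret":
        # Generate a new candidate password and stage it as the AWSPENDING version.
        ...
    elif step == "setSecret":
        # Log in to the database with the current credentials and apply the new ones.
        ...
    elif step == "testSecret":
        # Verify the AWSPENDING credentials actually work against the database.
        ...
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT so clients pick up the new secret.
        ...
    else:
        raise ValueError(f"Unknown rotation step: {step}")
```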

If this sounds complicated to you, don’t worry: it sounds complicated to me, too. That’s why I, googly-eyed after realizing the colossal difficulty of implementing this feature, simply chose not to. The nice thing is that AWS Secrets Manager already knows how to rotate secrets for AWS RDS resources. Since the process is automatic, RDS can rotate these secrets every few days instead of every few months, with no extra overhead.
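
By contrast, letting Secrets Manager own the RDS credentials makes a tighter rotation schedule almost a one-liner. Here’s a minimal boto3 sketch, assuming a secret named my-rds-credentials that already has a rotation function attached (the secret name is hypothetical):

```python
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current credentials at startup instead of hard-coding them anywhere.
credentials = secrets.get_secret_value(SecretId="my-rds-credentials")["SecretString"]

# Rotate every few days rather than every few months. Secrets Manager invokes the
# attached rotation function on this schedule with no further work on your part.
secrets.rotate_secret(
    SecretId="my-rds-credentials",
    RotationRules={"AutomaticallyAfterDays": 7},
)
```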

RDS also conveniently handles other tasks, such as database migrations, high availability, replication and failover, and automated backups, all of which have far more serious consequences for your data integrity and availability. This is why AWS RDS is one of the most expensive AWS services I would still highly recommend. Trust me, as somebody who has spent the past four months not shipping: if you don’t use RDS, your path to shipping grows much longer, and that’s not a risk worth taking for a reasonably monetizable product.

If you really don’t want to use RDS due to some of its limitations (e.g., no root access to the server, lack of certain PostgreSQL extensions), I still have good news for you: you don’t need to choose either/or. For example, say you’re using PostgreSQL and you’ve built an online analytical processing (OLAP) tool that requires custom PostgreSQL extensions. Simply deploy a PostgreSQL instance to EC2 using Packer and CloudFormation, then connect it to an RDS instance using PostgreSQL’s postgres_fdw foreign data wrapper (FDW) to create foreign table references in your custom instance. AWS RDS supports connections to other PostgreSQL instances in this manner. Now you get the reliability of RDS and the customizability and optionality of your database-as-an-app!
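
Here’s a rough sketch of that wiring from the self-managed instance’s side using psycopg2; every hostname, database name and credential below is a placeholder:

```python
import psycopg2

# Connect to the self-managed PostgreSQL instance running on EC2.
conn = psycopg2.connect(host="my-ec2-postgres.internal", dbname="analytics",
                        user="admin", password="REDACTED")
conn.autocommit = True

with conn.cursor() as cur:
    # Load the foreign data wrapper that ships with PostgreSQL.
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw;")

    # Point a foreign server at the RDS instance.
    cur.execute("""
        CREATE SERVER rds_server
            FOREIGN DATA WRAPPER postgres_fdw
            OPTIONS (host 'my-db.example.us-east-1.rds.amazonaws.com',
                     dbname 'app', port '5432');
    """)

    # Map the local role to the RDS credentials.
    cur.execute("""
        CREATE USER MAPPING FOR admin
            SERVER rds_server
            OPTIONS (user 'app_user', password 'REDACTED');
    """)

    # Expose the RDS tables locally as foreign tables.
    cur.execute("IMPORT FOREIGN SCHEMA public FROM SERVER rds_server INTO public;")
```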

 

CloudFront

I’ve worked professionally as a front-end developer, and what I really love about end-user clients is how much return you get on your investment. They’re cheap to make and maintain, especially compared to serious backend infrastructure, and they sell the product to many non-technical stakeholders. I’ve found the secret to hosting JavaScript apps cheaply is AWS CloudFront. CloudFront is a managed content delivery network (CDN) that brings content “closer” to the user by caching it on servers all around the world. It connects to AWS S3 for static site hosting (serving the “npm build” output you might see for various React.js applications) and to AWS Route 53 (the managed DNS service, or what maps $CLOUDFRONT.cloudfront.net to $YOUR_DOMAIN.com).
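
To give a feel for how little glue this takes, here’s a minimal deploy sketch with boto3: push a built page to the S3 bucket behind the distribution, then invalidate the CloudFront cache so the edge picks up the new version. The bucket name and distribution ID are hypothetical.

```python
import time
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

BUCKET = "my-static-site-bucket"      # hypothetical S3 bucket backing the site
DISTRIBUTION_ID = "E1234567890ABC"    # hypothetical CloudFront distribution ID

# Upload the built entry point (in practice you'd walk the whole build/ directory).
s3.upload_file("build/index.html", BUCKET, "index.html",
               ExtraArgs={"ContentType": "text/html"})

# Invalidate cached copies at the edge so users see the new build immediately.
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
```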

Combine CloudFront with a static site generator like Hugo or a performance-focused JavaScript framework like Preact, and you can raise your performance benchmark scores (such as those generated by Google Lighthouse) quite a bit. For example, my tech blog, Bytes by Ying, has a score of 98 percent, with a total blocking time of 80ms. Increasing site performance reduces customer churn and helps optimize SEO, which makes sales and marketing happier.

You can also easily scale your website to support other developers. I template my sites using AWS CloudFormation and Makefiles, so set-up from scratch involves running one command. You can also deploy JavaScript applications on a per-PR basis with AWS CodeBuild and AWS CodePipeline, using automated or one-click deploys for pull request validation at little to no cost, as sketched below. Automated UI/UX testing is great, but to get the feel and finish right, you need a human touch. You can do all of this with GitLab, Netlify or other platforms, too, but I personally like AWS for its optionality (e.g., swapping CodePipeline for Jenkins) and for the fact that its costs don’t increase with additional users. You just create a new IAM account, which is no biggie, versus Netlify’s panic-inducing price increase after three developers.
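
As a sketch of that per-PR flow, the snippet below kicks off a one-off preview build for a pull request with boto3. The project name is hypothetical, and the pr/123 source version assumes a GitHub-backed CodeBuild project.

```python
import boto3

codebuild = boto3.client("codebuild")

# Build and deploy a preview of pull request #123 (hypothetical project and PR).
codebuild.start_build(
    projectName="my-site-pr-preview",
    sourceVersion="pr/123",
)
```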

All these features come at a fantastic price point. CloudFront is free for two million HTTP requests and 50 GB of transfer bandwidth per month, which is plenty for most static websites. Of course, it isn’t really designed for traffic in the single-digit megabytes or smaller. CloudFront’s real purpose is heavy-duty audio/video transcoding and streaming, where you’re sending tens of gigabytes of data per request across many, many concurrent requests. Think Super Bowl live-streaming numbers. When the underlying CDN is designed, tested and built for that kind of traffic, your websites are and always will be small fish in the pond. Hence the very competitive pricing: according to AWS Cost Explorer, CloudFront has cost me $1.61 this year so far for 10 static websites, as opposed to a dedicated site hosting solution like Netlify, which may cost much, much more.

 

EC2 Time Synchronization Service

From reading books like Designing Data-Intensive Applications by Martin Kleppmann, I’ve learned that one of the trickier issues in distributed systems is consistency. Time synchronization undergirds consistency: if two computers cannot agree on what time it is, they cannot order operations like database transactions. Hence, some cloud-native solutions, such as Google Cloud Platform (GCP)’s Cloud Spanner, leverage a time synchronization API (TrueTime) for certain operations, like creating a database snapshot, in order to offer more than “five nines” (>99.999 percent) end-to-end availability. Time synchronization at this level requires precise hardware like atomic clocks, along with ownership of the network, features that may be prohibitively expensive to implement in an on-premises solution.

AWS offers something similar in its EC2 Time Synchronization Service, which grants every EC2 instance access to Amazon’s own atomic clock references. Although this service isn’t terribly complicated from an end-user perspective, it forms a fundamental assumption behind more complex system designs that wouldn’t be possible otherwise. If you’ve deployed a multi-host solution on AWS, make sure to turn on time synchronization on your hosts; at minimum, it helps you avoid reading misleading logs.
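
In practice you simply point chrony or ntpd at the service’s link-local NTP endpoint, 169.254.169.123, and let the daemon do its job. As a sanity check, though, you can query the endpoint by hand. Here’s a small SNTP sketch in Python meant to run on an EC2 instance; it ignores network delay and the fractional part of the timestamp, so treat the printed offset as a rough estimate.

```python
import socket
import struct
import time

NTP_HOST = "169.254.169.123"   # Amazon Time Sync endpoint, reachable from any EC2 instance
NTP_TO_UNIX = 2208988800       # seconds between the NTP epoch (1900) and Unix epoch (1970)

# Minimal SNTP client request: leap indicator 0, version 3, mode 3 (client).
request = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2)
    sock.sendto(request, (NTP_HOST, 123))
    response, _ = sock.recvfrom(1024)

# The server's transmit timestamp (seconds field) lives at bytes 40-43 of the reply.
server_seconds = struct.unpack("!I", response[40:44])[0] - NTP_TO_UNIX
print(f"server time differs from local clock by ~{server_seconds - time.time():.3f}s")
```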

I think it’s really easy to miss this service. I personally didn’t know it even existed for all EC2 instances until I happened to read about it. It’s also likely to be under-appreciated: if it’s turned on and works properly, you don’t really know it’s there until a leap second is applied every so often. Learning more about this service has increased my appreciation for great documentation and for simplicity in system design.

And the best part is it’s completely free. It’s a single static IP address that Amazon manages, and you leave a process running on your instance to poll it for the correct time. Since synchronization is handled by the client compute instance, and since requests are lightweight, the service scales quite well.

Cloud services can be difficult to manage and scale, which can make it harder to meet certain development requirements. Luckily, cloud providers have built-in support for some of the cloud’s trickiest problems, and they offer much, much more than what I’ve presented here: services that provide great convenience and faster development velocity, minimizing time-to-ship.

 
