In explaining DevOps, a tech buzzword is helpful: “siloed.” In plainer terms, disconnected or isolated. That’s what software development and operations teams used to be in relation to one another. An organizational philosophy that unites the two, DevOps is often visualized as an infinity loop. The dev side codes, builds and tests the software, while the ops side releases, deploys and monitors how it functions in the real world. And since they’re intertwined, teams can quickly find and fix any issues that arise.
On a more granular level, DevOps includes tracking code changes, adhering to the now-standard model of continuous integration and continuous delivery (CI/CD), scheduling and orchestrating containers, managing configuration, and careful, consistent post-release monitoring. Some of the applications on which DevOps teams rely to achieve all that are essentially standardized across the tech board; others vary depending on preference and environment.
Top DevOps Tools
- GitHub
- GitLab
- Bitbucket
- Jenkins
- Docker
- Kubernetes
- Microsoft Azure
- SaltStack
- Terraform
- Amazon Web Services
While that’s all as highly technical as it sounds, DevOps is more about getting as close as possible to frictionless, hand-in-glove collaboration than it is about gear-head tool obsession.
But once the foundational buy-in is cemented and the time arrives to talk shop, an organization must bear in mind some key application distinctions. Here are 17 DevOps tools you should know.
Source Control Management
When it comes to managing changes to source code, Git is the de facto standard.
“In source control management, there was a battle and Git basically won, for better or for worse,” Matt Stratton, a DevOps advocate, told Built In in 2019.
A few factors likely secured Git’s pole position: it was built for Linux kernel development, its rise coincided with a broader push toward open source, and its distributed structure gives every developer on the team a complete local copy of the source code and its history. Now any code repository host with hopes of real adoption had better support Git.
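To make “distributed” concrete, here’s a minimal sketch: every clone is a complete copy of the repository, history included. The repository URL is a placeholder.

```python
import subprocess

def run(*args: str) -> str:
    """Run a git command and return its standard output."""
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

# Cloning gives every developer a complete local copy of the repository,
# history included -- that's the "distributed" in distributed version control.
run("git", "clone", "https://github.com/example-org/example-project.git", "project")

# The full commit history is available locally; no server round trip needed.
print(run("git", "-C", "project", "log", "--oneline"))
```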
GitHub
As Git became the standard, GitHub became its standard host. In software engineering circles, the GitHub icon is as ubiquitous as a Twitter or LinkedIn avatar link on a personal homepage. And once again, it came down to the open source angle.
“Social coding is where GitHub led the charge,” Stratton said. “So it’s not just about versioning your file, it’s about the interaction of humans. You’re able to do a pull request, note the changes you want made and write into it the whole communication structure.”
Acquired by Microsoft in 2018 for $7.5 billion, GitHub facilitated a culture that ultimately pervaded the tech sector. Now, Stratton said, “people are able to use the same tooling and way of thinking on open source side projects as they do in their regular enterprise or startup day jobs.”
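That social layer is scriptable, too. As a rough sketch, opening a pull request through GitHub’s REST API looks something like this; the repository, branch names and token are placeholders.

```python
import requests

# Hypothetical repo and token; the endpoint is GitHub's REST API for pull requests.
resp = requests.post(
    "https://api.github.com/repos/example-org/example-repo/pulls",
    headers={
        "Authorization": "Bearer <personal-access-token>",
        "Accept": "application/vnd.github+json",
    },
    json={
        "title": "Add input validation",
        "head": "feature/validation",  # the branch with the changes
        "base": "main",                # the branch to merge into
        "body": "Describes the changes and opens the conversation.",
    },
)
resp.raise_for_status()
print(resp.json()["html_url"])  # link to the new pull request
```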
GitLab
As you might have guessed from its name, GitLab (like GitHub) is a web-based Git repository — but with some notably different features. Along with source code hosting, GitLab boasts native support for CI/CD and containers (more on both below).
“Everything you would want to do in a modern development workflow is housed in a single place in GitLab,” Jeffrey Smith, senior director of production operations at Basis Technologies, told Built In in 2019.
That makes it accessible to newer users.
“For an organization that’s starting off with their DevOps journey,” Smith said, “the less friction, the easier.”
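As a rough sketch of that single-place idea, the python-gitlab client library reaches the code and the CI/CD pipelines through one API. The project path and token are placeholders.

```python
import gitlab  # the python-gitlab client library

# Hypothetical project and token; GitLab hosts the repo, the CI/CD
# pipelines and the container registry behind one API.
gl = gitlab.Gitlab("https://gitlab.com", private_token="<access-token>")
project = gl.projects.get("example-group/example-app")

# Recent pipeline runs for the same project that hosts the code.
for pipeline in project.pipelines.list(per_page=5):
    print(pipeline.ref, pipeline.status)
```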
Bitbucket
When it comes to tools, DevOps pros are often inclined toward “whatever works.” All repo applications perform the same basic function, so organizational preference holds sway.
“As long as you’re using some Git-based tool, I don’t really think it matters whether it’s Bitbucket or GitHub or GitLab,” Stratton said. “People put too much energy into that.”
Outfits that use Atlassian can keep it simple by opting for Atlassian’s offering: Bitbucket. That way, “the path will be much easier from an organizational buy-in perspective,” Smith explained.
Continuous Integration/Continuous Delivery
A key DevOps tenet, CI/CD refers to the pipeline that ensures quick and efficient software development. Continuous integration, or CI, essentially means that developers run a build whenever they merge new code changes into the master branch.
“But just building the software these days isn’t enough,” Stratton said. “You have to deploy it.”
That’s where you get continuous delivery, or the idea that, just as DevOps automates testing, it also automates the release of the final version of an application.
To accomplish that, CI and CD tools all rely on similar underlying technology and offer similar features. And each tool is only as effective as the programmer behind it, Smith stressed.
“The bottom line is, it executes a command that you’ve written in-house,” he said. “So if your test suite sucks, it’s going to suck in CircleCI, and suck in Bamboo, and suck in Jenkins.”
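To make that concrete, here’s a deliberately bare-bones sketch of what any CI/CD tool ultimately does. The build, test and deploy commands are placeholders for whatever a team has written in-house.

```python
import subprocess
import sys

# A CI/CD pipeline, reduced to its essence: run the commands the team
# wrote, in order, and stop at the first failure.
PIPELINE = [
    ["make", "build"],           # continuous integration: build on every merge
    ["make", "test"],            # ...then run the in-house test suite
    ["./deploy.sh", "staging"],  # continuous delivery: automate the release
]

for step in PIPELINE:
    result = subprocess.run(step)
    if result.returncode != 0:
        sys.exit(f"pipeline failed at: {' '.join(step)}")
print("build, test and deploy all green")
```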
So what makes one option more attractive than the next?
CircleCI
For the less cloud-averse, CircleCI is a popular standout. (Travis CI is another option, although it’s fallen by the wayside among many software engineers, Smith and Stratton both confirmed.) With CircleCI, you can enable GitHub checks, run images from Docker’s registry and easily debug through SSH or local builds.
Jenkins CI
Perhaps the central consideration in CI/CD is how much you should trust the cloud. For some organizations, the answer is: not so much.
“Maybe an organization doesn’t like pushing their proprietary information into a cloud provider,” Smith said. “Maybe it has a number of restrictions from sharing code with third parties that don’t meet particular standards.”
For that kind of situation, an on-premises option makes the most sense, and Jenkins leads the market there. It’s open source and boasts an ultra-robust plugin ecosystem. And because it’s been around since 2011, it has a solid performance track record.
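And because Jenkins is self-hosted, even driving it programmatically stays inside the organization’s network. A rough sketch using the python-jenkins client library, with a hypothetical server address, credentials and job name:

```python
import jenkins  # the python-jenkins client library

# A self-hosted Jenkins instance -- code and credentials never leave
# the organization's own network.
server = jenkins.Jenkins(
    "http://jenkins.internal.example.com:8080",
    username="ci-bot",
    password="<api-token>",
)

server.build_job("example-app-pipeline")  # kick off a build
info = server.get_job_info("example-app-pipeline")
print(info["lastBuild"]["number"] if info["lastBuild"] else "no builds yet")
```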
GoCD
Back in 2010, ThoughtWorks veterans Jez Humble (a co-author of the 2019 Accelerate State of DevOps Report) and David Farley literally wrote the book on continuous delivery. Many of the ideas they first explored there took shape in a tool called Go, now known as GoCD.
“It’s intended as an implementation of a lot of those core principles,” Stratton said.
The free and open-source tool uses parallel and sequential execution to break down each pipeline into stages, then jobs, then tasks. That means developers can draw a pipeline with plenty of fine-point detail and repurpose those details across other pipelines.
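Here’s that hierarchy sketched in plain Python (the names are illustrative, not GoCD’s configuration format): stages run in sequence, jobs within a stage can run in parallel, and a job defined once can be reused across pipelines.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    tasks: list[str]     # shell tasks, run in order

@dataclass
class Stage:
    name: str
    jobs: list[Job]      # jobs in a stage can run in parallel

@dataclass
class Pipeline:
    name: str
    stages: list[Stage]  # stages run sequentially

unit_tests = Job("unit-tests", ["make test"])  # defined once...

build = Pipeline("example-app", [
    Stage("build", [Job("compile", ["make build"])]),
    Stage("verify", [unit_tests, Job("lint", ["make lint"])]),
])
nightly = Pipeline("example-nightly", [Stage("verify", [unit_tests])])  # ...reused here

print([stage.name for stage in build.stages])  # ['build', 'verify']
```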
Containers
If you’ve spent time around developers, you’ve probably encountered a shirt or sticker emblazoned with the shrug emoticon and this classic nugget of coding mock-resignation: “It works on my machine.” Stratton referenced that infamous eye-roller to illustrate what a milestone containers really were.
Containers effectively eliminated that headache by standardizing the development environment and making it predictable, no matter whose machine the code happens to run on.
“Containerizing basically provides a level of isolation that says this process does this one thing, and everything that it needs to do can be moved [between environments],” he said.
Containerization splits packaging from the actual runtime environment, which means developers know that what works here will also work there — be it a laptop or server, cloud or data center.
“It really puts the focus on the application and makes it a lot easier to ship.”
Docker
Containers as we now know them were a watershed, but they aren’t exactly new. Linux containers have existed for a very long time, Stratton noted, but they were difficult to use. Docker provided an accessible format and approach, igniting still-mounting (and lucrative) enthusiasm across the industry. In fact, Docker did it so effectively that its name became a synonym for containers — what Kleenex is to tissues or Xerox to photocopying.
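A minimal sketch of that accessibility, using Docker’s Python SDK: the same public image behaves the same wherever a Docker engine is running.

```python
import docker  # Docker's Python SDK

client = docker.from_env()

# The same image runs the same way on a laptop, a server or in the
# cloud -- the container carries its environment with it.
output = client.containers.run(
    "python:3.12-slim",  # a public image from Docker Hub
    ["python", "-c", "print('works on every machine')"],
    remove=True,         # clean up the container afterward
)
print(output.decode())
```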
Kubernetes
Docker offered a way to package code so it could run the same regardless of environment. But containers still needed to be operationalized, managed and scaled. Kubernetes, “the plumbing of your infrastructure,” as Stratton put it, emerged as an open-source offshoot of Google’s internal Borg system and filled that space. It soon became another “de facto standard.”
It’s now commoditized by the cloud heavyweights — as Amazon’s EKS, Microsoft’s AKS and Google’s GKE — but organizations still need an internal understanding of how to optimize Kubernetes.
“You want them to schedule for best availability and be able to scale up and down as you need to,” Stratton said.
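That scaling is itself an API call. A minimal sketch using the official Kubernetes Python client, assuming a hypothetical deployment named example-web and credentials in the local kubeconfig:

```python
from kubernetes import client, config  # the official Kubernetes Python client

# Credentials come from the local kubeconfig, just as with kubectl.
config.load_kube_config()
apps = client.AppsV1Api()

# Scheduling and scaling are API calls: ask for five replicas of a
# deployment and let Kubernetes place them for availability.
apps.patch_namespaced_deployment_scale(
    name="example-web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```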
Microsoft Azure
Microsoft Azure is a hybrid cloud platform that offers hundreds of products and services. The platform provides container services for different applications, but teams that already use Kubernetes may lean toward the Azure Kubernetes Service (AKS). Automated workflows make it easier to manage Kubernetes clusters and CI/CD pipelines in Azure, so teams can prepare and deploy both containerized and non-containerized applications in less time.
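As a rough sketch, assuming the azure-identity and azure-mgmt-containerservice packages and a placeholder subscription ID, listing a subscription’s AKS clusters looks something like this:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

# AKS clusters are managed Azure resources, so listing them (and their
# Kubernetes versions) is a single SDK call.
credential = DefaultAzureCredential()
aks = ContainerServiceClient(credential, "<subscription-id>")

for cluster in aks.managed_clusters.list():
    print(cluster.name, cluster.kubernetes_version)
```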
Configuration Management
While some in software development seem ready to file configuration management alongside Dillinger, disco and the dinosaurs, rumors of its death may be exaggerated. In a post-container world, untracked software changes — aka configuration drift — aren’t the problem they once were, since engineers can usually just reset the container to its previously defined state.
“But you still need to get to that initial state,” Smith said. “And how you get there is configuration management.”
CM means “having the ability to test [a configuration item] and to do abstractions against it,” Stratton said. “Just like I test my application, I need to put it through a continuous delivery pipeline, so that I understand when I run, this is going to do the thing I expect.”
Chef
It’s useful to think about configuration management in terms of imperative vs. declarative approaches. With imperative tools, like the written-in-Ruby Chef, users build a file that reads from top to bottom and executes commands in the order they’re written. For those who don’t speak software development, that basically means more control under the hood.
“It gives you an incredible amount of control over what’s happening, how it’s happening, and how those things are actually getting manifested,” Smith said.
At the same time, for folks who lean more toward Ops than Dev, it may be too complex.
“For some organizations that have traditional IT folks, who don’t have a lot of programming backgrounds, those teams could struggle to implement tools like Chef,” Smith said. “Because it’s more programming focused.”
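To illustrate the imperative style in the abstract (this is plain Python for illustration, not Chef’s Ruby DSL): the author spells out each step, and the file executes top to bottom.

```python
import shutil
import subprocess

# An imperative configuration run: steps execute top to bottom, and the
# author controls exactly what happens, how, and in what order.
subprocess.run(["apt-get", "install", "-y", "nginx"], check=True)  # 1. install the package
shutil.copy("templates/nginx.conf", "/etc/nginx/nginx.conf")       # 2. then write the config
subprocess.run(["systemctl", "restart", "nginx"], check=True)      # 3. then restart the service
```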
Puppet
On the more streamlined (if less minutely controllable) declarative side, Puppet remains a go-to.
“You don’t really need to worry about the specific implementation bits; you just define the final state of how the thing should look,” Smith said.
Instead of the imperative top-to-bottom read, Puppet effectively figures out a dependency tree. Or, as Smith explained, “what needs to be created first before another can be. It does a lot of that magic for you.”
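To see what that magic amounts to, here’s the declarative style sketched in plain Python (not Puppet’s DSL): you declare the desired end state and its prerequisites, and a dependency resolver derives the execution order.

```python
from graphlib import TopologicalSorter

# Desired end state: each resource maps to the resources it depends on.
desired_state = {
    "package:nginx": [],  # no prerequisites
    "file:/etc/nginx/nginx.conf": ["package:nginx"],
    "service:nginx": ["file:/etc/nginx/nginx.conf"],
}

# The "magic": the dependency tree resolved into a safe execution order,
# with prerequisites always handled first.
for resource in TopologicalSorter(desired_state).static_order():
    print("ensure", resource)
```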
Other popular declarative options include Ansible and Terraform. But Puppet, like Chef, has an advantage over those two in at least one sense: it’s been around longer and has built up a robust community.
SaltStack
Beyond the imperative-versus-declarative question, another configuration management consideration is what’s called remote execution — the ability to run a command across a series of servers. Every server that meets given criteria can execute the corresponding command.
“And that’s great in an environment with a ton of servers,” Smith said. Such an organization may do well with SaltStack, which Smith lauded for its “excellent remote execution capabilities.”
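A rough sketch using Salt’s Python API, as run on a Salt master; the target pattern and command are examples.

```python
import salt.client  # Salt's Python API; runs on the Salt master

local = salt.client.LocalClient()

# One call, many servers: every minion whose ID matches "web*" runs the
# command and reports back.
results = local.cmd("web*", "cmd.run", ["uptime"])
for minion, output in results.items():
    print(minion, "->", output)
```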
Terraform
The creation of HashiCorp, Terraform is an infrastructure-as-code tool that uses human-readable configuration files to provision resources on companies’ cloud platforms via their APIs. Teams can store, share and reuse these files, managing everything from simpler components like networking resources to more complex elements like SaaS services. Although Terraform is technically a provisioning tool, it pairs well with configuration management tools like Chef, Puppet and Ansible.
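Day to day, Terraform’s workflow boils down to a few commands run against a directory of configuration files. A minimal sketch scripted from Python, with a placeholder directory name:

```python
import subprocess

# The .tf files in the working directory declare the desired
# infrastructure; these commands make reality match them.
WORKDIR = "infra/"

subprocess.run(["terraform", "init"], cwd=WORKDIR, check=True)   # fetch providers
subprocess.run(["terraform", "plan"], cwd=WORKDIR, check=True)   # preview the changes
subprocess.run(["terraform", "apply", "-auto-approve"], cwd=WORKDIR, check=True)  # apply them
```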
Monitoring and Observability
Once an application is built and released, DevOps teams must constantly ensure that it’s running as intended. A slew of monitoring tools — ranging from traditional alert systems to dashboards that provide a lot of information at a glance — now allows engineers and operations departments to stay on top of performance.
But the most talked-about emerging development in monitoring — and perhaps DevOps as a whole — is observability. It’s a complicated, much-debated term, but the gist is this: systems have become so complex that the teams watching them don’t necessarily know which issues to look for or what alerts to set. In light of that, it’s necessary to take more of a macro, bird’s-eye view.
Prometheus
Before we dive into observability, let’s look at more traditional monitoring. In terms of the divide between basic alerts and dashboard-based visualization, Smith argues, software engineers should move toward the latter. More specifically, that means employing tools like Prometheus that “bring together metrics, learning and visualization in a single place.”
That illustration of data provides a greater degree of clarity than a more binary tool, like Sensu, that simply alerts users when a certain command spits out an unexpected result. In terms of troubleshooting, that kind of simple alerting only gets you so far.
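On the instrumentation side, feeding Prometheus is straightforward. A minimal sketch using the official prometheus_client library, with placeholder metric names and simulated work: the app exposes an HTTP endpoint, and the Prometheus server scrapes it on a schedule.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# The app exposes metrics over HTTP; Prometheus scrapes and stores them
# for dashboards and alerting.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

start_http_server(8000)  # metrics served at http://localhost:8000/metrics

while True:              # simulate a request loop
    with LATENCY.time():               # record how long the "work" takes
        time.sleep(random.random() / 10)
    REQUESTS.inc()                     # count it
```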
PagerDuty
“Single pane of glass” has been an IT cliché for years, but there’s a reason the concept persists. Any tool that actually unites a thicket of inputs in a single dashboard is an extra-strength headache reliever. PagerDuty essentially aggregates various digital signals, including monitoring tools, while also streamlining alerts. That helps prevent alarm fatigue and allows responders to take appropriate action. However handy a tool is, though, internal best practices are a must.
“A big part of monitoring is not just getting the signals, it’s also having a good incident-response culture,” Stratton said. “That’s a big part of DevOps.”
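Under the hood, a signal is just a structured event. As a rough sketch, triggering an incident through PagerDuty’s Events API looks something like this; the routing key (which comes from a PagerDuty service integration) and the payload details are placeholders.

```python
import requests

# Trigger an incident through PagerDuty's Events API v2. PagerDuty
# dedupes, routes and pages the on-call responder.
resp = requests.post(
    "https://events.pagerduty.com/v2/enqueue",
    json={
        "routing_key": "<integration-routing-key>",
        "event_action": "trigger",
        "payload": {
            "summary": "Checkout error rate above 5% for 10 minutes",
            "source": "checkout-service",
            "severity": "critical",
        },
    },
)
resp.raise_for_status()
```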
Honeycomb
Depending on whom you ask, the emergence of observability as a distinct term may or may not have a marketing component. But there’s genuine technological advancement behind it, too. The pacesetting company in this realm is Honeycomb, which lets engineers collect rich event data and trace requests as they move through distributed systems.
“It enables us to ask new questions and look for hotspots and correlations because we do have these very complicated systems,” Stratton said.
As Smith put it, observability is “monitoring with the technical constraints that we used to have on monitoring removed” — namely the inability to attach rich, queryable metadata to metrics.
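A rough sketch using libhoney, Honeycomb’s Python SDK, with placeholder write key, dataset and field values; the point is that each event carries arbitrary, high-cardinality metadata that can be sliced and queried later.

```python
import libhoney  # Honeycomb's Python SDK

libhoney.init(writekey="<write-key>", dataset="example-production")

event = libhoney.new_event()
event.add({
    "service": "checkout",
    "endpoint": "/cart/submit",
    "user_id": "u-48291",        # high-cardinality fields are fine here
    "duration_ms": 212,
    "trace.trace_id": "abc123",  # ties the event into a distributed trace
})
event.send()
libhoney.close()  # flush pending events before exit
```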
Honeycomb is also home to several influential figures in not only DevOps, but software engineering in general, including Liz Fong-Jones and company co-founders Christine Yen and Charity Majors, who has arguably done more than anyone to define and popularize observability.
Amazon Web Services
Amazon Web Services (AWS) is a flexible cloud platform that has established itself as a leader in cloud security. The platform’s infrastructure is designed to meet the security standards of global banks and even the military. To reinforce that reputation, AWS has expanded its observability and monitoring offerings. While the AWS Marketplace still features monitoring products like AppDynamics and Splunk, companies can also choose among observability products like Dynatrace, New Relic One and Sumo Logic.
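AWS’s own entry point for monitoring is Amazon CloudWatch. A minimal sketch that publishes a custom metric with boto3, AWS’s Python SDK; the namespace, dimensions and values are examples.

```python
import boto3  # AWS's Python SDK

# Publish a custom metric to Amazon CloudWatch, AWS's native
# monitoring service. Credentials come from the usual AWS config.
cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="ExampleApp",
    MetricData=[{
        "MetricName": "CheckoutErrors",
        "Dimensions": [{"Name": "Service", "Value": "checkout"}],
        "Value": 3.0,
        "Unit": "Count",
    }],
)
```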