Automation is a vital aspect of DevOps work. Teams benefit from automating manual tasks, such as defining changes to code builds, and from reducing bottlenecks in processes like event prioritization during incident monitoring.

“When beginning the journey of building a new product, we make heavy upfront investments in automation,” SailPoint DevOps Director Marty Bowers said. 

Those investments help Bowers and his team stay nimble and release product quickly, while also granting them the ability to repeat and scale the processes that work for them. 

Though automation is popular among DevOps teams across industries, specificity matters. Each DevOps team has different goals and metrics for success, so its approach to automation will differ.


For instance, DevOps Engineer Jake Newton and his team at Datical sought to move away from manually updating Amazon Machine Images and instead built machine image templates with Packer, an automation-friendly image builder. The team also started performing parallel automated testing.

When adopting a new resource, automation-related or otherwise, the DevOps teams we spoke with stressed the importance of measuring side effects. Mastering a tool doesn’t happen overnight, and production time, product quality and other parts of a business could be negatively impacted if an ineffective solution is chosen. Bowers said DevOps pros should consistently evaluate the influence their latest approaches have on other areas of the business. Learn more about the best practices these DevOps teams follow closely. 

DevOps Best Practices To Know

  • Utilize automation as well as continuous integration and delivery
  • Establish consistent workflows and processes
  • Trust your team to make decisions and changes

SailPoint

Taking risks can bring big rewards, but as a company scales, those risks should be more calculated, as there’s often more at stake. SailPoint DevOps Director Marty Bowers said DevOps teams should be mindful of the effects their choices in new tools and processes have on other key stakeholders.

 

What DevOps best practices have been most impactful for your team?

When beginning the journey of building a new product, we make heavy upfront investments in automation and continuous integration and delivery. These strategies allow the DevOps team to stay lean while getting features into our customers’ hands as quickly and safely as possible. As the team has matured and gone through this cycle multiple times, our focus has moved toward repeatability and reusability of code to expedite the overall process. And of course, we continually iterate on and improve our observability over all our SaaS products.

We’ve moved more from bleeding-edge to cutting-edge when it comes to our methods, tools and strategies.”

 

How does your team balance a need to utilize best practices versus a desire to test new resources?

Over the past several years, I’d say we’ve moved more from bleeding-edge to cutting-edge when it comes to our methods, tools and strategies. I think this transition naturally happens as a team or an organization matures. As our customer base grows, expectations around our Customer Satisfaction Score (CSAT), quality, uptime and stability have also grown. We have to weigh the risk of anything new that could compromise one or more of those expectations against the impact it yields. So being mindful of that risk and thoroughly testing is our only path forward.

 


Datical

Experimentation is a key part of discovering the tools that work best for any DevOps team. Datical DevOps Engineer Jake Newton shared how a discovery process was built into his team’s story point estimation. Newton and his colleagues build extra time into their work processes, giving themselves opportunities to explore more efficient ways of reaching their production goals.

 

What DevOps best practices have been most impactful for your team?

Automated testing and continuous integration. We started with a lot of manual testing and single-branch builds. Over time, we moved to parallel automated testing and multi-branch builds, which reduced our test cycle from around 48 hours to under four. 

We established the same consistent workflows and processes for all components and microservices. We implemented consistent branch naming: all work originates from a Jira ticket, and all implementation is done on a feature branch referencing that ticket. Commits pushed to the branch trigger an automated build of that branch. We also use isolated databases per build for testing.
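To make that convention concrete, here is a minimal sketch of how a build script might enforce it, assuming a hypothetical feature/TICKET-123 branch scheme; Datical’s actual tooling isn’t described at this level of detail, so every name below is illustrative.

```python
import re

# Assumed convention: feature branches look like
# "feature/DAT-4521-short-description", where DAT-4521 is the Jira ticket.
BRANCH_PATTERN = re.compile(r"^feature/(?P<ticket>[A-Z][A-Z0-9]+-\d+)(?:-[\w-]+)?$")


def ticket_from_branch(branch: str) -> str:
    """Return the Jira ticket a branch references, or fail if it is off-convention."""
    match = BRANCH_PATTERN.match(branch)
    if match is None:
        raise ValueError(f"branch {branch!r} does not reference a Jira ticket")
    return match.group("ticket")


def isolated_db_name(branch: str, build_number: int) -> str:
    """Derive a per-build database name so concurrent branch builds never share state."""
    ticket = ticket_from_branch(branch).lower().replace("-", "_")
    return f"test_{ticket}_build_{build_number}"


if __name__ == "__main__":
    print(isolated_db_name("feature/DAT-4521-packer-migration", 87))
    # -> test_dat_4521_build_87
```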

By taking a little bit of extra time with each story, we’re able to chip away at product and non-product tech debt.”

 

How does your team balance a need to utilize best practices versus a desire to test new resources?

We encourage spike work by utilizing branch-based development and ephemeral instances built with tools like Docker. These practices allow our development team to do exploratory testing while still maintaining a stable codebase.
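As one illustration of the ephemeral-instance pattern, the sketch below starts a throwaway Postgres container with the Docker SDK for Python. The SDK, image and credentials are assumptions for the example; the article doesn’t specify how Datical scripts its spikes.

```python
import docker  # Docker SDK for Python, one way to script ephemeral instances


def run_ephemeral_postgres(tag: str = "postgres:16"):
    """Start a throwaway Postgres container for exploratory testing.

    auto_remove=True tells the daemon to delete the container as soon as
    it stops, so spike work leaves nothing behind on the host.
    """
    client = docker.from_env()
    return client.containers.run(
        tag,
        detach=True,
        auto_remove=True,
        environment={"POSTGRES_PASSWORD": "spike"},  # throwaway credentials
        ports={"5432/tcp": None},  # let Docker pick a free host port
    )


if __name__ == "__main__":
    container = run_ephemeral_postgres()
    try:
        print(f"spike database up: {container.short_id}")
        # ...exploratory testing happens here...
    finally:
        container.stop()  # the daemon removes it automatically after this
```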

We also try to pad story points a little so that we can investigate better tools or strategies instead of always working in the same way. Rather than manually updating Amazon Machine Images, we now use Packer templates, for example. We’ve also started transitioning our infrastructure over to Terraform to make it easier to maintain, scale and implement best practices. By taking a little bit of extra time with each story, we’re able to chip away at product and non-product tech debt with each sprint as opposed to trying to address it all at one time.

 


Subsplash

A company’s growth doesn’t occur in a vacuum. As it evolves, so too do the processes needed to carry out its mission. Ben Johnson, software development engineer at Subsplash, can attest to that much. 

“As our organization transitioned from a small team of devs to a larger engineering group, difficulties in collaboration, consistency and testing started to arise,” said Johnson.

How did the engineering team at the Interbay-based startup, which builds mobile apps, websites and other engagement platforms and tools for churches, adapt? According to Johnson, the team incorporated containers into its DevOps strategy, catalyzing quick software development and deployment. With thousands of clients using Subsplash’s tech, preserving such high velocity is crucial.

“We balance the need for best practices and testing new methods through a unified goal of maintaining high quality and velocity, both individually and as a group,” Johnson said.

 

Ben Johnson | Software Development Engineer

What DevOps best practices have been most impactful for your team? 

Our development and operations teams rely on various DevOps best practices, such as Agile methodologies, continuous integration, continuous testing, integrated change management and continuous delivery. The area that stands out as most impactful has been our implementation of containers, which has improved integration testing and enhanced our ability to quickly develop and deploy software to production.

We believe all best practices offer room for improvement.” 

 

How have they evolved over time?

As our organization transitioned from a small team of devs to a larger engineering group, difficulties in collaboration, consistency and testing started to arise. We first started using containers to standardize our development environment and implement integration tests for our microservices. As the number of microservices in our environment grew, it was extremely helpful to abstract away the complexities of a microservice's dependencies — database, cloud services and other microservices — and remove the pain of setting up a development environment to run everything locally. 

Containers also provided a consistent, repeatable and scalable method for writing tests that communicate with real dependencies. Tests can now run both locally and in our CI pipelines. The net result is that developers are able to focus more on the task at hand: delivering quality software through test-driven development and a prompt feedback loop.
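For a sense of what such a test can look like, here is a minimal sketch using the testcontainers library, which is an assumption for illustration; the article doesn’t name Subsplash’s exact tooling.

```python
# A containerized integration test: the database is real, not mocked,
# and the same test runs unchanged on a laptop or in a CI pipeline.
import sqlalchemy
from testcontainers.postgres import PostgresContainer


def test_orders_roundtrip():
    # Spin up a real Postgres instance for the duration of the test.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE orders (id SERIAL PRIMARY KEY, sku TEXT NOT NULL)"
            ))
            conn.execute(sqlalchemy.text(
                "INSERT INTO orders (sku) VALUES ('abc-123')"
            ))
            count = conn.execute(
                sqlalchemy.text("SELECT count(*) FROM orders")
            ).scalar_one()
        assert count == 1
    # The container is torn down here, so every run starts from a clean slate.
```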

 

How does your team balance a need to incorporate best practices into their work with the desire to try and test new methodologies, tools or strategies? 

We incorporate best practices into our work in order to realize the efficiency and process gains from industry wisdom. This unifies our development teams, improves efficiency through standardization and makes code quality and speed measurable. We promote these best practices through documentation and communication with all teams. In addition, best practices related to coding style, linting and testing are enforced through our CI pipeline.
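Enforcement of that kind often boils down to a gate step that fails the pipeline when any check fails. The sketch below shows the shape of such a step; the specific tools named are stand-ins, not Subsplash’s actual stack.

```python
# ci_gate.py -- a generic sketch of a CI enforcement step.
# Any linter or test runner slots into CHECKS the same way.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],  # style and lint rules (assumed tool)
    ["pytest", "-q"],        # test suite (assumed tool)
]


def main() -> int:
    for command in CHECKS:
        print(f"running: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # A nonzero exit fails the pipeline, so off-policy code never merges.
            return result.returncode
    return 0


if __name__ == "__main__":
    sys.exit(main())
```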

We believe all best practices offer room for improvement. As such, we encourage our teams to innovate, research and explore new methods and tools. We balance the need for best practices and testing new methods through a unified goal of maintaining high quality and velocity, both individually and as a group. 

We encourage our teams to innovate, research and explore new methods and tools.”

 

When the desire to test a new tool, strategy or methodology arises, we begin by defining a goal and success criteria. We then experiment with and implement the new method or tool in our work, monitor adoption rate and adjust as needed. We evaluate the success of adoption across the department. Full adoption can stall when people are accustomed to an existing way of doing things and changing habits takes real effort.

In order to prevent friction points, we document and communicate the benefits of a new method and solicit feedback from our peers. Once adoption reaches a majority, we update our policies and CI pipeline enforcements to ensure full adoption and maximum benefit. In our experience, once everyone has had a chance to try the new methods and see the benefits, most people are won over.

 


GOAT

Trust is important in any relationship, but it’s crucial for professionals working in DevOps.

A key aspect of DevOps culture involves teams having enough agency to make decisions and apply changes on their own. Giving team members ownership of projects from planning to post-mortem is integral to the success of any DevOps org. After all, continuous integration and speedy updates are difficult when heavy oversight and lengthy approval processes slow down production.

One LA tech company emphasizing ownership in its DevOps practices is GOAT, a tech platform for global style aficionados looking for the latest apparel, footwear and accessories. Staff Engineer Eric Lesh said engineers at the platform are responsible for the creation and management of their apps, which leads to satisfaction in their roles and pride in their work.

DevOps professionals are continuously implementing new tools and processes in their work. And when a tool is used for the first time, the team members leveraging it are often seen as resident experts in its usage. Lesh said that by trusting the DevOps team to document the efficacy of new resources, GOAT leaders were able to make informed decisions about whether those tools were right for the business.

Eric Lesh | Staff Engineer

What DevOps best practices have been most impactful for your team?

We believe in ownership, which we encapsulate in the DevOps principle that every team can build, deploy and operate their own applications. Engineers love to see their code in production, and this model lets them deliver new features on their own schedules.

We see individual teams experiment with new tools often.”

 

How does your team determine the best new technologies to implement?

Using the ownership model, we see individual teams experiment with new tools often. They develop expertise and work out the kinks on their own, documenting as they go, so the rest of the organization benefits. The strongest, most effective technologies are naturally selected by other teams based on their peers’ experiences.

 

Amount

Mike Esler | VP, Technology Operations

What DevOps best practices have been most impactful for your team? How have they evolved over time?

The most impactful practice is infrastructure as code (IaC). It allows us to unify the change management and approval cycle across development and operations. It also helps us scale, enabling rapid spin-up of new environments in a reliable state. As an AWS customer, we started with CloudFormation. Since then, we have made Terraform our standard, enabling a more cloud-neutral approach to IaaS.

Our initial focus was on getting the infrastructure defined in code and improving our review cycle. But over time we’ve made more and more of our infrastructure immutable. The next big step for us is setting up CI/CD for our infrastructure changes.
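The article doesn’t detail what that pipeline will look like, but one common shape is a plan-then-reviewed-apply gate. The sketch below leans on Terraform’s -detailed-exitcode flag (exit code 2 means changes are pending); the surrounding workflow is an assumption, not Amount’s actual design.

```python
# A sketch of one CI step for infrastructure changes: produce a reviewable
# plan artifact and signal whether any changes exist.
import subprocess
import sys


def plan_infrastructure(workdir: str = ".") -> int:
    # -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes present.
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-out=tfplan"],
        cwd=workdir,
    )
    if result.returncode == 2:
        print("changes detected; tfplan saved for a reviewed 'terraform apply tfplan'")
    elif result.returncode == 0:
        print("infrastructure already matches the code; nothing to apply")
    return result.returncode


if __name__ == "__main__":
    sys.exit(plan_infrastructure())
```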

Ultimately, you can’t avoid risk. You have to manage it.”

 

How does your team balance a need to incorporate best practices into their work with the desire to try and test new methodologies or tools?

You absolutely have to do both. If you misapply new methodologies, tools or strategies, you expose the company to risk: risk of a bug, risk of an outage and risk of a vulnerability. However, burying your head in the sand and pretending that new methods and tools aren’t emerging also exposes the company. You risk not taking advantage of innovation and efficiencies that your competition surely is.

Ultimately, you can’t avoid risk. You have to manage it. For Amount, this starts in our OpsLab environments, which are essentially sandboxes where teams can experiment. Ideally, in this environment, we build up a new set of best practices around a new approach or technology. Eventually, it needs to make its way into production. We try to de-risk it before it gets there by using traffic steering to slowly ramp up the workload over whatever time period we think is appropriate.
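Traffic steering can be implemented many ways; as one illustration, weighted DNS records let a new stack take a growing share of requests. The sketch below uses the boto3 Route 53 API, but the zone, record and target names are hypothetical, and the article doesn’t say which mechanism Amount uses.

```python
import boto3  # AWS SDK for Python


def shift_traffic(zone_id: str, record_name: str, new_weight: int) -> None:
    """Raise the share of traffic steered at the new deployment.

    Weights are relative: with the old record fixed at 100 and the new
    record at new_weight, the new stack receives
    new_weight / (100 + new_weight) of requests.
    """
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "SetIdentifier": "new-stack",  # distinguishes this weighted record
                    "Weight": new_weight,          # Route 53 allows 0-255
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "new.internal.example.com"}],  # hypothetical
                },
            }]
        },
    )


# Example ramp: roughly 1%, then 9%, then 50% of traffic.
# for weight in (1, 10, 100):
#     shift_traffic("Z_EXAMPLE", "app.example.com", weight)
```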

 


Abacus Insights

Abacus Insights might count the ancient counting instrument as its namesake, but that’s about as old-school as it gets for the healthcare data startup. Recipient of $12.7 million in Series A funding announced last summer, Abacus endeavors to provide healthcare insurers with easy-to-access, digestible info by harnessing the power of the cloud, or, as CFO Larry Begley previously told Built In, by “unlocking the value of payers’ vast amounts of insightful data” that’s bound up in old systems.

In this case, product echoes practice: just as the Boston company erases industry-standard silos in the name of transparency and accessibility, the DevOps team at Abacus strives for a similarly frictionless framework. The goal is to keep deployments and workflows humming. Recently, the team illustrated that ethos by implementing a decoupling process to align its platform’s core deployment model with a defined stratification between infrastructure and application code, Cloud Operations Support Engineer Seth Allen said. The result? More efficient abstractions, more predictable software deployments and other benefits. The cumulative gain was clear.

 “Through these efforts, an appreciably complex application deployment has been abstracted to a handful of simple CLI commands,” Allen said. 

Below, Allen elaborated on the core tenets that buoy the team’s integrative, “all-hands-on-deck” attitude toward DevOps. 

 

Seth Allen | Cloud Operations Support Engineer

What DevOps best practices have been most impactful for your team? How have they evolved over time?

We recently finalized an initiative to align our platform’s core deployment model — comprising nearly 20,000 lines of Terraform and over a dozen Helm charts — with a more rigidly defined stratification between infrastructure and application code through a decoupling process. This has provided numerous benefits to many teams.

We’ve been able to leverage more efficient abstractions and design patterns.”

 

We’ve been able to leverage more efficient abstractions and design patterns and have centralized and increased transparency of deployment configurations. We’ve also gotten more granular control over development and testing workflows, all in the name of increasing predictability. Through the application team's efforts to provide its component consumers with even more effective ways of managing their resources, much of the Kubernetes deployment automation has evolved to feature custom deployment configuration, secrets management, command-line arguments and additional time-saving features. Through these efforts, an appreciably complex application deployment has been abstracted to a handful of simple CLI commands.
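As an illustration of that “handful of simple CLI commands” pattern, the sketch below wraps an environment-aware Helm deployment in a tiny command-line tool. The helm flags are standard, but the chart, release and value-file layout are hypothetical, not Abacus Insights’ actual structure.

```python
# deploy.py -- a thin sketch of hiding a complex deployment behind one command.
import argparse
import subprocess


def deploy(environment: str, chart: str, release: str) -> None:
    # "helm upgrade --install" is idempotent: it installs the release if it
    # doesn't exist yet and upgrades it in place if it does.
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--values", f"values/{environment}.yaml",  # per-environment config
            "--namespace", environment,
            "--wait",  # block until the rollout reports healthy
        ],
        check=True,
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deploy the platform to one environment")
    parser.add_argument("environment", choices=["dev", "staging", "prod"])
    parser.add_argument("--chart", default="charts/platform")  # hypothetical layout
    parser.add_argument("--release", default="platform")
    args = parser.parse_args()
    deploy(args.environment, args.chart, args.release)
```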

 

How does your team balance a need to incorporate best practices into their work with the desire to try and test new methodologies, tools and strategies?

There are strong factors at Abacus that facilitate this decision-making process, in addition to intra-team management. Principally, a well-architected, cloud-native platform virtually eliminates the inertia that keeps novel, experimental feature work from taking flight: it reduces staging time and overhead, which is where poor design decisions often take root when development time or established scaffolding is lacking.

The opposite pole of this balancing act is a consistent and thoughtful “all-hands-on-deck” approach to feature grooming and inter-team communication, which establishes a broad yet sane range of implementation parameters and acceptance criteria. Together, these two factors create a tinderbox of development support, needing only a good idea and a purposeful strike to light the spark.
