Subsplash
A company’s growth doesn’t occur in a vacuum. As it evolves, so too do the processes needed to carry out its mission. Ben Johnson, software development engineer at Subsplash, can attest to that much.
“As our organization transitioned from a small team of devs to a larger engineering group, difficulties in collaboration, consistency and testing started to arise,” said Johnson.
How did the engineering team at the Interbay-based startup, creator of mobile apps, websites and other engagement platforms and tools for churches, adapt? According to Johnson, the team incorporated containers into its DevOps strategy, catalyzing quick software development and deployment. With thousands of clients using Subsplash’s tech, preserving such high velocity is crucial.
“We balance the need for best practices and testing new methods through a unified goal of maintaining high quality and velocity, both individually and as a group,” said Johnson.
Ben Johnson
SOFTWARE DEVELOPMENT ENGINEER
What DevOps best practices have been most impactful for your team?
Our development and operations teams rely on various DevOps best practices, such as Agile methodologies, continuous integration, continuous testing, integrated change management and continuous delivery. The area that stands out as most impactful has been our implementation of containers, which has helped improve integration testing and enhanced our ability to quickly develop and deploy software to production.
We believe all best practices offer room for improvement.”
How have they evolved over time?
As our organization transitioned from a small team of devs to a larger engineering group, difficulties in collaboration, consistency and testing started to arise. We first started using containers to standardize our development environment and implement integration tests for our microservices. As the number of microservices in our environment grew, it was extremely helpful to abstract away the complexities of a microservice's dependencies — database, cloud services and other microservices — and remove the pain of setting up a development environment to run everything locally.
Containers also provided a consistent, repeatable and scalable method for writing tests that communicate with real dependencies. Tests can now run both locally and in our CI pipelines. The net result is that developers are able to focus more on the task at hand: delivering quality software through test-driven development and a prompt feedback loop.
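To make that pattern concrete, here is a minimal sketch of an integration test that talks to a real containerized dependency rather than a mock and runs unchanged on a laptop or in a CI pipeline. The interview does not name Subsplash’s tooling, so testcontainers, PostgreSQL, SQLAlchemy and pytest here are illustrative assumptions, not its actual stack.

```python
# Minimal sketch, assuming pytest + testcontainers + SQLAlchemy (illustrative
# choices; the interview does not name Subsplash's tooling). The test talks to
# a real PostgreSQL instance running in a throwaway container, so the same
# test runs on a developer laptop and in a CI pipeline.
import pytest
import sqlalchemy
from testcontainers.postgres import PostgresContainer


@pytest.fixture(scope="session")
def db_engine():
    # Start a disposable Postgres container for the test session and hand
    # back a SQLAlchemy engine pointed at it; the container is torn down
    # automatically when the session ends.
    with PostgresContainer("postgres:15") as postgres:
        yield sqlalchemy.create_engine(postgres.get_connection_url())


def test_database_round_trip(db_engine):
    # A trivial query against the real dependency stands in for the kind of
    # integration test that would otherwise need a hand-built local setup.
    with db_engine.connect() as conn:
        assert conn.execute(sqlalchemy.text("SELECT 1")).scalar() == 1
```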
How does your team balance a need to incorporate best practices into their work with the desire to try and test new methodologies, tools or strategies?
We incorporate best practices into our work in order to realize the efficiency and process gains from industry wisdom. This unifies our development teams, improves efficiency through standardization and lets us track metrics on code quality and speed. We promote these best practices through documentation and communication with all teams. In addition, best practices related to coding style, linting and testing are enforced through our CI pipeline.
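As one hedged illustration of that kind of CI gate, a pipeline step might run the style, lint and test checks in sequence and fail the build if any of them fail. The specific tools named below (ruff, pytest) are assumptions, not necessarily the ones Subsplash enforces.

```python
# Sketch of a CI enforcement step: run style, lint and test checks and fail
# the build on the first failure. The tools (ruff, pytest) are illustrative
# assumptions, not Subsplash's actual pipeline.
import subprocess
import sys

CHECKS = [
    ["ruff", "format", "--check", "."],  # coding style
    ["ruff", "check", "."],              # linting
    ["pytest", "-q"],                    # tests
]


def main() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("check failed:", " ".join(cmd), file=sys.stderr)
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```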
We believe all best practices offer room for improvement. As such, we encourage our teams to innovate, research and explore new methods and tools. We balance the need for best practices and testing new methods through a unified goal of maintaining high quality and velocity, both individually and as a group.
We encourage our teams to innovate, research and explore new methods and tools.”
When the desire to test a new tool, strategy or methodology arises, we first define a goal and success criteria. We then experiment with and implement the new method or tool in our work, monitor its adoption rate and adjust as needed. We evaluate the success of adoption across the department. Full adoption can stall when we’ve grown accustomed to doing something a certain way and changing our own habits requires a high level of effort.
To prevent friction points, we document and communicate the benefits of a new method and solicit feedback from our peers. Once adoption reaches a majority, however, we update our policies and CI pipeline enforcements to ensure full adoption and maximum benefit. In our experience, once everyone has had a chance to try the new methods and see the benefits, most people come around.
GOAT
Trust is important in any relationship, but it’s crucial for professionals working in DevOps.
A key aspect of DevOps culture involves teams having enough agency to make decisions and apply changes on their own. Giving team members ownership of projects from planning to post-mortem is integral to the success of any DevOps org. After all, it’s difficult for continuous integration and speedy updates to take place when heavy oversight and lengthy approval processes slow down production.
One LA tech company emphasizing ownership in its DevOps practices is GOAT, a tech platform for global style aficionados looking for the latest apparel, footwear and accessories. Staff Engineer Eric Lesh said engineers at the platform are responsible for the creation and management of their apps, which leads to satisfaction in their roles and pride in their work.
DevOps professionals are continuously implementing new tools and processes in their work. And when a tool is used for the first time, the team members leveraging it are often seen as resident experts in its usage. Lesh said that by trusting the DevOps team to document the efficacy of new resources, GOAT leaders were able to make informed decisions about whether those tools were right for the business.
Eric Lesh | Staff Engineer
What DevOps best practices have been most impactful for your team?
We believe in ownership, which we encapsulate in the DevOps principle that every team can build, deploy and operate their own applications. Engineers love to see their code in production, and this model lets them deliver new features on their own schedules.
We see individual teams experiment with new tools often.”
How does your team determine the best new technologies to implement?
Using the ownership model, we see individual teams experiment with new tools often. They develop expertise and work out the kinks on their own, documenting as they go, so the rest of the organization benefits. The strongest, most effective technologies are naturally selected by other teams based on their peers’ experiences.
Amount
Mike Esler | VP, TECHNOLOGY OPERATIONS
What DevOps best practices have been most impactful for your team? How have they evolved over time?
The most impactful practice is infrastructure as code (IaC). This allows us to unify the change management and approval cycle across development and operations. It also helps us scale, enabling rapid spin-up of new environments in a reliable state. As an AWS customer, we started with CloudFormation. Since then, we have made Terraform our standard, enabling us to take a more cloud-neutral approach to IaaS.
Our initial focus was on getting the infrastructure defined in code and improving our review cycle. But over time we’ve made more and more of our infrastructure immutable. The next big step for us is setting up CI/CD for our infrastructure changes.
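As a hedged sketch of that next step, a CI gate for infrastructure changes might chain the standard Terraform commands and stop before apply so the plan can be reviewed. The commands below are stock Terraform CLI; the pipeline itself is an assumption, not Amount’s actual setup.

```python
# Sketch of a CI gate for Terraform changes (an assumption about what a
# "CI/CD for infrastructure" step could look like, not Amount's pipeline).
import subprocess
import sys

STEPS = [
    ["terraform", "init", "-input=false"],
    ["terraform", "fmt", "-check", "-recursive"],          # formatting gate
    ["terraform", "validate"],                             # configuration sanity check
    ["terraform", "plan", "-input=false", "-out=tfplan"],  # reviewable plan artifact
]


def main() -> int:
    for step in STEPS:
        print("+", " ".join(step))
        if subprocess.run(step).returncode != 0:
            return 1
    # `terraform apply tfplan` would run in a separate, approval-gated stage
    # once the plan has been reviewed.
    return 0


if __name__ == "__main__":
    sys.exit(main())
```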
Ultimately, you can’t avoid risk. You have to manage it.”
How does your team balance a need to incorporate best practices into their work with the desire to try and test new methodologies or tools?
You absolutely have to do both. If you misapply new methodologies, tools or strategies, you expose the company to risk: risk of a bug, risk of an outage and risk of a vulnerability. However, burying your head in the sand and pretending that new methods and tools aren’t emerging also exposes the company. You risk not taking advantage of innovation and efficiencies that your competition surely is.
Ultimately, you can’t avoid risk. You have to manage it. For Amount, this starts in our OpsLab environments, which are essentially sandboxes where teams can experiment. Ideally, in this environment, we build up a new set of best practices related to a new approach or technology. Eventually, it will need to make its way into production. We try to de-risk it before it gets there, using traffic steering to slowly build workload over whatever time period we think is appropriate.
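The interview doesn’t describe the traffic-steering mechanism itself, so the sketch below only shows the general shape of a gradual ramp; the hypothetical set_weight and healthy callbacks stand in for whatever load-balancer or DNS weighting and monitoring hooks a team actually wires in.

```python
import time


def ramp_canary(set_weight, healthy, steps=(1, 5, 25, 50, 100), pause_s=600):
    """Gradually steer traffic to a new workload, backing out on failure.

    set_weight(pct) and healthy() are hypothetical callbacks standing in for
    real load-balancer/DNS weighting and health checks; the step sizes and
    bake time are arbitrary and chosen per rollout.
    """
    for pct in steps:
        set_weight(pct)      # shift this share of traffic to the new workload
        time.sleep(pause_s)  # let the new weight bake
        if not healthy():    # e.g. error-rate or latency checks
            set_weight(0)    # steer everything back to the old workload
            raise RuntimeError(f"canary rollout halted at {pct}% traffic")
```

In practice, the callbacks would wrap whatever the platform exposes, such as weighted DNS records or load balancer target weights.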
Abacus Insights
Abacus Insights might count the ancient counting instrument as its namesake, but that’s about as old-school as it gets for the healthcare data startup. Recipient of $12.7 million in Series A funding announced last summer, Abacus endeavors to provide healthcare insurers with easy-to-access, digestible information by harnessing the power of the cloud, or, as CFO Larry Begley previously told Built In, by “unlocking the value of payers’ vast amounts of insightful data” that’s bound up in old systems.
In this case, product echoes practice: just as the Boston company erases industry-standard silos in the name of transparency and accessibility, the DevOps team at Abacus strives for a similarly frictionless framework. The goal is to keep deployments and workflows humming. Recently, the team illustrated that ethos by decoupling its platform’s core deployment model into clearly separated infrastructure and application code, Cloud Operations Support Engineer Seth Allen said. The result? More efficient abstractions, more predictable software deployments and other benefits. The cumulative gain was clear.
“Through these efforts, an appreciably complex application deployment has been abstracted to a handful of simple CLI commands,” Allen said.
Below, Allen elaborated on the core tenets that buoy the team’s integrative, “all-hands-on-deck” attitude toward DevOps.
Seth Allen
CLOUD OPERATIONS SUPPORT ENGINEER
What DevOps best practices have been most impactful for your team? How have they evolved over time?
We recently completed an initiative to decouple our platform’s core deployment model, comprising nearly 20,000 lines of Terraform and more than a dozen Helm charts, into a more rigidly defined separation between infrastructure and application code. This has provided numerous benefits to many teams.
We’ve been able to leverage more efficient abstractions and design patterns.”
We’ve been able to leverage more efficient abstractions and design patterns, and we’ve centralized deployment configurations and made them more transparent. We’ve also gained more granular control over development and testing workflows, all in the name of increasing predictability. Through the application team’s efforts to give its component consumers even more effective ways of managing their resources, much of the Kubernetes deployment automation has evolved to feature custom deployment configuration, secrets management, command-line arguments and additional time-saving features. Through these efforts, an appreciably complex application deployment has been abstracted to a handful of simple CLI commands.
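As a rough sketch of the kind of thin wrapper Allen describes, a hypothetical platformctl deploy command might reduce a Helm-based deployment to a single invocation; the subcommand, flags and values layout below are illustrative assumptions, not Abacus’s actual tool.

```python
# Hypothetical "platformctl" wrapper showing how a complex deployment can be
# reduced to a few CLI commands; the subcommand, flags and chart layout are
# assumptions for illustration, not Abacus's tooling.
import argparse
import subprocess
import sys


def deploy(env: str, release: str, chart: str) -> int:
    # Per-environment values files keep deployment configuration centralized
    # and transparent; secrets are expected to be sourced inside the cluster
    # rather than passed on the command line.
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "-f", f"values/{env}.yaml",
        "--namespace", env,
        "--wait",
    ]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    parser = argparse.ArgumentParser(prog="platformctl")
    subcommands = parser.add_subparsers(dest="command", required=True)
    deploy_cmd = subcommands.add_parser("deploy", help="deploy a release to an environment")
    deploy_cmd.add_argument("--env", required=True)
    deploy_cmd.add_argument("--release", required=True)
    deploy_cmd.add_argument("--chart", default="./charts/app")
    args = parser.parse_args()
    if args.command == "deploy":
        sys.exit(deploy(args.env, args.release, args.chart))
```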
How does your team balance a need to incorporate best practices into their work with the desire to try and test new methodologies, tools and strategies?
There are strong factors at Abacus in general that facilitate this decision-making process, in addition to intra-team management. Principally, a well-architected, cloud-native platform removes most of the inertia that keeps novel, experimental feature work from taking flight: it cuts the staging time and overhead where poor design decisions often take root, whether from a lack of development time or a lack of established scaffolding to build from.
The opposite pole of this balancing act is taking a consistent and thoughtful “all-hands-on-deck” approach to feature grooming and inter-team communication to establish a broad, yet sane, range of implementation parameters and acceptance criteria. Together, these two factors create a tinderbox of development support, needing only a good idea and a purposeful strike to light the spark.