When it comes to QA best practices, the team over at InfluxData prefers to keep things simple.
“The most important best practice is not to have a separate QA team,” Barbara Nelson, head of application engineering, told Built In San Francisco.
Instead of making QA a separate function within the engineering department, InfluxData places the onus for developing quality code squarely on the shoulders of its developers, who write their own automated tests. According to Nelson, this practice has led to higher-quality code and a greater feeling of ownership over the products developers work on. It also helps the team maintain its breakneck release pace, with Nelson estimating that InfluxData ships somewhere between 10 and 20 deployments per day.
InfluxData is certainly not the first company to move away from QA teams and likely won’t be the last, especially given the case Nelson makes for putting developers in charge of quality assurance.
What’s the most important best practice your QA team follows, and why?
Nelson: We do not have a QA team and instead hold the development team responsible for the quality of what they deliver. Every team is responsible for developing the tests to ensure that the code they deliver is reliable and works as expected. This mindset has led to higher-quality code and gives developers a much stronger sense of ownership of the product. One thing that has helped us make this shift is the evolution of test frameworks — like cypress.io, for example — and continuous delivery pipelines, which make it easy for developers to write automated tests that are run on every code change.
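To make that concrete, a test written with a framework like cypress.io might look something like the sketch below. The page, selectors and labels are hypothetical, not taken from InfluxData's actual suite; the point is that the developer who ships the feature also writes the end-to-end check that runs on every code change.

```typescript
// A minimal sketch of the kind of Cypress end-to-end test a developer might
// write alongside a feature. The URL, selectors and labels are hypothetical.
describe('dashboard page', () => {
  it('shows the list of dashboards for a signed-in user', () => {
    // Visit the page under test (baseUrl would be set in the Cypress config).
    cy.visit('/dashboards');

    // Assert that the page rendered the expected heading and a non-empty list.
    cy.contains('h1', 'Dashboards');
    cy.get('[data-testid="dashboard-list"]')
      .children()
      .should('have.length.at.least', 1);
  });
});
```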
How do you determine your release criteria, and can you give us an example of your typical process?
We have an automated CD pipeline and are constantly deploying new features into our cloud service. We have sets of tests that are triggered during the pipeline; if the tests fail, the deployment fails and the team is notified.
For example, we have a set of unit tests that run prior to merging code to master. We have another set of tests that run when the master branch is deployed to staging, then again in pre-production and then in production. If the tests fail, we roll back the deployment. So the release criteria are pretty simple: if the tests pass, you can release. We also rely heavily on feature flags and encourage developers to release very small code changes at a time. We can have the code in production but hidden behind a feature flag until we are ready to expose the new feature to our customers.
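A feature-flag gate like the one Nelson describes can be as simple as the sketch below: the new code ships to production with every deploy but stays hidden until the flag is switched on. The flag client interface, flag name and render functions here are illustrative assumptions, not InfluxData's actual tooling.

```typescript
// Hypothetical flag client; in practice this would be backed by a flag
// provider or a config store.
interface FlagClient {
  isEnabled(flag: string): boolean;
}

function renderUsageDashboard(flags: FlagClient): string {
  // The new feature's code is already deployed, but customers only see it
  // once the flag is enabled.
  if (flags.isEnabled('new-usage-dashboard')) {
    return renderNewUsageDashboard();
  }
  return renderLegacyUsageDashboard();
}

function renderNewUsageDashboard(): string {
  return '<new usage dashboard>';
}

function renderLegacyUsageDashboard(): string {
  return '<legacy usage dashboard>';
}
```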
Project requirements can change rapidly, particularly in Agile development cycles. How do you ensure the QA team stays up to date on shifting requirements, and how do you plan ahead to ensure changes are handled smoothly?
We avoid part of this problem by not having a separate QA team. Every feature request needs to have clearly defined acceptance criteria before the developer starts working on the feature, and we use the acceptance criteria to define the tests that need to be written for that feature.
Defining the acceptance criteria typically happens shortly before the developer starts work on the feature, so we avoid having stale acceptance criteria if the requirement changes. If a requirement changes during development, we update the acceptance criteria and the developer updates their code and their tests to match the new requirements.
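As an illustration, a single acceptance criterion can translate almost line for line into a test. The sketch below assumes a hypothetical criterion, "a user who enters an invalid email sees an inline error and the form is not submitted," and uses Cypress-style assertions; the feature and selectors are invented for the example.

```typescript
// Acceptance criterion (hypothetical): an invalid email shows an inline
// error and the signup form is not submitted.
describe('signup form', () => {
  it('shows an inline error for an invalid email and does not submit', () => {
    cy.visit('/signup');

    cy.get('[data-testid="email-input"]').type('not-an-email');
    cy.get('[data-testid="submit-button"]').click();

    // The acceptance criterion becomes the assertions:
    cy.contains('[data-testid="email-error"]', 'Please enter a valid email');
    cy.url().should('include', '/signup'); // still on the form, not submitted
  });
});
```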
