Marie Kondo is famous for developing her “KonMari” philosophy, a method of decluttering living spaces by removing items from your life that do not spark joy. The KonMari method emphasizes two key practices: tidying objects by category (for example, tidying all clothes first rather than everything in the bedroom) and then discarding those items that do not spark joy.

While the method originally refers to the physical act of tidying up, its elegantly simple way of determining value and removing clutter makes it applicable to software testing as well.

How can KonMari work with test suites, and what are the benefits of applying it to areas such as end-to-end (E2E) testing? Complete code coverage is not necessarily the best approach to software testing, nor the most efficient use of your resources.

Reconsider your approach to software testing and see how you can maximize your resources and remove clutter by using the KonMari method.

Applying the KonMari Approach to Software Testing

To start, how does one define “joy” in terms of software testing? Consider joy as value: Which tests are truly valuable?

Test suites are often excellent candidates for decluttering because any mature test suite will contain an unruly number of tests, which vary in stability and in how much risk to quality they actually mitigate, and which often overlap with one another.

This bloat is most common in E2E test suites (though unit and API test suites are not immune). E2E tests are often added for nearly every conceivable workflow in an application, and as the application’s footprint grows, the number of E2E tests needed to keep up grows geometrically. This can quickly lead to test suite clutter, even if the growth at first seems benign, or even useful for the sake of comprehensiveness.

However, a steady stream of small, innocuous tests added to a suite will increase the time it takes to complete the testing process. I’ve seen E2E test suites that take a week to complete and reliably produce a 20 percent failure rate. Such a test suite sparks no joy at all.

The trade-off needs to be worth it: A test must provide enough net value to the test suite and overall software package to justify the extra time it takes to run. Testers need to ask themselves, “What is the net value of this test?”

Measuring the criticality of a test (which obviously plays into its value) can be done in several ways, depending on the project. Consider whether the code works as defined by requirements and whether it produces expected outputs or otherwise benefits stakeholders, whether those are developers, testers, or product owners.

Then weigh this against the perceived cost of a test: the time it takes to produce, run, and maintain it, as well as other considerations, such as the tester burnout that comes from tests accumulating in a suite over time.

Tests must have a purpose. Tests created simply to meet arbitrary key performance indicators (KPIs), like 100 percent code coverage, are not critical. A tester should be able to articulate what value a given test provides to the software. If no meaningful business requirement is compromised when a test fails, the test is likely not critical. And if a test is not critical to the codebase and costs more time and effort to develop and maintain than the value it returns to the suite, it should be discarded. Look at test suites with a critical eye: prioritize aggressively and keep only those tests that spark joy.
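To make that trade-off concrete, here is a minimal sketch of how a team might score a test, assuming you can assign rough estimates for risk, overlap, and cost. Every scale, weight, and example number below is an invented assumption for illustration, not an established metric:

```python
# A rough, illustrative scoring heuristic. The scales, weights, and example
# numbers are assumptions for demonstration -- they are not an established
# formula from the KonMari method or any testing framework.

def net_value(risk_mitigated: float,   # 0-10: severity of what a failure would catch
              unique_coverage: float,  # 0-1: share of that risk no other test covers
              cost_to_run: float,      # 0-10: runtime, flakiness, CI resources
              cost_to_maintain: float  # 0-10: upkeep effort, churn, tester burnout
              ) -> float:
    """Estimate the value a test provides minus what it costs to keep around."""
    value = risk_mitigated * unique_coverage
    cost = 0.5 * (cost_to_run + cost_to_maintain)
    return value - cost


# A slow, flaky E2E test that mostly duplicates existing API-level coverage:
score = net_value(risk_mitigated=7, unique_coverage=0.2,
                  cost_to_run=8, cost_to_maintain=6)
print(f"net value: {score:.1f}")  # negative: a strong candidate for removal
```

The exact numbers matter far less than the exercise: forcing an explicit estimate of value and cost turns “does this test spark joy?” into an answerable question.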

But after you’ve decluttered, what about the tests that have shown net positive value? Part of the KonMari approach also involves appropriately categorizing what you choose to keep. It’s very common to silo test suites by their location in a codebase (grouping all unit tests, all integration tests, and all API tests separately) or by their utility (grouping all tests for each method of a class together, regardless of the type of test). Both conventions tend to produce a single monolithic test suite that takes a long time to run.

Instead, consider breaking test suites into groups that focus on different functional areas: for example, all tests covering checkout on an e-commerce site, or all tests related to user engagement. This allows for a much more nimble and agile approach to testing. Those groups can then be run at different stages of the software development life cycle as needed: lower-level tests may run on every build, whereas end-to-end tests may run only before the software is deployed.
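As a minimal sketch of what such grouping might look like, assuming a Python suite driven by pytest (the marker names, test names, and flows below are hypothetical examples, not from any real project):

```python
# test_storefront.py -- a minimal sketch assuming a pytest-based suite.
# The marker names, test names, and flows are hypothetical examples.
import pytest

# Register these markers in pytest.ini so pytest can flag typos:
#     [pytest]
#     markers =
#         checkout: e-commerce checkout flows
#         engagement: user-engagement features

@pytest.mark.checkout
def test_guest_can_complete_checkout():
    # ...drive the checkout flow end to end...
    assert True  # placeholder for real assertions

@pytest.mark.engagement
def test_returning_user_sees_recommendations():
    # ...drive the engagement flow...
    assert True  # placeholder for real assertions

# Run the build-critical group on every commit:
#     pytest -m checkout
# Run the full set, including slower flows, before deploying:
#     pytest -m "checkout or engagement"
```

Grouping by functional area this way lets each stage of the pipeline pull in exactly the tests it needs, rather than paying for the whole monolithic suite on every run.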

Not only does this speed up the pace of testing during development, since developers no longer have to wait for large test suites to run, but it also helps reduce testing clutter. By running only the tests that are valuable and relevant, testers spend less time sifting through results to find the pieces that matter.

 

There May Be Resistance

It can be very difficult to convince teams to remove tests. The “if it ain’t broke, don’t fix it” mantra still pervades software development and software testing, so it can be an uphill battle to go against that old adage. Within testing, there is the added perception that there’s no such thing as “too much” testing.

However, software testing has to provide value to a project without sacrificing productivity. Large test suites can take a long time to run and an even longer time to maintain. And because software is constantly evolving, every change is a reason to revisit the value of each test: Does it still provide value to the project in some way?

Perhaps that value comes in the form of debugging support, assurance for developers that exceptions are properly handled, or verification of a defined business requirement. If a test no longer provides any of these, then cutting it has the additional benefit of reducing test suite runtime and maintenance time.

There are other objections to removing test cases, such as the sunk cost fallacy (“I’ve already invested so much in this, and I’m unwilling to throw that investment away”) and competing perceptions of a test’s value. That perception could be tied to frequency (does the issue affect most users or only appear rarely?), location (the same bug occurring on different pages can be perceived very differently), or a completely different factor. Defining what value means for different stakeholders is key to determining which tests spark joy and which do not, and it will deepen your understanding of how those stakeholders perceive your test suite.

By applying the KonMari approach to software testing, the testing process can be streamlined. Not only does this make test suites easier and much less costly to maintain, but it reduces the amount of time they take to run without removing meaningful quality assurance value from your test suite. The KonMari approach emphasizes organization and simplification — two concepts that can radically change the way software companies approach testing.
