Some things in the software industry never go out of style. Even as software development and testing have changed over the years, one testing technique remains in vogue: integration testing. It has survived the decline of the waterfall methodology, the rise of agile and the shift to today’s DevOps-driven development workflows.
But as a result of the constantly shifting landscape of software testing, integration testing is often confused with other types of testing, such as unit and end-to-end testing.
“It’s a common thing to blur the lines between different forms of testing, but they all have different and important — but mutually exclusive — purposes,” said Aaron Schneider, senior solutions architect at Qualitest, a software testing consultancy.
What Is Integration Testing?
Integration testing is the layer of testing that occurs after unit testing and ensures that different parts of the product work correctly together. Integration testing is especially important for large and complex software projects because there are more interactions between different pieces of the codebase, but it is used across a wide variety of projects.
On the software testing pyramid, unit tests sit at the very bottom. They are a project’s first line of defense against bugs and are very self-contained in what they test. They are the easiest type of tests to write and are usually the most numerous in every project. In terms of volume, unit tests are written in almost a one-to-one ratio with the code, Schneider said.
At the top of the pyramid are end-to-end tests, the most time-consuming type of test. End-to-end tests are usually performed manually through the user interface, after major stages of development have finished.
While unit tests probe the nitty-gritty logic within individual sections of code and end-to-end tests operate at a high level, integration tests reside in the middle and share some qualities of each.
Just like unit tests, integration tests are mostly written by developers and can be automated, but they test something entirely different. Integration tests are about observing how sections of code behave together and “making sure that they speak the same language,” Schneider said.
“I don’t care how you’ve implemented your module — I just want to make sure that we can talk to each other.”
He gave an example of a software program that interfaces with smartwatches to get the time. The capacity for miscommunication exists even for such a simple data exchange. For instance, the program might expect only to receive the time, but instead receives only the date. Or the watch might send the full date and time, but the program only expects data formatted for just the time. Then there’s also the issue of accounting for time zones when sending and receiving data across them.
While unit tests would be busy checking that the smartwatch’s value is calculated correctly, integration tests only care that the program is able to receive whatever data the smartwatch sent and in the expected format.
“I don’t care how you’ve implemented your module — I just want to make sure that we can talk to each other,” Schneider said about the purpose of integration tests.
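The smartwatch scenario above can be sketched as a small contract check. This is a minimal illustration, not code from the article: the payload format, the `parse_watch_time` function and the test names are all hypothetical stand-ins for whatever interface the program and the watch actually agree on.

```python
import re
import unittest

def parse_watch_time(payload: str) -> tuple:
    """Parse an 'HH:MM:SS' string from the watch into (hours, minutes, seconds)."""
    match = re.fullmatch(r"(\d{2}):(\d{2}):(\d{2})", payload)
    if match is None:
        raise ValueError(f"unexpected time format: {payload!r}")
    return tuple(int(part) for part in match.groups())

class SmartwatchContractTest(unittest.TestCase):
    def test_program_accepts_watch_time_payload(self):
        # The integration test only cares that the watch's output matches
        # the format the program expects -- not how either side computes
        # the time internally.
        payload = "13:45:07"  # stand-in for a real watch response
        self.assertEqual(parse_watch_time(payload), (13, 45, 7))

    def test_program_rejects_date_only_payload(self):
        # A watch that sends a date instead of a time should fail loudly,
        # which is exactly the miscommunication Schneider describes.
        with self.assertRaises(ValueError):
            parse_watch_time("2024-01-15")
```

A unit test, by contrast, would dig into how the watch computes the time in the first place; this test never looks inside either module.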
Integration tests are also distinct from end-to-end tests because they are automated and meant to be written and run soon after code is available for testing.
“If integration testing can be part of the nightly automation testing, that is the goal,” Schneider said.
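One common way to fold integration tests into a nightly automation run is to gate the slower suite behind a flag the scheduled CI job sets. This is a sketch of that convention, not a prescription: the `RUN_INTEGRATION` variable name and the test body are assumptions.

```python
import os
import unittest

# Hypothetical convention: the nightly CI job sets RUN_INTEGRATION=1,
# so slower integration tests stay out of the fast, per-commit unit run
# but still execute automatically every night.
RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"

@unittest.skipUnless(RUN_INTEGRATION, "integration tests run in the nightly job")
class NightlyIntegrationTest(unittest.TestCase):
    def test_services_exchange_order_data(self):
        # A real test would exercise two live components here.
        self.assertTrue(True)
```

During the day the class reports as skipped in seconds; the nightly job flips the environment variable and runs it for real.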
Some Projects Need More Integration Testing Than Others
It’s hard to find software projects that don’t need some form of integration testing. That’s because whenever a system interfaces with another system — whether internal to the project or with an external collaborator — it can run into integration problems.
“I can imagine some sort of demo projects or like a classroom project where you make a tic-tac-toe game that doesn’t have a UI,” said Joseph Moore, senior engineer at VMware, about software programs that don’t need integration testing. “But I think, in reality, that tends to be rare.”
Integration testing is more important to some industries and types of projects. Moore once consulted for a financial institution on how to create new software services that would be integrated with the company’s existing software. The company’s older systems meant they needed to pay particular attention to the integration process.
“This thing was very particular to the kinds of things you sent it,” Moore said. “You had to send it exactly the right kind of query — if you didn’t, you didn’t get what you needed for the application. So we wrote a lot of integration tests.”
Integration testing is also needed in places like software-hardware interfaces and software programs that are extremely segmented. Moore said one of the scariest situations he faces is when he rolls onto an existing project that has no tests.
“They say, ‘Yeah, but it hasn’t run in years, so don’t even bother,’ or ‘It’s so outdated that it fails all the time, but we just ignore it,’” Moore said. “And the confidence is just shot at that point — how do you know what is working and what’s not working?”
“The worst situation that a team or a company can be in is that they’re afraid to change their software, they can’t adapt to change.”
Well-curated tests can also help orient new developers and help them make sense of the codebase when people churn through a company. And having the right tests in place helps developers feel confident about writing more code, without fearing that the existing code will become unstable.
“The worst situation that a team or a company can be in is that they’re afraid to change their software, they can’t adapt to change,” Moore said. “At the minimum, testing gives the team confidence that if they change something and they have unexpected consequences, tests will fail and they can go and investigate it.”
But companies are usually constrained by the amount of time they’re willing to have developers and testers spend on testing, so they have to be at least somewhat picky about what type of tests to write and what areas of code get test coverage.
Moore said these types of decisions can be made by checking in with how confident developers feel about different parts of the application integration and asking other stakeholders on the project. Customers, other developers who have worked on the project before and project managers can be great resources for figuring out which areas of the application need more testing.
“We’ve also often found that the most knowledgeable subject matter experts on a product are the customer support folks because those are the people who are getting the phone calls and emails,” Moore said.
Flipping the Testing Pyramid
The traditional testing pyramid is based on the idea that unit tests are the quickest and easiest tests to write and run. Unit tests sit at the bottom of the pyramid, integration tests in the middle and end-to-end tests on top. But that conventional wisdom is starting to be turned on its head.
“I think that we’re at a time right now where the opposite approach probably makes more sense,” said Omed Habib, vice president of marketing at Tonic, a test data automation platform. “End-to-end testing actually, in my opinion, takes priority — the challenge, of course, is if something breaks you don’t know exactly where it broke, but at least you know something’s broken.”
Software development these days is usually fed into continuous integration and continuous deployment infrastructures, which help developers move code to production quickly after it’s written, with releases as often as several times a day. Habib said this type of development workflow is better suited for tests that can be written quickly and have more coverage.
“If you’re deploying a lot faster, you’re introducing a lot of risk,” he said. “You have to make sure that your testing and your QA can also execute really, really fast, and they have to execute in a way where they can catch bugs.”
Although unit tests don’t take long to write, software developers have to write a lot of them to get good code coverage because each one exercises only a small piece of code. Integration tests have historically been more difficult for developers to write.
“All the time, it seems like, there’s some new piece of testing technology that enables a new area of integration testing that was hard to do before.”
“It’s really messy in the middle, really messy,” Moore said. “It’s certainly harder to pick the layer that goes into those middle layers and segment them off and focus on them.”
Integration testing for code-database interactions once required writing complicated testing code because it was important to isolate that interaction from the rest of the codebase while testing. Developers would have to make sure the user interface doesn’t start running, that irrelevant functions don’t get called and that the database has the right kind of data needed for testing — and repeat that for each integration test.
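One pattern the tooling described above enables is running each test against a fresh, isolated database instead of hand-rolling that plumbing. The sketch below uses Python's built-in sqlite3 in-memory database as a stand-in; the `orders` table and `save_order` function are hypothetical examples, not anything from the article.

```python
import sqlite3
import unittest

def save_order(conn: sqlite3.Connection, customer: str, total: float) -> int:
    """Hypothetical data-access function whose database interaction is under test."""
    cur = conn.execute(
        "INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total)
    )
    conn.commit()
    return cur.lastrowid

class OrderDatabaseIntegrationTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database isolates the test: no UI starts up, no
        # unrelated functions run, and the schema is seeded fresh each time.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
        )

    def tearDown(self):
        self.conn.close()

    def test_order_round_trips_through_the_database(self):
        order_id = save_order(self.conn, "ada", 19.99)
        row = self.conn.execute(
            "SELECT customer, total FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        self.assertEqual(row, ("ada", 19.99))
```

The test exercises real SQL against a real (if ephemeral) database, which is the substance an integration test is supposed to cover.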
Thankfully, many tools have appeared in the last few years that support developers writing integration tests. Moore said the development of more tools that help with integration tests might help integration tests gain more prominence within the testing stack in the future.
“All the time, it seems like, there’s some new piece of testing technology that enables a new area of integration testing that was hard to do before,” he said. “Those technologies are enabling more kinds of integration tests and are leading to more integration tests.”
In fact, there is debate in the software testing field over whether the traditional testing pyramid should now be changed to a “testing trophy,” Moore said. In a testing trophy framework, unit tests are no longer the most important and numerous. Instead, integration tests form the bulk of a project’s tests because they’re increasingly seen as having the best of both worlds. Just like end-to-end tests, each integration test can cover a lot of code and — with the help of more tooling — integration tests are also easy to write and automate, just like unit tests.
“People are talking about this quite a bit right now,” he said. “In my opinion, it’s less the rise of integration tests and more of the rise of integration test support — things that make integration tests easier to write, fast, very well scoped.”
Integration Testing Needs the Right Number of Stubs
Because integration tests examine the interactions between different sections of code, quite a lot of code from different parts of the codebase can be involved, making integration tests complex and difficult to write. To simplify matters and focus on just the interfaces between code instead of the code sections themselves, developers and testers turn to “stubs” to stand in for actual, production-quality code.
Stubs can be used whenever dealing with the actual code during testing creates a bottleneck or could cause problems. For example, an application may send out email notifications to users when they submit orders, but people shouldn’t receive emails every time a test is run. To fix that problem, developers may stub out the email notification service so it doesn’t trigger during the test.
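The email example above might be stubbed roughly like this. It is a minimal sketch using Python's standard unittest.mock; the `EmailService` class and `submit_order` function are hypothetical illustrations of the pattern, not the article's code.

```python
import unittest
from unittest import mock

class EmailService:
    """Hypothetical notifier; the real implementation would send an actual email."""
    def send(self, to: str, subject: str) -> None:
        raise RuntimeError("would send a real email")

def submit_order(emailer: EmailService, customer_email: str) -> str:
    # ...order-processing logic would go here...
    emailer.send(customer_email, "Order confirmed")
    return "confirmed"

class OrderNotificationTest(unittest.TestCase):
    def test_order_confirms_without_sending_real_email(self):
        # Stub out the email service so running the test never emails anyone,
        # while still verifying the notification interface was used correctly.
        stub = mock.Mock(spec=EmailService)
        status = submit_order(stub, "customer@example.com")
        self.assertEqual(status, "confirmed")
        stub.send.assert_called_once_with("customer@example.com", "Order confirmed")
```

The stub records the call instead of delivering mail, so the test still checks that the two pieces speak the same language.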
Figuring out what code can and cannot be stubbed is important for writing effective integration tests: not enough stubs and your integration tests can trigger a bunch of unnecessary and annoying actions, even contaminating the production environment; too many stubs and you risk not testing anything of substance. When it comes to databases, for instance, you usually want to use an actual database with data in it.
“The quality of your test is only as good as how well you were able to mimic a production environment,” Habib said. “One of the biggest bottlenecks today in the testing world, especially integration or end to end, is failing to have test data that actually mimics a production environment.”
“The quality of your test is only as good as how well you were able to mimic a production environment.”
But it’s definitely possible to create a good balance that makes integration testing go smoothly while finding bugs efficiently.
“With an experienced team, the building of those dummies and placeholders is not such a complicated task,” Schneider said. “The more complicated thing is understanding what the integration is that we’re testing so that we can actually build that properly.”
As companies embrace patterns of development more focused around microservices and the use of containers, integration testing is only going to become more important for making sure highly segmented parts of the codebase can work together correctly.
Proper integration testing is also important between certain products. Schneider pointed to Apple as an example of a company doubling down on making good integration a priority so that every application can work with every device and consumers aren’t stuck without a way to use different products together.
“At every stage, in every product, there are interactions and integrations in one way or another,” Schneider said. “Whether that’s internally with its own software, between software and hardware, or between different devices in the real world, the integration is incredibly important.”