There is no such thing as a free lunch.

That old saying has many applications, tech included. For an example, look no further than the concept of time complexity, which is how we understand how long the code or algorithm we write will take to process its input and run to completion.

Why the talk about free lunch? Well, to the naked eye, the amount of time to run a function is imperceptible, so we are liable to forget it takes time at all.

Working with many functions, however, can cause a traffic jam — especially when those functions are chained together and accept many types of inputs that run many times over. The more complex the system, say a system with thousands of functions, the more this jam can become an issue if not handled properly.

In this article, we’ll talk about time complexity, ways to track it, and different types of time complexity that an engineer can track.

## What Is Time Complexity?

Time complexity refers to how much computer time it takes to run an algorithm based on the inputs the algorithm receives.


Time complexity is not to be confused with run time. Run time measures how long an algorithm takes on one specific input; time complexity describes a range of outcomes, from worst to best, which helps us understand the efficiency of a particular algorithm before we ever run it.
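To see the gap between one run time and a range of outcomes, here is a minimal Python sketch (the function name `linear_search` and the timing setup are illustrative, not from the article): the same algorithm, on inputs of the same size, can finish after one comparison or after a million.

```python
import time

def linear_search(items, target):
    """Scan the list front to back; return the index of target, or -1."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

data = list(range(1_000_000))

# Best case: the target is the very first element -- one comparison.
start = time.perf_counter()
linear_search(data, 0)
best = time.perf_counter() - start

# Worst case: the target is absent -- a million comparisons.
start = time.perf_counter()
linear_search(data, -1)
worst = time.perf_counter() - start

print(worst > best)  # True: same algorithm, same input size, very different run times
```

A single measurement like `best` or `worst` is a run time; the statement that the search takes between one and n comparisons is its time complexity.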

Understanding the best- and worst-case scenarios of an algorithm has a downstream effect on the health of a system. Not planning for time complexity can cause logjams in the rest of a system that relies on that algorithm. In real life, this has consequences.

For example, if a hospital algorithm is used to decide whether or not someone needs immediate heart surgery, then a logjam caused by a process that can't sort its data effectively could be a matter of life or death.

Now, most work won’t determine life or death, but my hope is that example makes clear that the work that relies upon understanding time complexity has consequences. Time complexity helps us not just look at how much power an algorithm needs to run, but understand the range of outcomes that could happen when we run it. In short, time complexity helps an engineer pick the right tool for the problem the algorithm is trying to solve.

## How Does an Algorithm Sort Data?

We’re in an information-dense world, and using an algorithm to find data is something all of us can understand. In fact, you’ve probably used a search engine to find this website.

The data you use to query a search engine has to match with data within reach of that search engine. This is where sorting comes into play. The input, or search query, has to find a match, and the algorithm — in this case, the sort — is looking to make that match.

There are different types of sorting, and each has its own pros and cons. Each type also has a different time complexity, which affects how useful it is for different types of data. Let’s talk about a few types of sorting here to make the concept clearer.


• Tree sort — In a tree sort, the algorithm places every single data point in the system into a tree structure and then walks through it in order. Think of it as the function looking down each branch of a tree to make sure it has the right data it's looking for. This approach isn't optimal if you are working with a data store that is dynamic, but if it is static or has very few insertions, this sort is useful because the tree is easy to reuse.
• Bubble sort — In a bubble sort, the algorithm steps through the data repeatedly, comparing neighboring values and swapping any that are out of order until everything is sorted, usually numerically. This means that, although the first pass is computationally intense, each subsequent pass over the mostly sorted data is faster until new data is entered. This is useful for small amounts of data because a bubble sort can reveal whether a list is complete or missing values based on how the data comes out sorted. The algorithm is also pretty simple, making it a great teaching tool for the concept.
• Merge sort — A merge sort divides the data into different parts and works on them individually. By breaking the problem into smaller chunks, it can work through data much faster than the above two methods since they look at the data as one big object.
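The two most familiar sorts above are easiest to compare in code. Here is a minimal Python sketch of both (these are my own illustrative implementations, not code from the article): the bubble sort sweeps adjacent pairs, while the merge sort splits the list and merges the sorted halves.

```python
def bubble_sort(items):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs."""
    items = list(items)  # sort a copy
    for n in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(n):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # nothing moved: the list is already sorted
            break
    return items

def merge_sort(items):
    """Split the list in half, sort each half, then merge the results."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # take the smaller head each time
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whatever half remains

print(bubble_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
print(merge_sort([5, 2, 4, 1]))   # [1, 2, 4, 5]
```

Both produce the same output; the difference that matters is how their work grows as the input grows, which is exactly what the next section's notation captures.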

Above, I mentioned certain methods being faster and slower. The way you qualify which of these is more efficient, since you won't know the inputs in advance, is through big O notation.

## What Is Big O Notation?

Big O notation is a way of describing how an algorithm's running time grows with the size of its input, written with the letter n, so we can understand the best, worst, and average time an algorithm takes to run. In other words, big O notation is a descriptor. It allows us to look at how algorithms could perform relative to each other, which is important for understanding sorts, as they have variable inputs. Every sort is different and has a use case for which it is most effective, so you need a way to know which type best fits your data.

If we look at a sample big O notation, such as O(n^2), n represents the input size. This formula tells us that for whatever uses this algorithm, the time required grows with the square of the input size.
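One way to see what O(n^2) means in practice is to count operations directly. This small Python sketch (the helper name `count_pairwise_comparisons` is illustrative) shows the signature of quadratic growth: doubling the input size quadruples the work.

```python
def count_pairwise_comparisons(n):
    """Count the operations a naive nested loop performs over n items."""
    count = 0
    for i in range(n):
        for j in range(n):  # every item is compared against every item
            count += 1
    return count

print(count_pairwise_comparisons(10))  # 100
print(count_pairwise_comparisons(20))  # 400 -- double the input, four times the work
```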

Think of it this way: We need a way to communicate to other people what the complexity is, and this method helps you do so.

## How Do We Track Time Complexity?

There are three ways to note time complexity. Remember, the input is variable, so it doesn’t make sense just to track it by one run. To cover all of your bases, it’s important to note your complexity in three ways: best time, average time, and worst time.


• Best Time Complexity — Define the input for which the algorithm takes the minimum time; in the best case, you calculate the lower bound of an algorithm. For example, in a bubble sort where the data is already sorted, the algorithm makes one pass, finds nothing to swap, and stops — the best-case scenario. Here, the big O notation for best-case performance is O(n), since a single pass through the data is enough.
• Average Time Complexity — In the average case, take all random inputs, calculate the computation time for each, and then divide by the total number of inputs. For example, in that same bubble sort, if you add up the time for every possible input and divide by the number of inputs, you'll get the average. Note that for a bubble sort, this includes both the first, labor-intensive sort and the faster subsequent ones as new data is added. For average-case performance, the big O notation is O(n^2), since the time it takes to sort still dominates.
• Worst Time Complexity — Define the input for which the algorithm takes the maximum time; in the worst case, you calculate the upper bound of an algorithm. For example, in a bubble sort, this is data in reverse order, where the algorithm has to swap at every comparison before it can finish. Here, the big O notation is O(n^2) because the time complexity includes the full amount of time it takes to sort.
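The three cases above can be made concrete by instrumenting a bubble sort to count its comparisons. This is a sketch under my own assumptions (an early-exit bubble sort; the names are illustrative), not code from the article:

```python
import random

def bubble_sort_comparisons(items):
    """Bubble sort with early exit; return the number of comparisons made."""
    items = list(items)  # sort a copy
    comparisons = 0
    for n in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(n):
            comparisons += 1
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # nothing moved: the list is sorted, so stop
            break
    return comparisons

n = 100
best = bubble_sort_comparisons(list(range(n)))          # already sorted
worst = bubble_sort_comparisons(list(range(n, 0, -1)))  # reverse order
average = bubble_sort_comparisons(random.sample(range(n), n))

print(best)                      # 99: one O(n) pass, then the early exit fires
print(best <= average <= worst)  # True: a random input lands in between
```

The sorted input needs only n - 1 comparisons, the reversed input needs n(n - 1)/2, and a shuffled input falls somewhere between — the lower bound, upper bound, and average the list above describes.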


## Understand and Track Time Complexity

You’re very likely to get interview questions about time complexity, but there’s a reason for that. Search is hard, and takes a lot of computational resources to maintain a healthy experience for everyone. Think about it this way: How often would you use Google if you didn’t know how long a search was going to take?

Then think about this question from the other side. What if the team were unable to allocate the right resources to the function so that one search didn’t hang up the rest of the experience for everyone?

Understanding time complexity and how to notate with big O is important for solving these kinds of problems.
