Let’s talk Big O Notation

An intro to time complexity and why it matters

Kevin Glasgow
6 min read · Jun 26, 2021

Just before or after learning about algorithms, I am sure you have at least heard mention of taking time complexity or Big O notation into consideration, especially when dealing with very large or complicated data sets. But what are time complexity and Big O, and why should I care?

What is time complexity?

Each and every algorithm that we create takes a certain amount of time to run or complete. Pretty obvious, right? But looping through an array of 10 items will complete a whole lot faster than, say, an array of 10,000 items. This is our simple introduction to computational time complexity: the number of operations needed based on the size of the input. Think about how many more times our algorithm would need to loop through the larger array.

But what if we also needed to store a running sum of all those values? This is where computational space complexity comes into play. Whether we are simply storing the sum of all the values or creating a new array with modified values, that ‘new’ data needs to be stored somewhere until the algorithm has completed. Computational space complexity refers to the amount of storage or memory needed for temporary variables, new data structures, and so on, in each pass of the loop. But how do we measure it?
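To make that difference concrete, here is a minimal sketch (my own illustration, with hypothetical function names) of the same data handled two ways: keeping a single running sum only ever needs a constant amount of extra memory, while building a new array of modified values needs extra space for every element.

```typescript
// Constant extra space: no matter how long the array is, we only ever
// store one additional number.
function runningSum(arr: number[]): number {
  let sum = 0;
  for (const value of arr) {
    sum += value;
  }
  return sum;
}

// Linear extra space: the new array grows right along with the input.
function doubledValues(arr: number[]): number[] {
  const doubled: number[] = [];
  for (const value of arr) {
    doubled.push(value * 2);
  }
  return doubled;
}

console.log(runningSum([1, 2, 3, 4]));    // 10
console.log(doubledValues([1, 2, 3, 4])); // [ 2, 4, 6, 8 ]
```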

What is Big O Notation?

Well, to put it simply, we use Big O Notation as our way to measure how efficiently our algorithm will run. In other words, it represents the algorithm's time complexity as stated above. Put more formally, it is a mathematical notation that describes how the number of operations needed to complete the algorithm grows in relation to the size of the input. That is a lot to take in and can best be represented by this graph.

Big O Cheat Sheet

Since we are software developers who want to write quality code, we should always be taking Big O into consideration when writing our algorithms. As the size of our data increases, we want to keep our complexities, or number of operations, as low as possible so that our code stays as efficient as possible. It is worth noting that we would ideally stay out of the red zone, if possible. Let's jump right into it and take a look at the different time complexities, starting with the best case scenario.

Constant Time Complexity O(1)

Constant time, represented as O(1), means the algorithm always runs in the same amount of time no matter the size of the data set. Let's take a look at the example below.
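Here is a minimal sketch of what a constant-time operation can look like (the function and array names are just for illustration): grabbing the first element of an array is a single step regardless of how many items the array holds.

```typescript
// One operation, no matter how long the array is.
function firstItem(arr: number[]): number | undefined {
  return arr[0];
}

const smallArray = [1, 2, 3];
const largeArray = Array.from({ length: 10_000 }, (_, i) => i + 1);

console.time("small");
console.log(firstItem(smallArray)); // 1
console.timeEnd("small");

console.time("large");
console.log(firstItem(largeArray)); // 1 — same answer, same amount of work
console.timeEnd("large");
```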


As you can see, we have called the same algorithm on each of our arrays and received the same runtime as well as the same answer. Our runtime remained the same even though we have increased the size of our data set.

Logarithmic Time Complexity O(log n)

Logarithmic time, represented as O(log n), will run in proportion to the logarithm of the input size. Let’s take a look at the example below.
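A classic logarithmic-time algorithm is binary search, and a minimal sketch of it is shown below (the array and variable names are just for illustration). Because each pass cuts the remaining search range in half, the number of operations grows with the logarithm of the input size rather than with the input size itself.

```typescript
// Binary search on a sorted array: each pass halves the range we still
// have to look at, which is what gives O(log n) behaviour.
function binarySearch(arr: number[], target: number): number {
  let start = 0;
  let end = arr.length - 1;

  while (start <= end) {
    const middle = Math.floor((start + end) / 2);

    if (arr[middle] === target) return middle;    // found it — return its location
    if (arr[middle] < target) start = middle + 1; // target is above the middle
    else end = middle - 1;                        // target is below the middle
  }

  return -1; // no match found
}

const smallArray = Array.from({ length: 10 }, (_, i) => i + 1);
const largeArray = Array.from({ length: 10_000 }, (_, i) => i + 1);

console.log(binarySearch(smallArray, 7)); // 6
console.log(binarySearch(largeArray, 7)); // 6 — same answer, only a handful more passes
```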


As you can see, we have called the same algorithm on each of our arrays and received the same answer, although it is clear that our runtimes are different due to the size of the input. If we find where the number is located, we return its location. Should it be lower than our middle, our middle minus one now becomes the end to help narrow down the list. The same goes for when it is greater than the middle: our middle plus one becomes our start. This is done until a match is found, if any.

Linear Time Complexity O(n)

Linear time, represented as O(n), will run in proportion to the size of the input, which is why we consider it linear time. Let's take a look at the example below.
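A simple example of linear time is a linear search, sketched below (the function and array names are just for illustration). In the worst case we have to check every element once, so the work grows in direct proportion to the length of the array.

```typescript
// Linear search: one check per element, so n elements means up to n operations.
function linearSearch(arr: number[], target: number): number {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // found it — return the index
  }
  return -1; // checked every element without a match
}

const smallArray = Array.from({ length: 10 }, (_, i) => i + 1);
const largeArray = Array.from({ length: 10_000 }, (_, i) => i + 1);

console.log(linearSearch(smallArray, 10));     // 9
console.log(linearSearch(largeArray, 10_000)); // 9999 — far more elements to walk through
```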


As you can see, we have called the same algorithm again on each of our arrays. The runtime on each is clearly different: we have two different lengths but still need to complete the same objective. As the data set increases, the runtime increases with it, simply due to the sheer amount of data that needs to be processed in this case.

Quadratic Time Complexity O(n²)

Quadratic time, represented as O(n²), will grow in proportion to the square of the data size. The number of operations is squared in this specific case, and it could even be cubed, but we won't go there. Let's take a look at the example below.
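The usual giveaway for quadratic time is a loop nested inside another loop, as in the minimal sketch below (the function name and operation counter are just for illustration): the inner loop runs in full for every pass of the outer loop, so the total work is n × n.

```typescript
// Nested loops: the inner loop runs arr.length times for each pass of the
// outer loop, so the operation count is arr.length * arr.length.
function countOperations(arr: number[]): number {
  let operations = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      operations += 1;
    }
  }
  return operations;
}

console.log(countOperations([1, 2, 3]));                                // 9
console.log(countOperations(Array.from({ length: 100 }, (_, i) => i))); // 10000
```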


As you can see, we have called the same algorithm on each of our arrays. This one takes into account the number of operations per loop, and we have two of them. When you see a loop within another loop, it is usually an obvious indicator that you are in quadratic time. Also, by counting the number of operations we are able to see how much more one extra loop can do to runtimes given the data size.

Exponential Time Complexity O(2ⁿ)

Exponential time, represented as O(2ⁿ), will, as you guessed it, grow exponentially. The number of operations needed to finish roughly doubles with each addition to the data set. Take a look at the example below.
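The classic example here is the naive recursive Fibonacci algorithm, sketched below (the numbers passed in are just for illustration): every call spawns two more calls, so the amount of work roughly doubles each time n goes up by one.

```typescript
// Naive recursive Fibonacci: each call makes two more recursive calls,
// which is what produces the exponential blow-up.
function fibonacci(n: number): number {
  if (n < 2) return n; // base cases: fib(0) = 0, fib(1) = 1
  return fibonacci(n - 1) + fibonacci(n - 2);
}

console.log(fibonacci(10)); // 55
console.log(fibonacci(20)); // 6765
console.log(fibonacci(30)); // 832040
console.log(fibonacci(40)); // 102334155 — noticeably slower than the calls above
```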


As you can see, we have called the classic Fibonacci sequence algorithm on our four different numbers. I would rather not get into the complexities of it, but each number in the sequence is the sum of the two preceding numbers, starting from 0 and 1. What is important though is taking a look at the difference in runtimes just by increasing each number by 10. You could say that they grew exponentially…

Factorial Time Complexity O(n!)

Factorial time, represented as O(n!), is possibly the worst runtime you could have on your algorithm, as it will most likely crash depending on the size of the data being given. The product of all positive integers less than or equal to n determines our runtime. This is truly a horrible runtime and should be avoided at all costs, as it can quickly grow beyond anything your machine can finish!
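A textbook factorial-time task is generating every permutation of a list, as in the minimal sketch below (the function name is just for illustration): a list of n items has n! possible orderings, so adding a single item multiplies the amount of work.

```typescript
// Generating every ordering of a list: n items produce n! permutations,
// so each extra item multiplies the amount of work.
function permutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items];
  const result: T[][] = [];
  for (let i = 0; i < items.length; i++) {
    const rest = [...items.slice(0, i), ...items.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([items[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length);    // 6  (3!)
console.log(permutations([1, 2, 3, 4]).length); // 24 (4!)
// Just 13 items would already mean over six billion permutations.
```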

I would like to thank you for reading along and hope that you have learned something new! Keep an eye out for more articles in the future!

If you would like to view a copy of these examples to play around with for yourself or just for reference, you can find the link for the GitHub Repo here.


Kevin Glasgow

Full Stack Software Engineer | Passion for Frontend and Design | Dirt Rider & Snow Glider | www.linkedin.com/in/kevin-glasgow/