Big O notation

Prince Anyaoha
5 min read · Jan 31, 2022

Big O notation is something that comes up a lot in software engineering. If you do not know why, you will in a few minutes. It's a big part of data structures and algorithms, which are an important part of computer science. I like to argue that everything in a programming language is either a data structure or an algorithm (we can argue in the comments 🙂).

Big O notation is an algebraic notation that describes the relationship between the cost of a function (usually in terms of time and space) and its argument(s). Alternatively, we can say it is an algebraic notation that describes the complexity of a function. The complexity of a function is simply the rate of change of the cost of the function with respect to the magnitude of its arguments. The two most common types of complexity are time complexity and space complexity: in the case of time, the cost is the time the function takes to run, and in the case of space, the cost is the memory the function takes up to store variables and constants. There’s some high school math here, so buckle up 🙂.

Let's say we have a function, and we want to attach to it a global value that describes its time complexity. We cannot do this using seconds or milliseconds because those would vary a lot for the exact same code: the actual time a function takes to run can be affected by the specifications of the computer, the state of memory, and a lot of other factors. If you try using performance from the window object to calculate the milliseconds it took a function to run, you might find that running it repeatedly gives different results even on the same computer, and the time-evaluating code is itself a function, which makes things a lot more tricky. Using Big O notation, we can work out the time complexity by evaluating the code without even needing to run it; we do this by counting the processes or simple operations involved in running the function.
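For instance, here is a rough sketch of that kind of measurement (the function doSomeWork and the input size are made up purely for illustration):

```javascript
// Rough sketch: timing code with performance.now() from the window object.
// Running this repeatedly prints slightly different durations,
// even for the exact same code on the same machine.
function doSomeWork() {
  let total = 0;
  for (let i = 0; i < 1e8; i++) {
    total += i;
  }
  return total;
}

const start = performance.now();
doSomeWork();
const end = performance.now();
console.log(`Took ${end - start} milliseconds`);
```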

Let's say we have a function
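Something like this, for example (the name add is just an illustrative choice):

```javascript
function add(a, b) {
  return a + b; // one simple operation, no matter how big a and b are
}
```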

This function has one simple operation, the addition of the parameters a and b, so no matter how big the numbers a and b are, we only run one operation. The number of operations in the function is constant no matter the magnitude of the arguments. The Big O notation for this is O(1), which basically means constant time (remember, the ‘time’ here is a function of the operations in the function, not seconds). The O in the notation is just there to show that this is Big O notation; the value of concern is what is inside the parentheses. We can describe this by saying the number of operations in the function is multiplied by 1 as the magnitude of the arguments increases or decreases.

Let’s say we have another function that is a bit more interesting
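For example, a function along these lines (addUpTo is an assumed name, but the body matches the description below):

```javascript
function addUpTo(n) {
  let sum = 0;                    // one operation: the assignment of sum
  for (let i = 0; i <= n; i++) {  // one assignment of i, then a comparison and an incrementation each pass
    sum += i;                     // an addition and an assignment each pass
  }
  return sum;
}

console.log(addUpTo(2)); // 3
```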

This function takes an argument and returns the sum of all integers from 0 to that number. Evaluating it, we can say the first line contains one operation, the assignment of the variable sum. The second line contains about 4 operations: the assignment of the variable i, the comparison between i and n, and the incrementation. The incrementation involves two operations, an assignment and an addition; i++ just means i = i + 1. The third line contains two operations (sum += i). But some of these operations happen more than once depending on what n is. The comparison in the for loop, the incrementation of i, and the incrementation of sum all run n times, so if n happened to be 2, we would have a total of 12 operations: 5 in the loop, which runs twice, and 2 outside the loop, which always run once no matter the value of n.

The exact number of these operations does not really matter; we just want to know how changing the argument affects it. So we can say we have (o * n) + 2 operations in the function, where o is the number of operations that are repeated (in this case, the ones in the for loop) and n is the argument, which is also the number of times the for loop runs. For every value of n, the operations in the for loop run n times and the ones outside the for loop run once, which means they are constant. In calculating the complexity of a function, we do not consider constants because, in the grand scheme of things, they are negligible. One way to think about this is to consider the significance of the constants as the argument keeps getting bigger and bigger: for example, if the argument is a million, we would have five million operations due to the loop, and it would not matter if we counted the 2 outside because they are negligible. The time complexity of this function would be written as O(n), which means linear time. We can describe this by saying the number of operations changes linearly as the magnitude of the argument changes, or that the number of operations in the function has a linear relationship with the magnitude of the argument.

Another example of a function that has linear time is
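Something along these lines, assuming a function called countUpTwice (the name is just for illustration):

```javascript
function countUpTwice(n) {
  // first loop runs n times
  for (let i = 0; i <= n; i++) {
    console.log(i);
  }
  // second loop also runs n times
  for (let j = 0; j <= n; j++) {
    console.log(j);
  }
}
```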

This function takes a number as an argument and logs the integers from 0 to that number twice. It has two for loops, but what really matters is how the argument affects the number of times those loops run: for every value of n, each loop runs n times, which gives roughly 2n operations, and since we drop constant multipliers, its time complexity is still O(n).

Let’s take another example
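A sketch of such a function might look like this (the name logAllPairs is assumed; the parameter is called arr1 to match the example below):

```javascript
function logAllPairs(arr1) {
  for (let i = 0; i < arr1.length; i++) {
    // for each element, the inner loop goes over the whole array again
    for (let j = 0; j < arr1.length; j++) {
      console.log(arr1[i], arr1[j]);
    }
  }
}

logAllPairs([1, 2, 3]); // logs 9 pairs: 3 * 3
```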

This function, as its name suggests, takes an array as an argument and logs every possible combination of the values in the array in pairs. There are two loops, one nested in the other. The magnitude of the argument, in this case, is the length of the array because it determines how many times each of the loops runs. Let’s take some practical examples: if the length of arr1 is 1, the inner loop runs once; if it were 2, it would run 4 times (two times twice); if it were 3, it would run 9 times. Do you notice a pattern here? The code in the inner loop, which is basically the only operation in the function, runs n² times, where n is the length of the array. This means for every array passed, the number of operations in the function grows with the square of the length of the array. The time complexity is O(n²), which is called quadratic time, a kind of polynomial time.

Functions can also have logarithmic expressions in their time complexity, like O(log n) or O(n log n). We will talk about these in a second part, alongside space complexity and the complexity of some tricky functions and computer science algorithms, as this is getting long. The second part is here. I write on software engineering, so you can follow for that. Happy coding 🙂.
