When we develop an algorithm, we need to analyze its performance and compare the efficiency of different solutions to a problem.
There are two important criteria for analyzing performance:
1. Time – the program should run as fast as possible
2. Space (Memory) – the program should use as little memory as possible
These two criteria are crucial for programmers to ensure that their applications run properly and efficiently. This is where Big O notation enters the picture. So let's learn more about it.

Asymptotic Notations and Why Big O?
Functions come in various forms: polynomial, exponential, logarithmic, and so on. We need to simplify these functions so that we can understand how a function behaves in its simplest form. To represent the growth of a function in its simplest form, we use asymptotic notations.
There are three asymptotic notations:
1) Big O notation – represents the upper bound of a function
2) Omega notation – represents the lower bound of a function
3) Theta notation – represents the tight (exact) bound of a function
In mathematics, we cannot always simplify a function to an exact closed form, so we approximate it with an upper bound or a lower bound. When we can find an exact simplification, we use the tight bound.
In practice, most algorithms can be simplified, so a tight bound could often be given. But the convention is to use Big O notation to represent the complexity of an algorithm, since we cannot assume that every algorithm can be simplified to an exact result.
What is Big O notation?
The Big O notation is a way of describing how long it takes for an algorithm to run. In other words, it is a way of comparing the efficiency of different solutions to a problem.
With Big O notation, we express the runtime in terms of how quickly it grows relative to the input, as the input gets progressively larger.

What is Time and Space Complexity?
It’s hard to get the exact runtime of an algorithm and express it in seconds. It depends on the speed of the processor, what else the computer is running, etc. So, we measure how quickly the runtime grows.
Big O notation uses the size of the input, which we call “n”. So we can say things like the runtime grows “on the order of the size of the input” O(n) or “on the order of the square of the size of the input” O(n^2). This is time complexity.
Similarly, space complexity specifies the total amount of space or memory required to execute an algorithm as a function of the size of the input.
Space can be constant or variable. Constant space is a fixed amount of space occupied by the algorithm regardless of the input. Variable space is the extra memory an algorithm consumes during its execution, which grows with the input.
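As a minimal sketch (the function names here are just for illustration), the first function below uses a fixed number of variables regardless of n, while the second builds a list whose size grows with n:

def constant_space(n):
    total = 0                  # one extra variable, regardless of n -> O(1) extra space
    for i in range(n):
        total += i
    return total

def variable_space(n):
    squares = []               # this list grows to n elements -> O(n) extra space
    for i in range(n):
        squares.append(i * i)
    return squares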
Complexity Comparison of Big O Notation
The fastest and ideal running time for any algorithm is constant time, or O(1). This ideal runtime is rarely achieved.
In practice, the runtime of an algorithm depends on n, the size of the input, and on the number of operations required for each input item.
Here is the order from best to worst time complexity of an algorithm.
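O(1) (best) → O(log n) → O(√n) → O(n) → O(n log n) → O(n²) → O(n³) → O(2ⁿ) (worst)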

Rules of Big O notation
Drop the constants
Suppose we have O(2n) or O(n/2 + 10); we drop the constant factors and numeric terms and just call it O(n).
Drop the less significant terms
When we have something like O(n³ + 2n² + 1), we keep only the highest-order term and drop all the less significant terms. So it becomes O(n³).
As of now, this seems abstract. Let’s look at a few examples on how to find complexity.
Constant Time Complexity: O(1)
In constant running time, the algorithm always takes the same amount of time to execute, regardless of the input size. Let’s analyze a program to add two variables.
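The original listing is not shown here, so the following is an assumed sketch of a program that adds two variables:

def add(a, b):
    result = a + b    # one statement: 1 unit of time
    return result     # one statement: 1 unit of time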

We assume that each statement of the program takes 1 unit of time. So, the time complexity for this program will be O(1).

Let's look at another example. In this program there are three statements, and each statement takes one unit of time, so the program takes 3 units of time. The time complexity will therefore be O(3), which can be written as O(1), i.e. constant time.
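The original listing is not shown; based on the three variables a, b and temp discussed below, it is most likely a swap of two numbers, sketched here:

a, b = 10, 20   # example starting values (assumed)
temp = a        # statement 1: 1 unit of time
a = b           # statement 2: 1 unit of time
b = temp        # statement 3: 1 unit of time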

Now let's find out the space taken by the program. Three variables are used: a, b and temp, and each of them consumes 1 word of memory. (A word is a unit of measurement for memory space.) Since we are using a fixed amount of space, the space complexity is constant, that is O(1).

Linear Time Complexity: O(n)
Let's use the example of a program that prints all the items of a list. Here, 'n' is the number of items, i.e. the length of the list. As a starting point, we assume each statement takes 1 unit of time.
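A sketch of such a program (the exact listing from the original is not shown, so names are assumed):

def print_items(items):
    n = len(items)       # 1 unit of time
    i = 0                # 1 unit of time
    while i < n:         # the loop condition is checked n + 1 times
        print(items[i])  # executed n times
        i += 1           # executed n times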

But observe the statement inside the loop: it is not executed just once. Although it takes 1 unit of time, the same statement is executed repeatedly, n times. The loop condition itself is checked n + 1 times, because one extra check is needed at the end for the loop to terminate.

Since there are n items in the list, this function runs in O(n) time (or "linear time"). If the list has 10 items, we print 10 times. If it has 1,000 items, we print 1,000 times.
The space taken will be constant, O(1), since we are not using any extra data structures (linked list, hash map, etc.).
Quadratic Time Complexity: O(n²)
Below, we have a program to add two matrices. The condition of the first for loop is checked n + 1 times, and everything inside it is executed n times. Since the second loop is nested inside the first, each of its statements is multiplied by the n iterations of the outer loop, so the innermost statement runs n * n times. This gives a time complexity of O(n²).
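A sketch of the matrix addition, assuming two n × n matrices stored as lists of lists (the names are assumed):

def add_matrices(A, B, n):
    C = [[0] * n for _ in range(n)]      # result matrix: n x n extra space
    for i in range(n):                   # outer loop: n iterations
        for j in range(n):               # inner loop: n iterations for each i
            C[i][j] = A[i][j] + B[i][j]  # executed n * n times
    return C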

The space complexity will be O(n*n), as we are using an extra result matrix C with n rows and n columns.
Cubic Time Complexity: O(n³)
We have a program to multiply two matrices. As before, the condition of the first for loop is checked n + 1 times, and everything inside it is executed n times. Since the loops are nested three levels deep, each statement in the innermost loop is multiplied by the iterations of both outer loops. This gives a time complexity of O(n³).
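A sketch of the matrix multiplication, assuming A is m × n and B is n × n, so the result C has m rows and n columns (for square matrices, m = n, the innermost statement runs n · n · n = n³ times):

def multiply_matrices(A, B, m, n):
    C = [[0] * n for _ in range(m)]           # result matrix: m x n extra space
    for i in range(m):                        # runs m times
        for j in range(n):                    # runs n times for each i
            for k in range(n):                # runs n times for each (i, j)
                C[i][j] += A[i][k] * B[k][j]
    return C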
The space complexity will be O(m*n), as we are using an extra result matrix C with m rows and n columns.

Square Root Time Complexity: O(√n)
We have a program to check whether a number is prime or not. Observe the loop: it iterates through the numbers from 2 up to the square root of n. Hence the time complexity will be O(√n).
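The original listing is not shown; a sketch of such a primality check looks like this:

def is_prime(n):
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):  # about √n iterations
        if n % i == 0:                      # a divisor was found, so n is not prime
            return False
    return True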

The space complexity will be O(1).
One more example where the time complexity is O(√n):
i = 0
while i * i < n:  # the loop stops once i² reaches n, so it runs about √n times
    i += 1
Logarithmic Time Complexity: O(log n)
We have the binary search program as an example.
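The original listing is not shown; a standard iterative version on a sorted list is sketched below (names assumed):

def binary_search(items, target):
    low, high = 0, len(items) - 1   # two variables track the current search range
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid              # target found
        elif items[mid] < target:
            low = mid + 1           # discard the left half
        else:
            high = mid - 1          # discard the right half
    return -1                       # target is not in the list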

In the while loop, at each step of the search we split the input in half, successively reducing the size of the problem:
n, n/2, n/4, n/8, …, n/2^k
Clearly we are done when only one element remains, so we want the number of steps k such that n/2^k ≤ 1, which gives k ≈ log₂ n. So, the time complexity will be O(log n).
Space complexity is O(1) for the iterative approach. This is because we need two variables to keep track of the range of elements that are to be checked. No other data is needed.
If the binary search uses the recursive approach, the space complexity will be O(log n), because the recursive calls are stacked in memory.

A few more examples of loops with logarithmic time complexity, where the loop variable is multiplied or divided by 2 on each iteration:
for i in range(log n):    # pseudocode: the body runs log n times
i = 1
while i < n: i = i * 2    # i doubles each pass, so about log n passes
i = n
while i >= 1: i = i // 2  # i halves each pass, so about log n passes
Linearithmic Time Complexity: O(n log n)
Below is the pseudocode for merge sort.
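The original pseudocode is not shown; a sketch of merge sort in Python is given instead:

def merge_sort(items):
    if len(items) <= 1:                # base case: zero or one element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # sort the left half
    right = merge_sort(items[mid:])    # sort the right half
    return merge(left, right)          # merge the two sorted halves

def merge(left, right):
    merged = []                        # auxiliary list that grows to n elements
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])            # copy any remaining elements
    merged.extend(right[j:])
    return merged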

Merge sort always divides the array into two halves and takes linear time to merge them. The time complexity of merge sort is O(n log n), as it divides the input array into two halves, calls itself for each half, and then merges the two sorted halves. The space complexity of merge sort is O(n), since all elements are copied into an auxiliary array of size n.
Exponential Time Complexity: O(2ⁿ)
When a recursive function makes more than one recursive call per invocation, the time complexity can become exponential. Let's take the example of a Fibonacci program. Whenever the Fibonacci function is called, the problem is broken down into two smaller subproblems, and we keep solving smaller but identical problems until we reach the base cases.
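The original listing is not shown; a sketch of the naive recursive version is:

def fib(n):
    if n <= 1:                      # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)  # two recursive calls per invocation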

For the value 5, a call tree like the one sketched above already contains 15 calls, and the number of calls roughly doubles at each level, so for the nth value the total number of calls grows on the order of
2¹ + 2² + 2³ + … + 2ⁿ
Thus the time complexity will be O(2ⁿ), and the space complexity will be O(n), since the recursion stack is at most n calls deep.
Big O Complexity Comparison Graph
