In **Computer Science**, the so-called “Big O notation” is, in short, used to express the working-space requirements and processing time of a given algorithm. If you want to explore this subject a little, who is behind it, and its history, Wikipedia has a rich article about it. But here we are going to concentrate on its practical use in common situations, for example, interviews or everyday problems.

So the “O” is a function, and a function in mathematical language is represented as: f(x)

So you may bump into this notation as O(f(n)), where f(n) is kept in the simplest form possible, omitting constants and other factors.

So, in interviews and daily algorithm problems you will usually find it enough to just use O(n), where n is the size of the input whose processing time or working space we want to represent.

When you analyze an algorithm, you are analyzing its efficiency, and that means you want to find the best-case and the worst-case scenarios for performance and for working space (literally how much RAM the algorithm will need to store its values). We will deal with working space later on; in this article we will focus on processing time.
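As a concrete illustration of best vs. worst case (my own sketch, not from the original article), consider a hypothetical linear search: in the best case the target is the very first element (one comparison), and in the worst case the target is absent (n comparisons).

```java
public class SearchCases {
    // Best case: target is the first element -> 1 comparison, O(1).
    // Worst case: target is absent -> n comparisons, O(n).
    public static int linearSearch(int[] values, int target) {
        for (int i = 0; i < values.length; i++) {
            if (values[i] == target) {
                return i; // found
            }
        }
        return -1; // not found after checking all n elements
    }

    public static void main(String[] args) {
        int[] data = {4, 8, 15, 16, 23, 42};
        System.out.println(linearSearch(data, 4));  // best case: index 0
        System.out.println(linearSearch(data, 99)); // worst case: -1
    }
}
```

When we quote a single Big O value for an algorithm, we normally mean the worst case, which for this search is O(n).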

### Puzzled? Keep reading!

We don’t need to go deeper than this, so let’s take a breath and look at this situation from another perspective. You are a computer scientist and you need to analyze a simple algorithm. In general we analyze the worst-case scenario, which means that your algorithm will run all its cycles to complete its operation.

**Base case**

[java]
public void printNumber(int n)
{
    System.out.println(n);
}
[/java]

This code has no loops or recursive calls, just a single operation. When we have this case, we can say that “printNumber” is O(1), a **constant**-time algorithm.

**Let’s pick another example:**

[java]
public void printNumbers(int n)
{
    for (int i = 0; i < n; i++)
    {
        System.out.println(i);
    }
}
[/java]

The complexity of this algorithm is **linear**, because it increases at the same rate as the input data. So: O(n)

**Now let’s see with a nested loop**

[java]
public void printNumbers(int a)
{
    for (int i = 0; i < a; i++)
    {
        for (int j = 0; j < a; j++)
        {
            System.out.println(i);
        }
    }
}
[/java]

The outer loop will run “n” times, but each outer cycle will make the inner loop run “n” times as well. Since the inner loop is nested, we say it takes **n * n**, or n², steps.

So this second case will be O(n²), a **quadratic** function.

**Logarithmic functions**

[java]
public void printNumbers(int n)
{
    // Note: i starts at 1, because 0 * 2 would stay 0 forever
    for (int i = 1; i < n; i = i * 2)
    {
        System.out.println(i);
    }
}
[/java]

The word logarithm sounds scary when we are not used to it; hold your breath and take a look at this quick post about logarithms before you continue.

Now that you know what **log(n)** means: the code above runs in O(log n) time. This is because “i” is multiplied by 2 on each iteration, so the loop only needs about log₂(n) steps to reach n.
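To make O(log n) more concrete, here is a minimal binary search sketch (my own illustration, not from the article): each step halves the remaining range of a sorted array, so finding an element takes about log₂(n) comparisons.

```java
public class BinarySearchDemo {
    // Returns the index of target in a sorted array, or -1 if absent.
    // Each iteration halves the search range: O(log n) comparisons.
    public static int binarySearch(int[] sorted, int target) {
        int lo = 0;
        int hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                lo = mid + 1; // discard the lower half
            } else {
                hi = mid - 1; // discard the upper half
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] data = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(data, 7)); // prints 3
        System.out.println(binarySearch(data, 4)); // prints -1
    }
}
```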

**Mix of logarithmic functions and linear are also possible**

[java]
public void printNumbers(int n)
{
    // Linear: runs "n" times
    for (int i = 0; i < n; i++)
    {
        // Logarithmic: j doubles each time, so about log(n) steps
        for (int j = 1; j < n; j = j * 2)
        {
            System.out.println(i);
        }
    }
}
[/java]

It’s not hard to guess that this will be: O(n log n)
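One way to convince yourself (a hypothetical counting sketch, not from the article) is to replace the prints with a counter: an outer loop running n times around an inner loop whose index doubles gives roughly n × log₂(n) iterations in total.

```java
public class NLogNCount {
    // Counts iterations of a linear-times-logarithmic nested loop:
    // the outer loop runs n times, the inner one about log2(n) times.
    public static int countOperations(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < n; j = j * 2) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // For n = 8 the inner loop runs 3 times (j = 1, 2, 4),
        // so the total is 8 * 3 = 24.
        System.out.println(countOperations(8)); // prints 24
    }
}
```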

**Do not confuse!**

[java]
public void printNumbers(int n)
{
    // Linear: runs "n" times
    for (int i = 0; i < n; i++)
    {
        // Constant: always 5 iterations
        for (int j = 0; j < 5; j++)
        {
            System.out.println(i);
        }
    }
}
[/java]

At first glance, you might guess that this function would be O(n²) because of the nested loop, but the inner loop runs exactly 5 times regardless of the input. “Big O” only keeps the simplified form of the situation: we must drop every constant factor. So in this case, our Big O is still O(n).
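You can verify that dropping the constant is justified by counting the work directly (a hypothetical sketch of mine): the nested loop above performs exactly 5 × n prints, and 5n grows linearly, just like n.

```java
public class ConstantDrop {
    // Counts how many times the inner body runs: exactly 5 * n.
    // The factor 5 is a constant, so the growth is still O(n).
    public static int countOperations(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < 5; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countOperations(10));  // prints 50
        System.out.println(countOperations(100)); // prints 500
    }
}
```

Doubling n doubles the count, which is exactly the linear behavior O(n) describes.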

## Famous Big O values

From now on, you should have a notion of famous Big O values for common algorithms; this will be useful when you write your own and need an idea of where it will fit.

Big O | Famous Algorithms |
---|---|
O(1) | Hash Table lookup |
O(log n) | Binary Search |
O(n log n) | Heap Sort, Quick Sort (average case), Merge Sort |
O(n²) | Bubble Sort, Quick Sort (worst case), Selection Sort, Insertion Sort |
O(2ⁿ) | Tower of Hanoi, naive recursive Fibonacci |
O(n^2.5) | Maximum Matching for Bipartite Graphs |
O(n!) | Traveling Salesman Problem (brute force) |
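As one example from the table, the naive recursive Fibonacci is a classic O(2ⁿ) algorithm (my own sketch, not from the article): each call spawns two more calls, so the call tree roughly doubles at every level.

```java
public class FibDemo {
    // Naive recursive Fibonacci: each call spawns two more,
    // giving roughly O(2^n) total calls for input n.
    public static long fib(int n) {
        if (n < 2) {
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // prints 55
    }
}
```

Even modest inputs like n = 50 become painfully slow with this version, which is why memoized or iterative variants, both O(n), are used in practice.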