
Big O Notation

Something I hear about frequently as a developer, but have never really looked into too deeply, is “Big O Notation”. It’s a way of talking about the performance of an algorithm – a common language between developers for expressing just how ‘expensive’ a calculation is. The idea is to break an algorithm down into one of the following categories:

O(1) – an algorithm that will execute in the same time regardless of the data set passed in.

An example is returning the first item in a list. It doesn’t matter if the list is one item long or a million – it will take the same amount of time regardless.
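A minimal sketch of that in Python (the function and list are just for illustration):

    def first_item(items):
        # Constant time: a single index lookup, however long the list is.
        return items[0]

    first_item([3, 1, 4])   # costs the same as it would for a million-item list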

O(N) – an algorithm where the execution time will grow directly in proportion to the size of the data set passed in. i.e. if you pass in a string which is twice as long, the algorithm will take twice as long to run.

An example is iterating through a list, checking each item for a particular value. If you pass in a list that is twice as long, you might reasonably expect the search to take twice as long also.
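A sketch of that kind of linear scan in Python (names are purely illustrative):

    def contains(items, target):
        # Worst case, every item is checked once, so the time grows
        # linearly with the length of the list: O(N).
        for item in items:
            if item == target:
                return True
        return False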

O(N^2) – an algorithm where the execution time will grow in proportion to the square of the size of the data set. i.e. if you double the length of the string you pass in, the algorithm will take four times as long. Treble the length and the algorithm takes nine times as long to run.

An example is iterating through each item in a list, and comparing that item with each other item in the list (nested for loops for example). If the list is twice as long, you might reasonably expect this to take four times as long.
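One way that nested-loop pattern might look in Python (a simple duplicate check, purely illustrative):

    def has_duplicate(items):
        # Each item is compared with every other item: N * N comparisons,
        # so doubling the list roughly quadruples the work – O(N^2).
        for i in range(len(items)):
            for j in range(len(items)):
                if i != j and items[i] == items[j]:
                    return True
        return False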

O(2^N) – an algorithm where the execution time will double with every additional element in the input. This is exponential growth and quickly becomes impractical for anything but the smallest data sets.
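One common illustration (not from the original text) is a naive recursive Fibonacci, where each call branches into two further calls:

    def fib(n):
        # Each call spawns two more calls, so the number of calls grows
        # roughly exponentially with n – on the order of O(2^N).
        if n <= 1:
            return n
        return fib(n - 1) + fib(n - 2)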

O(Log N) – a little harder to visualise, but consider an algorithm that ‘divides and conquers’ the input, typically throwing away half of it after each iteration – a binary search, for example. This type of search on a sorted array first looks at the median value: if the median value equals the searched-for value, the search ends. Otherwise, if the searched-for value is above the median value, the bottom half of the array is thrown away; if it is below the median value, the top half is thrown away. The search then repeats on the smaller array.
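A sketch of that binary search in Python, assuming a sorted list:

    def binary_search(sorted_items, target):
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid                 # found the searched-for value
            elif sorted_items[mid] < target:
                low = mid + 1              # throw away the bottom half
            else:
                high = mid - 1             # throw away the top half
        return -1                          # not present
    # Each pass halves the remaining range, so the work grows as O(log N).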

The time for this type of algorithm rises more slowly as the size of the input increases. For example an input of 10 items might take 1 second, 100 items might take 2 seconds, 1000 might take 3 seconds.

So how do you determine which of these categories a particular algorithm fits into? This is actually a fairly difficult skill, depending on the complexity of the algorithm. It involves the developer making decisions about which bits of the code can be ignored when considering how the algorithm scales – for example, if an algorithm begins with a costly setup before entering a loop with O(N^2) complexity, the cost of the setup might be significant for small inputs but negligible for large ones. There is more information on determining complexities here: http://pages.cs.wisc.edu/~vernon/cs367/notes/3.COMPLEXITY.html
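A hypothetical sketch of that reasoning: the setup below does a fixed amount of work whatever the input, so for large inputs the nested loop dominates and the function is classified as O(N^2) overall.

    def count_close_pairs(items):
        # Fixed-cost setup – the same work regardless of len(items),
        # so it is ignored when classifying how the function scales.
        threshold = sum(range(1000)) / 1000

        # The nested loop grows with the square of len(items) and dominates
        # for large inputs: O(N^2) overall.
        count = 0
        for a in items:
            for b in items:
                if abs(a - b) < threshold:
                    count += 1
        return count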

Why is any of this useful? It helps us determine whether a particular algorithm will scale well. Just because an algorithm works quickly with small data sets, there is no guarantee that it will scale effectively. Here is an indication of how different algorithm complexities scale (taken from a really nice article on the subject: http://apelbaum.wordpress.com/2011/05/05/big-o/):

An O(1) algorithm scales better than an O(log N),
which scales better than an O(N),
which scales better than an O(N log N),
which scales better than an O(N^2),
which scales better than an O(2^N).
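To make those relative growth rates concrete, here is a rough (illustrative) count of the steps each class implies at a given input size:

    import math

    def approximate_steps(n):
        # Approximate number of steps for each complexity class at input size n.
        return {
            "O(1)":       1,
            "O(log N)":   math.log2(n),
            "O(N)":       n,
            "O(N log N)": n * math.log2(n),
            "O(N^2)":     n ** 2,
            "O(2^N)":     2 ** n,
        }

    print(approximate_steps(16))
    # 1, 4, 16, 64, 256 and 65536 steps respectively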
