Big O Notation Explained


Conrad Reeves recently published an article on Big O notation on That Computer Scientist, in which he explains this seemingly daunting subject as simply as possible.

Big O notation is a method for describing an algorithm’s efficiency in terms of time (e.g. CPU cycles) or space (i.e. storage or memory) requirements. In a nutshell, Big O notation lets you make rough statements like: “For sufficiently large inputs, the algorithm in question will consume at most as many CPU cycles as a hypothetical algorithm described by this function, multiplied by a constant factor.”

For example, an algorithm with an efficiency of O(n²) will require at most as much time or space as f(n) = c · n², where n is the number of elements to be processed and c is an arbitrary constant factor.
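To make this concrete, here is a small sketch (not from the article itself) that counts the comparisons performed by bubble sort, a classic O(n²) algorithm. The function name and the operation count are illustrative assumptions; the point is that doubling n roughly quadruples the work, which stays below c · n² for a suitable constant c.

```python
# Hypothetical illustration: count the basic operations of an O(n^2) algorithm.
def bubble_sort_ops(items):
    """Sort a copy of items with bubble sort; return the number of comparisons."""
    data = list(items)
    comparisons = 0
    for i in range(len(data)):
        # After each outer pass, the largest remaining element is in place,
        # so the inner loop shrinks by one each time.
        for j in range(len(data) - 1 - i):
            comparisons += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return comparisons

# The count is exactly n*(n-1)/2, which is bounded above by c * n^2 for c = 1/2.
for n in (10, 20, 40):
    print(n, bubble_sort_ops(range(n, 0, -1)))
```

Running this prints counts of 45, 190, and 780: each doubling of n roughly quadruples the number of comparisons, which is the quadratic growth that O(n²) captures.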

As a maths professor once put it to me: “Big O is a way of describing something imprecise in exact terms.”

About the author: Bjoern
Independent IT consultant, entrepreneur
