Big O notation (with a capital letter O, not a zero), also called Landau’s symbol, is a symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.

Landau’s symbol comes from the name of the German number theoretician Edmund Landau, who invented the notation. The letter O is used because the rate of growth of a function is also called its order.

**What is The Big O?**

Big O time is the language and metric we use to describe the efficiency of algorithms. It is very important as a developer to have a good grasp of this concept, as it can make or break you as a developer.

Not understanding Big O thoroughly can really hurt you when developing an algorithm. Not only might you be judged harshly for not really understanding Big O, but you will also struggle to judge whether your algorithm is getting faster or slower.

**Understanding Big O**

How efficient is an algorithm or piece of code? Efficiency covers lots of resources, including:

- CPU (time) usage
- memory usage
- disk usage
- network usage

All are important, but we will mostly talk about time complexity (CPU usage) and space complexity (memory usage).

Be careful to differentiate between:

1. **Performance**: how much time/memory/disk/… is actually used when a program is run. This depends on the machine, compiler, etc. as well as the code.

2. **Complexity**: how do the resource requirements of a program or algorithm scale, i.e., what happens as the size of the problem being solved gets larger?

Complexity affects performance but not the other way around.

The time required by a function/procedure is proportional to the number of “basic operations” that it performs.

Here are some examples of basic operations:

- one arithmetic operation (e.g., +, *)
- one assignment (e.g., x := 0)
- one test (e.g., x = 0)
- one read (of a primitive type: integer, float, character, boolean)
- one write (of a primitive type: integer, float, character, boolean)
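To make the idea concrete, here is a minimal sketch of counting basic operations in a small function (the function `sum_list` is illustrative, not something from the original post):

```python
def sum_list(numbers):
    """Sum a list, with comments noting the basic operations performed."""
    total = 0                  # one assignment
    for n in numbers:          # one read per element
        total = total + n      # one addition and one assignment per element
    return total

# For a list of k elements, this performs roughly 3k + 1 basic
# operations, so its running time grows linearly with k.
print(sum_list([1, 2, 3]))  # 6
```

The exact count matters less than how it scales: doubling the list roughly doubles the number of operations.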

Some functions/procedures perform the same number of operations every time they are called. For example, StackSize in the Stack implementation always returns the number of elements currently in the stack (or reports that the stack is empty); since this takes the same number of operations regardless of the stack’s contents, we say that StackSize takes constant time.
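A minimal sketch of such a constant-time operation in Python (the class and method names here are illustrative stand-ins for the post’s Stack and StackSize):

```python
class Stack:
    """A minimal stack backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def stack_size(self):
        # Reading the stored length is the same single operation
        # whether the stack holds 0 or 1,000,000 elements: O(1).
        return len(self._items)

s = Stack()
s.push(10)
s.push(20)
print(s.stack_size())  # 2
```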

Other functions/procedures may perform different numbers of operations, depending on the value of a parameter. For example, in the BubbleSort algorithm, the number of elements in the array determines the number of operations performed by the algorithm. This parameter (the number of elements) is called the problem size/input size.
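A sketch of the classic bubble sort makes this dependence visible (the Python function below is my illustration, not code from the post):

```python
def bubble_sort(arr):
    """Classic bubble sort: the work done depends on len(arr)."""
    n = len(arr)
    for i in range(n - 1):
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:                      # one comparison
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # one swap
    return arr

# For n elements, the loops perform about n*(n-1)/2 comparisons,
# so the number of operations is determined by the problem size n.
print(bubble_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
```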

Typically, we are interested in the **worst case**: the maximum number of operations that might be performed for a given problem size. For example, when inserting an element into an array, we have to move the current element and all of the elements that come after it one place to the right in the array. In the worst case, inserting at the beginning of the array, all of the elements in the array must be moved. Therefore, in the worst case, the time for insertion is proportional to the number of elements in the array, and we say that the worst-case time for the insertion operation is linear in the number of elements in the array. For a linear-time algorithm, if the problem size doubles, the number of operations also doubles.
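The shifting described above can be sketched as follows, using a plain Python list to stand in for the array (the helper `insert_at` is my naming, not the post’s):

```python
def insert_at(arr, index, value):
    """Insert value at index by shifting later elements one place right."""
    arr.append(None)  # make room for one more element
    # In the worst case (index 0) this loop moves every element,
    # so the time is linear in the number of elements.
    for i in range(len(arr) - 1, index, -1):
        arr[i] = arr[i - 1]
    arr[index] = value
    return arr

# Worst case: inserting at the beginning shifts all three elements.
print(insert_at([2, 3, 4], 0, 1))  # [1, 2, 3, 4]
```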

I will be explaining how to calculate Big O in my next post.

To be continued…
