The Big O – Calculations

This is a follow-up to my previous post, where I explained a bit about the concept of Big O.

In this post I will continue and expand on that, showing you how to calculate the Big O.

The Tedious Way of Calculating

The common way of calculating Big O is counting the number of steps the algorithm takes to complete a task.

This is a tedious exercise. Imagine if your input is 10; with that small a number it is easy to count the steps. But what if your input is 1 million? Then it becomes practically impossible.

Example

void printUnOrderedPairs(int[] array)
{
    for (int i = 0; i < array.Length; i++)
    {
        for (int j = i + 1; j < array.Length; j++)
        {
            Console.WriteLine(array[i] + "," + array[j]);
        }
    }
}


Counting the Iterations

The first time through j runs for N-1 steps. The second time, it’s N-2 steps. Then N-3 steps. And so on.
Therefore, the number of steps total is:
(N-1) + (N-2) + (N-3) + … + 2 + 1

= 1 + 2 + 3 + … + N-1
= sum of 1 through N-1
The sum of 1 through N-1 is N(N-1)/2 (see "Sum of Integers 1 through N"), so the runtime will be O(N²). As a quick check, with N = 8 the sum is 7 + 6 + 5 + 4 + 3 + 2 + 1 = 28, which is 8 × 7 / 2.

What It Means
Alternatively, we can figure out the runtime by thinking about what the code "means": it iterates through each pair of values (i, j) where j is bigger than i. There are N² total pairs. Roughly half of those will have i < j and the remaining half will have i > j. This code goes through roughly N²/2 pairs, so it does O(N²) work.
Visualizing What It Does
The code iterates through the following (i, j) pairs when N = 8:
(0, 1) (0, 2) (0, 3) (0, 4) (0, 5) (0, 6) (0, 7)
(1, 2) (1, 3) (1, 4) (1, 5) (1, 6) (1, 7)
(2, 3) (2, 4) (2, 5) (2, 6) (2, 7)
(3, 4) (3, 5) (3, 6) (3, 7)
(4, 5) (4, 6) (4, 7)
(5, 6) (5, 7)
(6, 7)
This looks like half of an N×N matrix, which has (roughly) N²/2 elements. Therefore, it takes O(N²) time.

(Adapted from “Cracking The Coding Interview“)

The Easy Way of Calculating Big O

The easier way of calculating Big O is to use a principle called the "Multiplication Rule", which states:

“The total running time of a statement inside a group of nested loops is the running time of the statements multiplied by the product of the sizes of all the for loops. We need only look at the maximum potential number of repetitions of each nested loop.”

Applying the Multiplication Rule to the above example, we get:

The inner for loop potentially executes n times (the maximum value of i will be n-1) and the outer for loop executes n times. The statement inside the loops (the Console.WriteLine call) therefore executes no more than n * n times, so the Big O of the above example is expressed by O(n²).
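
If you want to convince yourself of the bound, a quick counting sketch (the counter and the choice of n = 1000 are mine, purely illustrative) reproduces the N(N-1)/2 figure from earlier:

//Counting the pair-printing iterations directly.
//For n = 1000 this prints 499500, which is n*(n-1)/2 --
//bounded above by n*n, hence O(n²).
int n = 1000;
long count = 0;
for (int i = 0; i < n; i++)
    for (int j = i + 1; j < n; j++)
        count++;
Console.WriteLine(count); // prints 499500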

More Examples

//Example 2
for (int i = 0; i < n*n; i++)
    sum++;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n*n; j++)
        sum++;

We consider only the number of times the sum++ expression within the loop is executed. The first sum++ expression is executed n² times, while the second sum++ expression is executed n * n² times. Thus the running time is approximately n² + n³. As we only consider the dominating term, the Big O of this fragment is expressed by O(n³).

Note that there were two sequential statements, therefore we added n² to n³. This illustrates another rule that one may apply when determining the Big O: you can combine sequences of statements by using the Addition Rule, which states that the running time of a sequence of statements is just the maximum of the running times of each individual statement.

//Example 3
for (int i = 0; i < n; i++)
    for (int j = 0; j < n*n; j++)
        for (int k = 0; k < j; k++)
            sum++;

We note that we have three nested for loops: the innermost, the inner, and the outer. The innermost for loop potentially executes n*n times (the maximum value of j will be n*n - 1), the inner loop executes n*n times, and the outer for loop executes n times. The statement sum++; therefore executes no more than n*n * n*n * n times, so the Big O of Example 3 is expressed by O(n⁵).
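
Again as a sanity check (a sketch with a small n of my choosing), counting the iterations directly shows the n⁵ growth:

//For n = 10: the k loop runs j times, summing to n²(n² - 1)/2 = 4950
//per outer pass; total = n * 4950 = 49500, roughly n⁵/2, hence O(n⁵).
int n = 10;
long count = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < n * n; j++)
        for (int k = 0; k < j; k++)
            count++;
Console.WriteLine(count); // prints 49500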

//Example 4
for (int i = 0; i < 210; i++)
    for (int j = 0; j < n; j++)
        sum++;

Note that the outer loop runs 210 times while the inner loop runs n times.

Hence the running time is 210 * n. The Big O of this fragment is expressed by O(n) (ignoring constants).

One More…

//Example 5
for (int i = 1; i < n; i++)
    for (int j = n; j > 1; j = j/2)
        sum++;

The first loop runs n times while the second loop runs log n times. (Why? Suppose n is 32; then sum++; is executed 5 times for each pass of the outer loop. Note 2⁵ = 32, therefore log₂ 32 = 5. The running time is actually log₂ n, but we can ignore the base.) Hence the Big O of Example 5 is expressed by O(n log n) (Multiplication Rule).
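
And a final hedged check (n = 32 chosen so the logarithm comes out evenly):

//The inner halving loop runs 5 times per outer pass (2⁵ = 32),
//and the outer loop runs n - 1 = 31 times: 31 * 5 = 155 -- O(n log n).
int n = 32;
long count = 0;
for (int i = 1; i < n; i++)
    for (int j = n; j > 1; j = j / 2)
        count++;
Console.WriteLine(count); // prints 155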


Further Reading

Malik, D.S. 2003. Data Structures Using C++.

Shaffer, C.A. A Practical Introduction to Data Structures and Algorithm Analysis.

Cormen, T.H., Leiserson, C.E., Rivest, R.L., and Stein, C. Introduction to Algorithms, 3rd Edition (MIT Press).

McDowell, G.L. Cracking the Coding Interview: 189 Programming Questions and Solutions.


The Big O – Explained

Big O notation (with a capital letter O, not a zero), also called Landau’s symbol, is a
symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.

Landau’s symbol comes from the name of the German number theoretician Edmund Landau, who invented the notation. The letter O is used because the rate of growth of a function is also called its order.

What is the Big O?

Big O time is the language and metric we use to describe the efficiency of algorithms. It is very important for a developer to have a good grasp of this concept, as it can make or break you as a developer.

Not understanding Big O thoroughly can really hurt you when developing an algorithm. Not only might you be judged harshly for not really understanding Big O, but you will also struggle to judge when your algorithm is getting faster or slower.

Understanding Big O

How efficient is an algorithm or piece of code? Efficiency covers lots of resources,
including:

  • CPU (time) usage
  • memory usage
  • disk usage
  • network usage

All are important, but we will mostly talk about time complexity (CPU usage) and space complexity (memory usage).

Be careful to differentiate between:
1. Performance: how much time/memory/disk/… is actually used when a program is run. This depends on the machine, compiler, etc. as well as the code.
2. Complexity: how do the resource requirements of a program or algorithm scale,
i.e., what happens as the size of the problem being solved gets larger?

Complexity affects performance but not the other way around.
The time required by a function/procedure is proportional to the number of “basic
operations” that it performs.

Here are some examples of basic operations:

  • one arithmetic operation (e.g., +, *).
  • one assignment (e.g. x := 0)
  • one test (e.g., x = 0)
  • one read (of a primitive type: integer, float, character, boolean)
  • one write (of a primitive type: integer, float, character, boolean)

Some functions/procedures perform the same number of operations every time they are
called. For example, StackSize in a Stack implementation always returns the number of
elements currently in the stack (or reports that the stack is empty) using the same few
operations regardless of how many elements are stored, so we say that StackSize takes constant time.
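
A minimal sketch of such a constant-time operation (the class and field are mine, purely illustrative):

class Stack
{
    private int[] items = new int[100];
    private int count = 0; // updated on every Push/Pop

    // Constant time: the work done never depends on how many
    // elements are currently stored.
    public int StackSize()
    {
        return count;
    }
}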
Other functions/procedures may perform different numbers of operations, depending on
the value of a parameter. For example, in the BubbleSort algorithm, the number of
elements in the array determines the number of operations performed by the algorithm.
This parameter (the number of elements) is called the problem size or input size.

Typically, we are interested in the worst case: the maximum number of
operations that might be performed for a given problem size. For example, when inserting an
element into an array, we have to move the current element and all of the elements that
come after it one place to the right in the array. In the worst case, inserting at the
beginning of the array, all of the elements in the array must be moved. Therefore, in the
worst case, the time for insertion is proportional to the number of elements in the array,
and we say that the worst-case time for the insertion operation is linear in the number of
elements in the array. For a linear-time algorithm, if the problem size doubles, the
number of operations also doubles.
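
As a sketch of that insertion example (the helper name and parameters are mine, purely illustrative):

//Worst case: index = 0 shifts every existing element one slot to the
//right, so the loop body runs 'count' times -- linear in the number of elements.
static void InsertAt(int[] array, int count, int index, int value)
{
    for (int k = count; k > index; k--)
        array[k] = array[k - 1]; // shift right
    array[index] = value;
}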

I will be explaining how to calculate Big O in my next post.

To be continued…

Dependency Inversion Principle (DIP)


What Is the Dependency Inversion Principle (DIP)?

In object-oriented design, the Dependency Inversion Principle (DIP) refers to a specific form of decoupling software modules.
The Dependency Inversion Principle, or DIP, has two key points:
1. Abstractions should not depend upon details;
2. Details should depend upon abstractions.
The principle could be rephrased as: use the same level of abstraction at a given level. Interfaces should depend on other interfaces. Don't add concrete classes to method signatures of an interface; instead, use interfaces in your class methods.

Simple Example Of Dependency Inversion Principle

Consider the case of the Button object and the Door object.
The Button object senses the external environment. On receiving the Poll message, the Button object determines whether a user has “pressed” it. It doesn’t matter what the sensing mechanism is.
It could be a button icon on a GUI, a physical button being pressed by a human finger, or even a motion detector in a home security system. The Button object detects that a user has either activated or deactivated it.
The Door object affects the external environment. On receiving an Open message, the Door object opens the door. On receiving a Close message, it closes the door.
How can we design a system such that the Button object controls the Door object? The Button object receives Poll messages, determines whether the button has been pressed, and then simply sends the Open or Close message to the Door.

Consider the C# code implied by this model. Note that the Button class depends directly on the Door class. This dependency implies that Button will be affected by changes to Door. Moreover, it will not be possible to reuse Button to control a Motor object. In this model, Button objects control Door objects and only Door objects.

public class Button
{
    private Door door;

    public void Poll()
    {
        if (/*some condition*/)
            door.Open();
    }
}
The above solution violates DIP. The abstractions have not been separated from the details. Without such a separation, the high-level policy automatically depends on the low-level modules, and the abstractions automatically depend on the details.

Finding the Underlying Abstraction

What is the high-level policy? It is the abstraction that underlies the application, the truths that do not vary when the details are changed. It is the system inside the system; it is the metaphor. In the Button/Door example, the underlying abstraction is to detect an open/close gesture from a user and relay that gesture to a target object.
What mechanism is used to detect the user gesture? Irrelevant! What is the target object? Irrelevant!
These are details that do not impact the abstraction.
The model can be improved by inverting the dependency upon the Door object: the Button now holds an association to something called a ButtonServer, which provides the interface that Button can use to turn something on or off, and Door implements the ButtonServer interface.
Thus, Door is now doing the depending rather than being depended on.
This design allows a Button to control any device that is willing to implement the ButtonServer interface. This gives us a great deal of flexibility. It also means that Button objects will be able to control objects that have not yet been invented.
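
A minimal sketch of this inverted design (the member names are my own reading of the description; I keep the book's name ButtonServer, though C# convention would be IButtonServer):

//Button depends only on the ButtonServer abstraction;
//Door now does the depending rather than being depended on.
public interface ButtonServer
{
    void TurnOn();
    void TurnOff();
}

public class Door : ButtonServer
{
    public void TurnOn()  { Console.WriteLine("Door opened"); }
    public void TurnOff() { Console.WriteLine("Door closed"); }
}

public class Button
{
    private readonly ButtonServer server;
    private bool pressed; // set by whatever sensing mechanism we use

    public Button(ButtonServer server)
    {
        this.server = server;
    }

    public void Poll()
    {
        if (pressed)
            server.TurnOn();
        else
            server.TurnOff();
    }
}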
However, this solution also puts a constraint on any object that needs to be controlled by a Button.
Such an object must implement the ButtonServer interface. This is unfortunate, because these objects may also want to be controlled by a Switch object or some kind of object other than a Button.
By inverting the direction of the dependency and making the Door do the depending instead of being depended on, we have made Door depend on a different detail: Button. Or have we?
Door certainly depends on ButtonServer, but ButtonServer does not depend on Button. Any kind of object that knows how to manipulate the ButtonServer interface will be able to control a Door.
Thus, the dependency is in name only. And we can fix that by changing the name of ButtonServer to something a bit more generic, such as SwitchableDevice. We can also ensure that Button and SwitchableDevice are kept in separate libraries, so that the use of SwitchableDevice does not imply the use of Button.
In this case, nobody owns the interface. We have the interesting situation whereby the interface can be used by lots of different clients, and implemented by lots of different servers. Thus, the interface needs to stand alone without belonging to either group. In C#, we would put it in a separate namespace and library.

Further Reading

1. Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin

2. Head First Design Patterns by Eric Freeman, Elisabeth Freeman, Kathy Sierra, and Bert Bates

3. Object Solutions: Managing the Object-Oriented Project by Grady Booch

Interface Segregation Principle (ISP)


Interface Segregation Principle

The Interface Segregation Principle (ISP) deals with the disadvantages of “fat” interfaces.

The Interface Segregation Principle (ISP) states that no client code object should be forced to depend on methods it does not use. Basically, each code object should only implement what it needs and not be required to implement anything else.

The ISP was first used and formulated by Robert C. Martin while consulting for Xerox.

I will tackle today’s principle using an example from MSDN:

using System; 
using System.Collections.Generic; 
using System.Linq; 
using System.Text; 
 
namespace SOLIDPrinciplesDemo 
{ 
    // Only common generic methods exist for all derived classes. 
    interface IDataProvider 
    { 
        int OpenConnection(); 
        int CloseConnection(); 
    } 
    // Implement methods specific to the respective derived classes 
    interface ISqlDataProvider : IDataProvider 
    { 
        int ExecuteSqlCommand(); 
    } 
    // Implement methods specific to the respective derived classes 
    interface IOracleDataProvider : IDataProvider 
    { 
        int ExecuteOracleCommand(); 
    } 
    // Client 1 
    // Should not force the SqlDataClient to implement ExecuteOracleCommand, as it won't need that method. 
    // So we derive from ISqlDataProvider, not IOracleDataProvider. 
    class SqlDataClient : ISqlDataProvider 
    { 
        public int OpenConnection() 
        { 
            Console.WriteLine("\nSql Connection opened successfully"); 
            return 1; 
        } 
        public int CloseConnection() 
        { 
            Console.WriteLine("Sql Connection closed successfully"); 
            return 1; 
        } 
 
        // By implementing ISqlDataProvider, we are not forcing the client to implement IOracleDataProvider 
        public int ExecuteSqlCommand() 
        { 
            Console.WriteLine("Sql Server specific Command Executed successfully"); 
            return 1; 
        } 
    } 
    // Client 2 
    // Should not force the OracleDataClient to implement ExecuteSqlCommand, as it won't need that method. 
    // So we derive from IOracleDataProvider, not ISqlDataProvider. 
    class OracleDataClient : IOracleDataProvider 
    { 
        public int OpenConnection() 
        { 
            Console.WriteLine("\nOracle Connection opened successfully"); 
            return 1; 
        } 
        public int CloseConnection() 
        { 
            Console.WriteLine("Oracle Connection closed successfully"); 
            return 1; 
        } 
        // By implementing IOracleDataProvider, we are not forcing the client to implement ISqlDataProvider 
        public int ExecuteOracleCommand() 
        { 
            Console.WriteLine("Oracle specific Command Executed successfully"); 
            return 1; 
        } 
    } 
    class InterfaceSegregationPrincipleDemo 
    { 
        public static void ISPDemo() 
        { 
            Console.WriteLine("\n\nInterface Segregation Principle Demo "); 
 
            // Each client implements only its own methods; no interface forces a client to implement methods it doesn't need. 
            // In the implementation above, we are not forcing the Sql client to implement Oracle logic, or the Oracle client to implement Sql logic. 
 
            ISqlDataProvider SqlDataProviderObject = new SqlDataClient(); 
            SqlDataProviderObject.OpenConnection(); 
            SqlDataProviderObject.ExecuteSqlCommand(); 
            SqlDataProviderObject.CloseConnection(); 
 
            IOracleDataProvider OracleDataProviderObject = new OracleDataClient(); 
            OracleDataProviderObject.OpenConnection(); 
            OracleDataProviderObject.ExecuteOracleCommand(); 
            OracleDataProviderObject.CloseConnection(); 
        } 
    } 
}

Advantages of Using ISP:

  1. Better Understandability
  2. Better Maintainability
  3. High Cohesion
  4. Low Coupling

Limitations

ISP, like any other principle, should be used intelligently and only when necessary; otherwise it will result in code containing lots of interfaces with a single method each. So the decision should be taken judiciously.

Additional Resources

1. Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin

2. SOLID Principles of Object Oriented Design by Steve Smith (Pluralsight)

3. Principles Of OOD

That’s it folks for this week. I hope you learnt something from today’s topic, the Interface Segregation Principle (ISP), the “I” in the S.O.L.I.D acronym. I will be concluding our series in the next post, covering the Dependency Inversion Principle.

Liskov Substitution Principle

If you’ve been following my blog, you’ll know that we have been covering the SOLID principles. In my previous posts I covered the Single Responsibility Principle, the “S” in the SOLID acronym, and then the Open Closed Principle. Today my focus will be on the Liskov Substitution Principle, the “L” in the SOLID acronym.

What is LSP?

This principle is credited to Barbara Liskov, who formulated it in 1988 as follows:

“What is wanted here is something like the following substitution property: If for each object o1
of type S there is an object o2 of type T such that for all programs P defined in terms of T, the
behaviour of P is unchanged when o1 is substituted for o2 then S is a sub-type of T.”

In simple terms the above statement means:

“Subtypes must be substitutable for their base types.”

or more simply:

“It means that we must make sure that new derived classes are extending the base classes without changing their behaviour” 

A Simple Example Using C#

We’ll use the classic Rectangle-Square problem to demonstrate this principle. Let’s imagine that we need to find the area of any rectangle.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SOLIDPrinciplesDemo
{
    // If any module is using a base class, then the reference to that base class
    // can be replaced with a derived class without affecting the functionality
    // of the module.
    // Or:
    // While implementing derived classes, one needs to ensure that the derived
    // classes just extend the functionality of the base classes without
    // replacing it.
    class Rectangle
    {
        protected int mWidth = 0;
        protected int mHeight = 0;

        public virtual void SetWidth(int width)
        {
            mWidth = width;
        }

        public virtual void SetHeight(int height)
        {
            mHeight = height;
        }

        public virtual int GetArea()
        {
            return mWidth * mHeight;
        }
    }

    // While implementing a derived class, if one replaces the functionality of
    // the base class, it might result in undesired side effects when such
    // derived classes are used in existing program modules.
    class Square : Rectangle
    {
        // This class modifies the base class functionality instead of
        // extending it. The overrides below will impact the base class
        // behaviour.
        public override void SetWidth(int width)
        {
            mWidth = width;
            mHeight = width;
        }

        public override void SetHeight(int height)
        {
            mWidth = height;
            mHeight = height;
        }
    }

    class LiskovSubstitutionPrincipleDemo
    {
        private static Rectangle CreateInstance()
        {
            // As per the Liskov Substitution Principle, "derived types must be
            // completely substitutable for their base types".
            bool SomeCondition = false;
            if (SomeCondition)
            {
                return new Rectangle();
            }
            else
            {
                return new Square();
            }
        }

        public static void LSPDemo()
        {
            Console.WriteLine("\n\nLiskov Substitution Principle Demo ");
            Rectangle RectangleObject = CreateInstance();
            // The user assumes that RectangleObject is a rectangle and sets
            // the width and height as for the base class.
            RectangleObject.SetWidth(5);
            RectangleObject.SetHeight(10);
            // This results in an area of 100 (10 * 10) instead of 50 (10 * 5).
            Console.WriteLine("Liskov Substitution Principle has been violated and returned wrong result : " + RectangleObject.GetArea());
            // Once again: subclasses should extend the base class
            // functionality; they should not alter it.
        }
    }
}

I know the example above doesn’t exhaust the LSP, but I’m sure it gave you an idea of what the principle stands for.

Benefits

This principle aims to keep functionality intact. The main purpose is to guarantee that objects lower in a relational hierarchy can be treated as though they are objects higher in the hierarchy. Basically, any child class should be able to do anything the parent can do.

Further Reading

1. “Data Abstraction and Hierarchy,” Barbara Liskov, SIGPLAN Notices, 23(5), May 1988

2. Bertrand Meyer, Object-Oriented Software Construction, 2nd ed., Prentice Hall, 1997

3. Rebecca Wirfs-Brock et al., Designing Object-Oriented Software, Prentice Hall, 1990

4. Agile Principles, Patterns, and Practices in C# by Robert C. Martin

That’s it for this week. Next I will be covering the Interface Segregation Principle (ISP), the letter “I” in the SOLID acronym.

****Happy Holidays and be safe****