How to Grow As A Self-taught Software Developer

With the high demand for software developers comes a rise in self-taught developers.

Most self-taught developers feel inadequate compared to graduate developers.

So today I will point out a few things you can do to move past your inadequacies and become a strong developer.

1. Have a Growth Plan

You can’t hit a target you cannot see, and you cannot see a target you do not have.

— Zig Ziglar

To grow you need a plan.

Know who you are and your limitations, and then develop a growth plan with clear goals, action plans and time frames.

2. Master Computer Science (CS) Concepts

Mastering these helps a developer see the big picture of why we code the way we do and become a more efficient programmer.

Master concepts like data structures, algorithms and recursion.

Most self-taught developers struggle with such concepts, which stops them from writing elegant code that is easy to read and runs fast.
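As a small taste of the kind of concept worth mastering, here is a minimal C# sketch of recursion; the function and its name are my own illustration, not from any particular curriculum:

```csharp
using System;

public class RecursionDemo
{
    // Recursively sums the integers 1..n: a classic first exercise in recursion.
    public static int Sum(int n)
    {
        if (n <= 0) return 0;   // base case: nothing left to add
        return n + Sum(n - 1);  // recursive case: peel off n and recurse on the rest
    }

    public static void Main()
    {
        Console.WriteLine(Sum(10)); // prints 55
    }
}
```

Working through why this terminates, and what it costs in stack space, is exactly the sort of exercise that builds the CS intuition discussed above.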

3. Read Documentation

Most of us developers learn through channels like YouTube and Stack Overflow. To truly understand your tech stack, read the documentation.

Documentation is not as easy to understand, but remember: with any growth, it’s hard before it becomes easy. Keep at it and eventually you will get the hang of it and start reaping the rewards.

4. Find a Code Mentor

Every master was once a protégé. To perfect your craft you need a mentor.

Your mentor doesn’t need to be a physical person you meet every day. You can follow writings by respected authorities in your field, e.g. Robert C. “Uncle Bob” Martin.

You can even ask your immediate supervisor to mentor you.

A great way to find mentors is by attending meetups in your area.

5. Embrace the Impostor Syndrome

I know for most self-taught developers it feels like you are in over your head, like an impostor, because at the back of your head you feel you are not qualified due to a lack of formal qualifications.

Instead embrace that feeling and let it propel you to learn more and be a better developer.

6. Practice Deliberately

Imagine Cristiano Ronaldo practicing once a week, or only whenever he feels like it. Do you think he would be playing at the level he is currently playing at?

To be world-class you have to be willing to put in the time: practice those algorithms and puzzles and implement those data structures every day, and be deliberate about it.

It won’t serve you to keep doing the same stuff and playing at the same level; you don’t grow.

Use sites like Topcoder and HackerEarth to practice and compete.

7. Teach Others

To grasp any concept, teach it to another person, especially a non-technical person. If you can rephrase it in layman’s terms, you have surely mastered it. Utilize local meetups and sign up to give a talk there.

If you liked the post, be sure to like, share and subscribe to the blog.

 

The Big O – Calculations

This is a follow-up to my previous post, where I explained a bit about the concept of Big O.

In this post I will expand further, showing you how to calculate the Big O.

Tedious Way of Calculating

The common way of calculating Big O is counting the number of steps the algorithm takes to complete a task.

This is a tedious exercise. Imagine if your input is 10; with that small number it is easy to count the steps. But what if your input is 1 million? Then it becomes impossible.

Example

void printUnOrderedPairs(int[] array)
{
    for (int i = 0; i < array.Length; i++)
    {
        for (int j = i + 1; j < array.Length; j++)
        {
            Console.WriteLine(array[i] + "," + array[j]);
        }
    }
}

 

Counting the iterations

The first time through, j runs for N-1 steps. The second time, it’s N-2 steps. Then N-3 steps. And so on. Therefore, the total number of steps is:

(N-1) + (N-2) + (N-3) + … + 2 + 1
= 1 + 2 + 3 + … + N-1
= sum of 1 through N-1

The sum of 1 through N-1 is N(N-1)/2 (see “Sum of Integers 1 through N”), so the runtime will be O(N²).
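If you would rather not trust the algebra, the count can be checked empirically. The following sketch (class and method names are mine) counts the inner-loop executions of the example above and compares the result with the N(N-1)/2 formula:

```csharp
using System;

public class StepCounter
{
    // Counts how many times the inner statement of printUnOrderedPairs runs for input size n.
    public static int CountPairs(int n)
    {
        int steps = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                steps++; // one step per pair that would be printed
        return steps;
    }

    public static void Main()
    {
        int n = 8;
        Console.WriteLine(CountPairs(n));    // 28
        Console.WriteLine(n * (n - 1) / 2);  // 28: the closed-form N(N-1)/2
    }
}
```

Try a few values of n and the two numbers always agree, which is exactly why the closed form saves us the tedious counting.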

What It Means

Alternatively, we can figure out the runtime by thinking about what the code “means”: it iterates through each pair of values (i, j) where j is bigger than i.

There are N² total pairs. Roughly half of those will have i < j and the remaining half will have i > j. This code goes through roughly N²/2 pairs, so it does O(N²) work.

Visualizing What It Does

The code iterates through the following (i, j) pairs when N = 8:

(0, 1) (0, 2) (0, 3) (0, 4) (0, 5) (0, 6) (0, 7)
(1, 2) (1, 3) (1, 4) (1, 5) (1, 6) (1, 7)
(2, 3) (2, 4) (2, 5) (2, 6) (2, 7)
(3, 4) (3, 5) (3, 6) (3, 7)
(4, 5) (4, 6) (4, 7)
(5, 6) (5, 7)
(6, 7)

This looks like half of an N×N matrix, which has size (roughly) N²/2. Therefore, it takes O(N²) time.

(Adapted from “Cracking The Coding Interview“)

The Easy Way of Calculating Big O

The easier way of calculating Big O is to use a principle called the “Multiplication Rule”, which states:

“The total running time of a statement inside a group of nested loops is the running time of the statements multiplied by the product of the sizes of all the for loops. We need only look at the maximum potential number of repetitions of each nested loop.”

Applying the Multiplication Rule to the above example we get:

The inner for loop potentially executes n times (the maximum value of i will be n-1) and the outer for loop executes n times. The statement inside the inner loop therefore executes not many fewer than n * n times, so the Big O Notation of the above example is expressed by O(n²).

More Examples

//Example 2
for (int i = 0; i < n * n; i++)
    sum++;

for (int i = 0; i < n; i++)
    for (int j = 0; j < n * n; j++)
        sum++;

We consider only the number of times the sum++ expression within each loop is executed. The first sum++ expression is executed n² times, while the second sum++ expression is executed n * n² times. Thus the running time is approximately n² + n³. As we only consider the dominating term, the Big O Notation of this fragment is expressed by O(n³).

Note that there were two sequential statements, therefore we added n² to n³. This illustrates another rule one may apply when determining the Big O: you can combine sequences of statements using the Addition Rule, which states that the running time of a sequence of statements is just the maximum of the running times of the individual statements.

//Example 3
for (int i = 0; i < n; i++)
    for (int j = 0; j < n * n; j++)
        for (int k = 0; k < j; k++)
            sum++;

We note that we have three nested for loops: the innermost, the inner and the outer. The innermost for loop potentially executes n*n times (the maximum value of j will be n*n-1), the inner loop executes n*n times, and the outer loop executes n times. The statement sum++; therefore executes not many fewer than n*n * n*n * n times, so the Big O Notation of Example 3 is expressed by O(n⁵).

//Example 4
for (int i = 0; i < 210; i++)
    for (int j = 0; j < n; j++)
        sum++;

Note that the outer loop runs 210 times while the inner loop runs n times.

Hence the running time is 210 * n. The Big O Notation of this fragment is expressed by O(n), since we ignore constants.

One More…

//Example 5
for (int i = 1; i < n; i++)
    for (int j = n; j > 1; j = j / 2)
        sum++;

The first loop runs n times while the second loop runs log n times. (Why? Suppose n is 32; then sum++; is executed 5 times per outer iteration. Note 2⁵ = 32, therefore log₂ 32 = 5. The inner running time is actually log₂ n, but we can ignore the base.) Hence the Big O Notation of Example 5 is expressed by O(n log n) (Multiplication Rule).
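To convince yourself of the log n claim for the halving loop, you can count its iterations directly. A small sketch (class and method names are my own):

```csharp
using System;

public class HalvingLoop
{
    // Counts how many times the inner loop of Example 5 runs for a given n.
    public static int HalvingSteps(int n)
    {
        int steps = 0;
        for (int j = n; j > 1; j = j / 2) // halves each time, so roughly log2(n) iterations
            steps++;
        return steps;
    }

    public static void Main()
    {
        Console.WriteLine(HalvingSteps(32)); // 5, and 2^5 = 32, so log2(32) = 5
    }
}
```

Doubling n only adds one more step (try 64, then 128), which is the signature of logarithmic growth.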

 

Further Reading

Malik, D.S. 2003. Data Structures Using C++.

A Practical Introduction to Data Structures and Algorithm Analysis

Introduction to Algorithms, 3rd Edition (MIT Press)

Cracking the Coding Interview: 189 Programming Questions and Solutions

 

The Big O – Explained

Big O notation (with a capital letter O, not a zero), also called Landau’s symbol, is a
symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.

Landau’s symbol comes from the name of the German number theoretician Edmund
Landau who invented the notation. The letter O is used because the rate of growth of a
function is also called its order.

What is The Big O ?

Big O time is the language and metric we use to describe the efficiency of algorithms. It is very important as a developer to have a good grasp of this concept, as it can either make or break you as a developer.

Not understanding Big O thoroughly can really hurt you in developing an algorithm. Not only might you be judged harshly for not really understanding it, but you will also struggle to judge whether your algorithm is getting faster or slower.

Understanding Big O

How efficient is an algorithm or piece of code? Efficiency covers lots of resources,
including:

  • CPU (time) usage
  • memory usage
  • disk usage
  • network usage

All are important, but we will mostly talk about time complexity (CPU usage) and space complexity (memory usage).

Be careful to differentiate between:
1. Performance: how much time/memory/disk/… is actually used when a program is run. This depends on the machine, compiler, etc. as well as the code.
2. Complexity: how do the resource requirements of a program or algorithm scale,
i.e., what happens as the size of the problem being solved gets larger?

Complexity affects performance but not the other way around.
The time required by a function/procedure is proportional to the number of “basic
operations” that it performs.

Here are some examples of basic operations:

  • one arithmetic operation (e.g., +, *)
  • one assignment (e.g., x := 0)
  • one test (e.g., x = 0)
  • one read (of a primitive type: integer, float, character, boolean)
  • one write (of a primitive type: integer, float, character, boolean)

Some functions/procedures perform the same number of operations every time they are called. For example, StackSize in a stack implementation always returns the number of elements currently in the stack (or states that the stack is empty); we say that StackSize takes constant time.

Other functions/procedures may perform different numbers of operations depending on the value of a parameter. For example, in the BubbleSort algorithm, the number of elements in the array determines the number of operations performed by the algorithm. This parameter (number of elements) is called the problem size or input size.

Typically, we are interested in the worst case: the maximum number of operations that might be performed for a given problem size. For example, when inserting an element into an array, we have to move the current element and all of the elements that come after it one place to the right. In the worst case, inserting at the beginning of the array, all of the elements must be moved. Therefore, in the worst case, the time for insertion is proportional to the number of elements in the array, and we say that the worst-case time for the insertion operation is linear in the number of elements. For a linear-time algorithm, if the problem size doubles, the number of operations also doubles.
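The worst-case insertion described above can be sketched in C#; the helper name and the copy-into-a-new-array approach are my own illustration:

```csharp
using System;

public class ArrayInsertion
{
    // Inserts value at index by copying and shifting every later element one
    // place to the right. In the worst case (index 0) all n elements move,
    // so the cost is linear in the array length.
    public static int[] InsertAt(int[] source, int index, int value)
    {
        int[] result = new int[source.Length + 1];
        for (int i = 0; i < index; i++)
            result[i] = source[i];         // elements before the insertion point stay put
        result[index] = value;
        for (int i = index; i < source.Length; i++)
            result[i + 1] = source[i];     // everything after shifts one place right
        return result;
    }

    public static void Main()
    {
        int[] a = { 2, 3, 4 };
        Console.WriteLine(string.Join(",", InsertAt(a, 0, 1))); // 1,2,3,4
    }
}
```

Inserting at index 0 forces every element to move; inserting at the end moves none, which is why we quote the worst case when we call insertion linear.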

I will be explaining how to calculate Big O in my next post.

To be continued…

SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC) Pt 2

 


This post is a continuation of my previous article.

2. Feasibility Study

A feasibility study is an assessment of the practicality of a proposed project or system. In simple terms, a feasibility study involves taking a judgment call on whether a project is doable. The criteria used to judge feasibility are the cost required and the value to be delivered.

A feasibility study evaluates the project’s potential for success. The perceived objectivity is an important factor in the credibility of the study for potential investors and lending institutions. [Source: Wikipedia]

There are some aspects that the feasibility study tackles that as developers we normally forget that they exist. One example is the legal implications of creating and deploying a system.

Five Areas of Project Feasibility:

  1. Technical Feasibility: Assessment is centered on the technical resources available to the organization. It helps organizations assess whether the technical resources meet capacity and whether the technical team is capable of converting the ideas into working systems. Technical feasibility also involves evaluation of the hardware and software requirements of the proposed system.
  2. Economic Feasibility: It helps organizations assess the viability, cost, and benefits associated with projects before financial resources are allocated. It also serves as an independent project assessment and enhances project credibility as a result. It helps decision-makers determine the positive economic benefits to the organization that the proposed system will provide, and helps quantify them. This assessment typically involves a cost/benefit analysis of the project.
  3. Legal Feasibility: Investigates whether the proposed system conflicts with legal requirements like data protection acts or social media laws.
  4. Operational Feasibility: It involves undertaking a study to analyze and determine whether your business needs can be fulfilled by the proposed solution. It also measures how well the proposed system solves problems and takes advantage of the opportunities identified during scope definition. Operational feasibility studies also analyze how the project plan satisfies the requirements identified in the requirements analysis phase of system development. To ensure success, desired operational outcomes must inform and guide design and development. These include design-dependent parameters such as reliability, maintainability, supportability, usability, disposability, sustainability, affordability, and others.
  5. Scheduling Feasibility: This is the most important for project success; a project will fail if not completed on time. In scheduling feasibility, we estimate how much time the system will take to complete, and with our technical skills we estimate the period required to complete the project using various methods of estimation. [Source: Simplilearn]

Benefits of Conducting a Feasibility Study

  1. Saves money on development hours that may be lost developing a product that may end up not making it to the market.
  2. Gives project teams more focus and provides an alternative outline.
  3. Narrows the business alternatives.
  4. Identifies a valid reason to undertake the project.
  5. Enhances the success rate by evaluating multiple parameters.
  6. Aids decision-making on the project.

Next I will cover system analysis and design.


Software Development Life Cycle(SDLC)


What Is A Software Development Life Cycle (SDLC)?

The Software Development Life Cycle, SDLC for short, is a well-defined, structured sequence of stages in software engineering used to develop the intended software product. It is a framework that describes the activities performed at each stage of a software development project.

SDLC has a series of steps to be followed to design and develop a software product efficiently. The framework includes the following steps:

  1. Requirements Gathering
  2. Feasibility study
  3. System analysis
  4. Software design
  5. Coding
  6. Testing
  7. Integration
  8. Implementation
  9. Operation and Maintenance
  10. Disposition

 

1. Requirements Gathering

Requirements gathering is an essential part of any project and project management. Understanding fully what a project will deliver is critical to its success. Requirements gathering sounds like common sense, but surprisingly, it’s an area that is given far too little attention.

Many projects start with the barest headline list of requirements, only to find later the customer’s needs have not been adequately understood.

10 Rules for Successful Requirements Gathering

To be successful in requirements gathering and to give your project an increased likelihood of success, follow these rules:

  1. Don’t assume you know what the customer wants – always ask.
  2. Involve the users from the start.
  3. Define and agree on the scope of the project.
  4. Ensure that the requirements are SMART: Specific, Measurable, Agreed upon, Realistic and Time-based.
  5. Gain clarity if there is any doubt.
  6. Create a clear, concise and thorough requirements document and share it with the customer.
  7. Confirm your understanding of the requirements alongside the customer (play them back).
  8. Avoid talking technology or solutions until the requirements are fully understood.
  9. Get the requirements agreed with the stakeholders before the project starts.
  10. Create a prototype, if necessary, to confirm or refine the customer’s requirements.

Common Mistakes

Be careful to avoid making these mistakes:

  • Basing a solution on complex or cutting-edge technology and then discovering that it cannot easily be rolled out in the ‘real world’.
  • Not prioritizing the requirements, for example, ‘must have’, ‘should have’, ‘could have’ and ‘would have’ – known as the MoSCoW principle.
  • Insufficient consultation with real users and practitioners.
  • Solving the ‘problem’ before you know what the problem is.
  • Lacking a clear understanding and making assumptions rather than asking.

Requirements gathering is about creating a clear, concise and agreed set of customer requirements that allow you to provide what the customer wants.

To be continued…

 

Dependency Inversion Principle(DIP)


What Is Dependency Inversion Principle(DIP)?

In object-oriented design, the Dependency Inversion Principle (DIP) refers to a specific form of decoupling software modules.

The Dependency Inversion Principle has two key points:

1. Abstractions should not depend upon details.
2. Details should depend upon abstractions.

The principle could be rephrased as: use the same level of abstraction at a given level. Interfaces should depend on other interfaces. Don’t add concrete classes to the method signatures of an interface; rather, use interfaces in your class methods.

Simple Example Of Dependency Inversion Principle

Consider the case of the Button object and the Door object.
The Button object senses the external environment. On receiving the Poll message, the Button object determines whether a user has “pressed” it. It doesn’t matter what the sensing mechanism is.
It could be a button icon on a GUI, a physical button being pressed by a human finger, or even a motion detector in a home security system. The Button object detects that a user has either activated or deactivated it.
The Door object affects the external environment. On receiving an Open message, the Door object opens the door. On receiving a Close message, it closes the door.

How can we design a system such that the Button object controls the Door object? The Button object receives Poll messages, determines whether the button has been pressed, and then simply sends the Open or Close message to the Door.

Consider the C# code implied by this model. Note that the Button class depends directly on the Door class. This dependency implies that Button will be affected by changes to Door. Moreover, it will not be possible to reuse Button to control, say, a Motor object. In this model, Button objects control Door objects and only Door objects.

public class Button
{
    private Door door;

    public void Poll()
    {
        if (/*some condition*/)
            door.Open();
    }
}

The above solution violates the DIP. The abstractions have not been separated from the details. Without such a separation, the high-level policy automatically depends on the low-level modules, and the abstractions automatically depend on the details.

Finding the Underlying Abstraction

What is the high-level policy? It is the abstraction that underlies the application, the truths that do not vary when the details are changed. It is the system inside the system; it is the metaphor. In the Button/Door example, the underlying abstraction is to detect an open/close gesture from a user and relay that gesture to a target object.

What mechanism is used to detect the user gesture? Irrelevant! What is the target object? Irrelevant! These are details that do not impact the abstraction.

The model can be improved by inverting the dependency upon the Door object. In the improved model, the Button holds an association to something called a ButtonServer, which provides the interface that Button can use to turn something on or off, and Door implements the ButtonServer interface. Thus, Door is now doing the depending rather than being depended on.

This design allows a Button to control any device that is willing to implement the ButtonServer interface. That gives us a great deal of flexibility. It also means that Button objects will be able to control objects that have not yet been invented.

However, this solution also puts a constraint on any object that needs to be controlled by a Button: such an object must implement the ButtonServer interface. This is unfortunate, because these objects may also want to be controlled by a Switch object or some kind of object other than a Button.

By inverting the direction of the dependency and making the Door do the depending instead of being depended on, we have made Door depend on a different detail: Button. Or have we? Door certainly depends on ButtonServer, but ButtonServer does not depend on Button. Any kind of object that knows how to manipulate the ButtonServer interface will be able to control a Door. Thus, the dependency is in name only. And we can fix that by changing the name of ButtonServer to something a bit more generic, such as SwitchableDevice. We can also ensure that Button and SwitchableDevice are kept in separate libraries, so that the use of SwitchableDevice does not imply the use of Button.

In this case, nobody owns the interface. We have the interesting situation whereby the interface can be used by lots of different clients and implemented by lots of different servers. Thus, the interface needs to stand alone without belonging to either group. In C#, we would put it in a separate namespace and library.
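The inverted design can be sketched in C# as follows. This is a simplified illustration, not the book's exact code: the interface name (with the conventional C# I prefix), the TurnOn/TurnOff method names, the IsOpen property, and passing the sensed state into Poll are my assumptions:

```csharp
using System;

// The stand-alone abstraction: neither Button nor Door owns it.
public interface ISwitchableDevice
{
    void TurnOn();
    void TurnOff();
}

public class Button
{
    private readonly ISwitchableDevice device;

    public Button(ISwitchableDevice device)
    {
        this.device = device; // depends only on the abstraction
    }

    // The sensing mechanism is a detail; here the sensed state is passed in.
    public void Poll(bool pressed)
    {
        if (pressed)
            device.TurnOn();
        else
            device.TurnOff();
    }
}

// Door now does the depending: it implements the generic interface.
public class Door : ISwitchableDevice
{
    public bool IsOpen { get; private set; }

    public void TurnOn()  { IsOpen = true;  Console.WriteLine("Door opened"); }
    public void TurnOff() { IsOpen = false; Console.WriteLine("Door closed"); }
}
```

Because Button only sees ISwitchableDevice, the same Button could drive a Lamp, a Motor, or a device not yet invented, without any change to Button itself.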
Further Reading

1. Agile Software Development: Principles, Patterns, and Practices by Robert C. Martin

2. Head First Design Patterns by Eric Freeman, Elisabeth Freeman, Kathy Sierra and Bert Bates

3. Object Solutions: Managing the Object-Oriented Project by Grady Booch

Interface Segregation Principle(ISP)


Interface Segregation Principle

The Interface Segregation Principle(ISP) deals with the disadvantages of “fat” interfaces.

The Interface Segregation Principle(ISP) states that no client code object should be forced to depend on methods it does not use. Basically, each code object should only implement what it needs, and not be required to implement anything else.

The ISP was first used and formulated by Robert C. Martin while consulting for Xerox.

I will tackle today’s principle using an example from MSDN:

using System; 
using System.Collections.Generic; 
using System.Linq; 
using System.Text; 
 
namespace SOLIDPrinciplesDemo 
{ 
    // Only common generic methods exists for all derived classes. 
    interface IDataProvider 
    { 
        int OpenConnection(); 
        int CloseConnection(); 
    } 
    // Implement methods specific to the respective derived class 
    interface ISqlDataProvider : IDataProvider 
    { 
        int ExecuteSqlCommand(); 
    } 
    // Implement methods specific to the respective derived class 
    interface IOracleDataProvider : IDataProvider 
    { 
        int ExecuteOracleCommand(); 
    } 
    // Client 1 
    // Should not force the SqlDataProvider client to implement ExecuteOracleCommand, as it won't require that method. 
    // So we derive from ISqlDataProvider, not IOracleDataProvider 
    class SqlDataClient : ISqlDataProvider 
    { 
        public int OpenConnection() 
        { 
            Console.WriteLine("\nSql Connection opened successfully"); 
            return 1; 
        } 
        public int CloseConnection() 
        { 
            Console.WriteLine("Sql Connection closed successfully"); 
            return 1; 
        } 
 
        // Implementing ISqlDataProvider, we are not forcing the client to implement IOracleDataProvider 
        public int ExecuteSqlCommand() 
        { 
            Console.WriteLine("Sql Server specific Command Executed successfully"); 
            return 1; 
        } 
    } 
    // Client 2 
    // Should not force the OracleDataProvider client to implement ExecuteSqlCommand, as it won't require that method. 
    // So we derive from IOracleDataProvider, not ISqlDataProvider 
    class OracleDataClient : IOracleDataProvider 
    { 
        public int OpenConnection() 
        { 
            Console.WriteLine("\nOracle Connection opened successfully"); 
            return 1; 
        } 
        public int CloseConnection() 
        { 
            Console.WriteLine("Oracle Connection closed successfully"); 
            return 1; 
        } 
        // Implementing IOracleDataProvider, we are not forcing the client to implement ISqlDataProvider 
        public int ExecuteOracleCommand() 
        { 
            Console.WriteLine("Oracle specific Command Executed successfully"); 
            return 1; 
        } 
    } 
    class InterfaceSegregationPrincipleDemo 
    { 
        public static void ISPDemo() 
        { 
            Console.WriteLine("\n\nInterface Segregation Principle Demo "); 
 
            // Each client implements only its respective methods; no interface forces a client to implement methods it doesn't require. 
            // From the above implementation, we are not forcing the Sql client to implement Oracle logic or the Oracle client to implement Sql logic. 
 
            ISqlDataProvider SqlDataProviderObject = new SqlDataClient(); 
            SqlDataProviderObject.OpenConnection(); 
            SqlDataProviderObject.ExecuteSqlCommand(); 
            SqlDataProviderObject.CloseConnection(); 
 
            IOracleDataProvider OracleDataProviderObject = new OracleDataClient(); 
            OracleDataProviderObject.OpenConnection(); 
            OracleDataProviderObject.ExecuteOracleCommand(); 
            OracleDataProviderObject.CloseConnection(); 
        } 
    } 
}

Advantages of Using ISP:

  1. Better Understandability
  2. Better Maintainability
  3. High Cohesion
  4. Low Coupling

Limitations

ISP, like any other principle, should be used intelligently and only when necessary; otherwise it will result in code containing lots of interfaces with a single method each.

Additional Resources

1. Agile Software Development: Principles, Patterns, and Practices – Robert C. Martin

2. SOLID Principles of Object Oriented Design – Steve Smith (Pluralsight)

3. Principles of OOD

That’s it, folks, for this week. I hope you learnt something from today’s topic, the Interface Segregation Principle (ISP), the “I” in the S.O.L.I.D acronym. I will be concluding our series in the next post, covering the Dependency Inversion Principle.

Liskov Substitution Principle

If you’ve been following my blog, you’ll know that we have been covering the SOLID Principles. In my previous posts I covered the Single Responsibility Principle, the “S” in the SOLID acronym, and then the Open Closed Principle. Today my focus will be on the Liskov Substitution Principle, the “L” in the SOLID acronym.

What is LSP?

This principle is credited to Barbara Liskov, who formulated it in 1988:

“What is wanted here is something like the following substitution property: If for each object o1
of type S there is an object o2 of type T such that for all programs P defined in terms of T, the
behaviour of P is unchanged when o1 is substituted for o2 then S is a sub-type of T.”

In simple terms the above statement means:

“Subtypes must be substitutable for their base types.”

or more simply:

“It means that we must make sure that new derived classes are extending the base classes without changing their behaviour” 

A Simple Example Using C#

We’ll use the classic Rectangle-Square problem to demonstrate this principle. Let’s imagine that we need to find the area of any rectangle.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SOLIDPrinciplesDemo
{
    // If any module is using a base class, then the reference to that base class can be
    // replaced with a derived class without affecting the functionality of the module.
    // Or: while implementing derived classes, one needs to ensure that derived classes
    // just extend the functionality of base classes without replacing it.
    class Rectangle
    {
        protected int mWidth = 0;
        protected int mHeight = 0;

        public virtual void SetWidth(int width)
        {
            mWidth = width;
        }

        public virtual void SetHeight(int height)
        {
            mHeight = height;
        }

        public virtual int GetArea()
        {
            return mWidth * mHeight;
        }
    }

    // While implementing a derived class, if one replaces the functionality of the base class,
    // it might result in undesired side effects when such derived classes are used in existing modules.
    class Square : Rectangle
    {
        // This class modifies the base class functionality instead of extending it.
        // The method implementations below impact the base class behaviour.
        public override void SetWidth(int width)
        {
            mWidth = width;
            mHeight = width;
        }

        public override void SetHeight(int height)
        {
            mWidth = height;
            mHeight = height;
        }
    }

    class LiskovSubstitutionPrincipleDemo
    {
        private static Rectangle CreateInstance()
        {
            // As per the Liskov Substitution Principle, "derived types must be
            // completely substitutable for their base types".
            bool SomeCondition = false;
            if (SomeCondition == true)
            {
                return new Rectangle();
            }
            else
            {
                return new Square();
            }
        }

        public static void LSPDemo()
        {
            Console.WriteLine("\n\nLiskov Substitution Principle Demo ");
            Rectangle RectangleObject = CreateInstance();
            // The user assumes that RectangleObject is a rectangle and sets the
            // width and height as for the base class.
            RectangleObject.SetWidth(5);
            RectangleObject.SetHeight(10);
            // This results in an area of 100 (10 * 10) instead of 50 (10 * 5).
            Console.WriteLine("Liskov Substitution Principle has been violated and returned wrong result : " + RectangleObject.GetArea());
            // Subclasses should extend functionality; they should not impact base class behaviour.
        }
    }
}

I know the example above doesn’t exhaust the LSP, but I’m sure it gave you an idea of what the principle stands for.

Benefits

This principle aims to keep functionality intact. The main purpose is to guarantee that objects lower in a relational hierarchy can be treated as though they are objects higher in the hierarchy. Basically, any child class should be able to do anything the parent can do.
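One common way to repair the Rectangle/Square example is to drop the Square-is-a-Rectangle inheritance and derive both from an abstraction that promises only what every shape can honour. The classes below are my own illustrative sketch, not from the original example:

```csharp
using System;

// The abstraction promises only an area; it makes no claim about
// independently settable width and height.
public abstract class Shape
{
    public abstract int GetArea();
}

public class Rectangle : Shape
{
    private readonly int width, height;
    public Rectangle(int width, int height) { this.width = width; this.height = height; }
    public override int GetArea() { return width * height; }
}

// Square is no longer a Rectangle, so it cannot break Rectangle's contract.
public class Square : Shape
{
    private readonly int side;
    public Square(int side) { this.side = side; }
    public override int GetArea() { return side * side; }
}

public class ShapeDemo
{
    public static void Main()
    {
        Shape[] shapes = { new Rectangle(5, 10), new Square(10) };
        foreach (Shape s in shapes)
            Console.WriteLine(s.GetArea()); // either subtype substitutes safely
    }
}
```

Any code written against Shape now works identically with either subtype, which is precisely the substitutability the principle demands.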

Further Reading

1. “Data Abstraction and Hierarchy,” Barbara Liskov, SIGPLAN Notices, 23(5), May 1988

2. Bertrand Meyer, Object-Oriented Software Construction, 2nd ed., Prentice Hall, 1997

3. Rebecca Wirfs-Brock et al., Designing Object-Oriented Software, Prentice Hall, 1990

4. Agile Principles, Patterns, and Practices in C#, Robert C. Martin

That’s it for this week. Next I will be covering the Interface Segregation Principle (ISP), the letter “I” in the SOLID acronym.

****Happy Holidays and be safe****

The Open Closed Principle

Last week I covered the Single Responsibility Principle, which is part of the SOLID Principles. This week I will continue and cover the next principle, the Open/Closed Principle.

What is The Open Closed Principle?

Robert “Uncle Bob” Martin defines it like this:

The open/closed principle states that “software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification”; that is, such an entity can allow its behaviour to be extended without modifying its source code.

Simply stated, this means that we should not have to change our code every time a requirement or a rule changes. Our code should be extensible enough that we can write it once and not have to change it later just because we have new functionality or a change to a requirement.

Imagine what life would be like if every new feature or requirement resulted in changes to existing source code: the code would have to be retested again and again, which would be tedious, not to mention boring.

*Code Example

Let me illustrate with a code example.

Background

We were working on a project where we were using Oracle as the primary repository, until one fine day our technical lead came up and announced that our application should support SQL Server as another data source.

Now, our data access layer had a DBConnection class with a Connect method strongly bound to an Oracle provider for establishing a connection.

I have left out the detailed implementation to keep things simple.

Initial Code

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SOLID
{
    class DBConnection
    {
        public void Connect(OracleProvider ora)
        {
            Console.WriteLine("Connected to oracle data source");
        }
    }

    class OracleProvider
    {
        string _providername;
    }
}

As we were in a hurry to deliver, we could only come up with this design for the existing DBConnection class so that it now supports a SQL provider too.

After Refactoring

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SOLID
{
    class DBConnection
    {
        public void Connect(object o)
        {
            if (o is OracleProvider)
            {
                Console.WriteLine("Connected to oracle data source");
            }
            else if (o is SQlProvider)
            {
                Console.WriteLine("Connected to sql data source");
            }
        }
    }

    class OracleProvider
    {
        string _providername;
    }

    class SQlProvider
    {
        string _providername;
    }
}

OMG ! We designed something wrong …

As you can see from the code above, we are modifying the DBConnection class by adding if/else statements rather than extending it, which simply goes against the OCP.

The reason is simple: if tomorrow another provider is introduced, we again have to make changes to the DBConnection class.

Ultimate refactoring …

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SOLID
{
    interface IDBConnection
    {
        void Connect();
    }

    class DBConnection
    {
        public void Connect(IDBConnection con)
        {
            con.Connect();
        }
    }

    class OracleProvider : IDBConnection
    {
        string _providername;

        public void Connect()
        {
            Console.WriteLine("Connected to oracle data source");
        }
    }

    class SQlProvider : IDBConnection
    {
        string _providername;

        public void Connect()
        {
            Console.WriteLine("Connected to sql data source");
        }
    }
}

Finally, the main method:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SOLID
{
    class Program
    {
        static void Main(string[] args)
        {
            // Sql connection
            IDBConnection conSQl = new SQlProvider();
            conSQl.Connect();

            // Oracle connection
            //IDBConnection conOra = new OracleProvider();
            //conOra.Connect();
        }
    }
}

The design we came up with above seems to adhere to the OCP.
Now, if tomorrow we have to support another data source, we can easily do so by simply implementing IDBConnection, with no need to make any changes to DBConnection.

So the entities are now open for extension and closed for modification.
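To illustrate the payoff, here is a sketch of that extension step (PostgresProvider is a hypothetical name of my own, not from the original example): supporting a new data source only means adding a new class that implements IDBConnection; the existing DBConnection class is left untouched.

```csharp
using System;

// The abstraction and consumer, exactly as in the refactored design.
interface IDBConnection
{
    void Connect();
}

class DBConnection
{
    public void Connect(IDBConnection con)
    {
        con.Connect();
    }
}

// The extension: a brand-new provider class. Nothing above this line changes.
class PostgresProvider : IDBConnection
{
    public void Connect()
    {
        Console.WriteLine("Connected to postgres data source");
    }
}

class Program
{
    static void Main()
    {
        // The existing DBConnection works with the new provider as-is.
        new DBConnection().Connect(new PostgresProvider());
    }
}
```

Only the new class needs to be tested; DBConnection and the existing providers ship unchanged, which is the whole point of being closed for modification.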

That’s it for this week. Next I will be looking at the Liskov Substitution Principle, the letter “L” in the SOLID acronym.

*Code example adapted from an article on CodeProject

Design Principles To Increase Your Productivity As A Developer

Today I will tackle a topic that new-to-intermediate software developers struggle with: design principles. If you master them, they will greatly improve your productivity as a software developer.

I know that in some large organisations there is even a person responsible for enforcing the design principles and ensuring they are applied during software development.

In the following weeks I will be covering the SOLID principles, one or two per week.

First, let me define what the SOLID principles are. SOLID is an acronym for:

S - Single Responsibility Principle (SRP)

O - Open/Closed Principle (OCP)

L - Liskov Substitution Principle (LSP)

I - Interface Segregation Principle (ISP)

D - Dependency Inversion Principle (DIP)


THE SINGLE RESPONSIBILITY PRINCIPLE

[Image: a multi-tool pocket knife, illustrating the Single Responsibility Principle]

Robert “Uncle Bob” Martin, in his book Agile Principles, Patterns, and Practices in C#, states:

“A class should have only one reason to change.”

Just as in life, if you focus on too many things you will fail at most of them. In the same way, if a class has more than one responsibility, the responsibilities become coupled. Changes to one responsibility may impair or inhibit the class’s ability to meet the others. This kind of coupling leads to fragile designs that break in unexpected ways when changed.

As in the image above, the pocket knife is a super-knife: it can do almost everything. But in the process of adding all the components that make it a super-knife, it has lost its real purpose, which is being a pocket knife.

In the same way, you can overload a class with so much functionality that it loses its identity.

Let me illustrate with a C# code example.

The following C# example shows a class named “RectangleShape” that implements two methods: one that calculates the rectangle’s area and one that draws the rectangle. When the area calculation changes for some reason, or the drawing method has to change (for example, another fill colour is used), the whole class is subject to change. Also, if the properties are altered, both methods are affected. After a code change, the class must be tested as a whole again. There is clearly more than one reason to change this class.

Problem

using System.Drawing;
using System.Windows.Forms;

/// <summary>
/// Class calculates the area and can also draw it on a Windows Forms object.
/// </summary>
public class RectangleShape
{
    public int Height { get; set; }
    public int Width { get; set; }

    public int Area()
    {
        return Width * Height;
    }

    public void Draw(Form form)
    {
        SolidBrush myBrush = new SolidBrush(System.Drawing.Color.Red);
        Graphics formGraphics = form.CreateGraphics();
        formGraphics.FillRectangle(myBrush, new Rectangle(0, 0, Width, Height));
    }
}

Typically, the above class is used by consuming client classes like these:

/// <summary>
/// Consumes the RectangleShape.
/// </summary>
public class GeometricsCalculator
{
    public void CalculateArea(RectangleShape rectangleShape)
    {
        int area = rectangleShape.Area();
    }
}

/// <summary>
/// Consumes the RectangleShape.
/// </summary>
public class GraphicsManager
{
    public Form form { get; set; }

    public void DrawOnScreen(RectangleShape rectangleShape)
    {
        rectangleShape.Draw(form);
    }
}

Solution

The next classes show how to separate the different responsibilities. Basic coding is used, not taking the other SOLID principles into account; it just shows how to deal with the SRP. The RectangleDraw class now consumes a RectangleShape instance and a Form object.

/// <summary>
/// Class calculates the rectangle's area.
/// </summary>
public class RectangleShape
{
    public int Height { get; set; }
    public int Width { get; set; }

    public int Area()
    {
        return Width * Height;
    }
}

/// <summary>
/// Class draws a rectangle on a Windows Forms object.
/// </summary>
public class RectangleDraw
{
    public void Draw(Form form, RectangleShape rectangleShape)
    {
        SolidBrush myBrush = new SolidBrush(System.Drawing.Color.Red);
        Graphics formGraphics = form.CreateGraphics();
        formGraphics.FillRectangle(myBrush,
            new Rectangle(0, 0, rectangleShape.Width, rectangleShape.Height));
    }
}

The following code shows how to consume both classes:

/// <summary>
/// Consumes the RectangleShape.
/// </summary>
public class GeometricsCalculator
{
    public void CalculateArea(RectangleShape rectangleShape)
    {
        int area = rectangleShape.Area();
    }
}

/// <summary>
/// Consumes the RectangleDraw and RectangleShape.
/// </summary>
public class GraphicsManager
{
    public Form form { get; set; }

    public void DrawOnScreen(RectangleDraw rectangleDraw, RectangleShape rectangleShape)
    {
        rectangleDraw.Draw(form, rectangleShape);
    }
}

Before I wrap up for today, let me point out the advantages.

The major benefits of this pattern are:

  • Reduced code complexity, because each class is more explicit and straightforward

  • Loose coupling

  • Improved readability
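As a small illustration of those benefits (my own sketch, not from the original article): once drawing is split out, RectangleShape no longer depends on Windows Forms, so its area logic can be exercised in a plain console program with no Form or Graphics object at all.

```csharp
using System;

// After the SRP split, this class has a single responsibility: area calculation.
// It can be compiled and run with no UI framework present.
public class RectangleShape
{
    public int Height { get; set; }
    public int Width { get; set; }

    public int Area()
    {
        return Width * Height;
    }
}

public static class Program
{
    public static void Main()
    {
        var rect = new RectangleShape { Width = 4, Height = 6 };
        Console.WriteLine(rect.Area()); // 24 -- no Form, no Graphics needed
    }
}
```

In the original coupled class, checking the same logic would have dragged in a Form and a Graphics context; the loose coupling in the benefits list above is what makes this isolation possible.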