My Commitment To #100DaysOfCode

If you move in software development circles, I’m sure you have heard of #100DaysOfCode. The premise is to code for at least an hour every day.

History of #100DaysOfCode

#100DaysOfCode was started by Alexander Kallaway, a self-taught web developer who is absolutely passionate about coding and the future of technology.

He started the initiative after becoming frustrated with his slow progress in learning to code. He needed a way to force himself to get onto the right track and stay there.

My Reasons For The Commitment

  1. I want to become a better programmer, but life gets busy and I don’t practice enough. I always find other things to do rather than code.
  2. I’m taking the challenge to prepare for my certification exam, 70-483: Programming in C#.
  3. I want to create an online portfolio.

My Commitment

Alexander Kallaway outlines these things that one commits to:

  1. I will code for at least an hour every day.
  2. I will tweet my progress every day, with the hashtag #100DaysOfCode and note which day of the challenge I’m on.
  3. If I code as part of my job, I will not count that time towards the challenge.
  4. I will only count the days where I spend at least some of my time building projects — not the days where I spend all my coding time working through lessons and tutorials.
  5. I will encourage and support at least two people each day in the #100DaysOfCode challenge on Twitter.
  6. I will only skip a day if something important comes up, and when I resume, I won’t count the day I skipped as one of my 100 days.

On top of the above, I commit to:

  1. Making a review post of my progress every week.

That’s my commitment and I’m sticking to it.


Mastery: Learn Anything Quickly Using the Feynman Technique

We are all faced with the task of keeping our skills up to date, as there is always new technology to learn.

Mastering new concepts can be overwhelming. To succeed and keep up with the ever-changing skills climate, one has to develop strategies for learning quickly.

 

What is The Feynman Technique?

The Feynman Technique is a mental model named after Richard Feynman, a Nobel Prize-winning physicist. Feynman was revered for his ability to clearly illustrate dense topics like quantum physics for virtually anybody.

Feynman came up with what he termed a four-step process to master any concept:

The Four Step Process

1) Pick a topic you want to understand and start studying it.

This is the most important step. Most people don’t master anything because they fail to define what they want to master.

Feynman advises you to write down everything you know about the topic on a notebook page, and add to that page every time you learn something new about it.

2) Teach it to a child

This step means that you must be able to break down the concept to an elementary level. You should be able to explain it in layman’s terms so that a person who has never encountered the concept can understand it.

3) Identify your knowledge gaps

As you progressively study, take note of areas where you are struggling. It is pointless to press ahead when you are not understanding anything.

Revisit your books, video tutorials or any other study material and fill in those knowledge gaps.

4) Simplify and use analogies

Repeat the process while simplifying your language and connecting facts with analogies to help strengthen your understanding.

“Repetition is the mother of learning, the father of action, which makes it the architect of accomplishment.”

As Feynman illustrates in his mental model, learning can be a lifelong pursuit. This technique is designed to help you study for exams and learn new subjects, but it can easily be adapted for deep work. Dedicating a notebook to growing your knowledge and evolving your ideas provides inspiration to keep following a path of ongoing learning, which is critical to deeper, meaningful work.

 

Free Resources to Learn Software Development

Software Development is a field with a lot of opportunities and an acute shortage of people to fill the vacancies.

The cost of formal training as a software developer can be prohibitively high.

Below is a list of a few online resources to up-skill as a developer without breaking the bank.

1. YouTube

YouTube is a great place to start whenever you want to learn anything.

There are great channels for learning programming and all aspects of software development, be it web development, desktop applications or mobile applications.

2. Coursera

Coursera.org is a great resource. You can learn anything from web development to data structures and data science, taught by great schools like Stanford, Johns Hopkins University, etc.

3. Harvard CS50

To master computer science concepts, I highly recommend CS50. The free Harvard course can be found on platforms like YouTube, Coursera and the course website at no charge.

Additional Resources

To keep the post within a reasonable length, I will simply list a few more resources worth checking out. I have included code-practice sites as well.

1. edX

2. Microsoft Virtual Academy

3. Stanford University • Free Online Courses and MOOCs

4. Open University

5. TopCoder

6. HackerEarth

7. Codecademy

8. Code School

9. Code.org

10. Bento

 

Feel free to suggest more links in the comments below.

Happy coding!

How to Grow As A Self-taught Software Developer

With such high demand for software developers comes a rise in self-taught developers.

Most self-taught developers feel inadequate compared to graduate developers.

So today I will point out a few things you can do to move past those inadequacies and become a strong developer.

1. Have a Growth Plan

You can’t hit a target you cannot see, and you cannot see a target you do not have.

— Zig Ziglar

To grow you need a plan.

Know who you are and your limitations, then develop a growth plan with clear goals, action plans and time frames.

2. Master Computer Science (CS) Concepts

Mastering CS concepts makes a developer see the big picture of why we code the way we do and become an efficient programmer.

Master concepts like data structures, algorithms and recursion.

Most self-taught developers struggle with such concepts, which stops them from writing elegant code that is easy to read and runs faster.

3. Read Documentation

Most of us developers learn through channels like YouTube and Stack Overflow. To truly understand your tech stack, read the documentation.

Documentation is not as easy to digest, but remember that with any growth it’s hard before it becomes easy. Keep at it; eventually you will get the hang of it and start reaping the rewards.

4. Find a Code Mentor

Every master was once a protégé. To perfect your craft you need a mentor.

Your mentor doesn’t need to be a person you meet in the flesh every day. You can follow the writings of respected authorities in your field, e.g. Robert C. “Uncle Bob” Martin.

You can even ask your immediate supervisor to mentor you.

A great way to find mentors is by attending meetups in your area.

5. Embrace the Impostor Syndrome

I know that for most self-taught developers it feels like you are in over your head; you feel like an impostor because, at the back of your mind, you believe you are not qualified due to a lack of formal qualifications.

Instead, embrace that feeling and let it propel you to learn more and become a better developer.

6. Practice Deliberately

Imagine Cristiano Ronaldo practicing once a week, or only whenever he feels like it. Do you think he would be playing at the level he is currently playing at?

To be world-class you have to be willing to put in the time: practice those algorithms and puzzles, implement those data structures every day, and be deliberate about it.

It won’t serve you to keep doing the same stuff and playing at the same level; you don’t grow.

Use sites like TopCoder and HackerEarth to practice and compete.

7. Teach Others

To grasp any concept, teach it to another person, especially a non-technical person. If you can rephrase it in layman’s terms, you can be sure you have mastered it. Utilize local meetups and sign up to give a talk there.

If you liked the post, be sure to like, share and subscribe to the blog.

 

The Big O – Calculations

This is a follow-up to my previous post, where I explained a bit about the concept of Big O.

In this post I will expand further and show you how to calculate Big O.

Tedious Way of Calculating

The common way of calculating Big O is counting the number of steps the algorithm takes to complete a task.

This is a tedious exercise. If your input is 10, it is easy to count the steps with such a small number; but if your input is 1 million, it becomes practically impossible.

Example

void printUnOrderedPairs(int[] array)
{
    for (int i = 0; i < array.Length; i++)
    {
        for (int j = i + 1; j < array.Length; j++)
        {
            Console.WriteLine(array[i] + "," + array[j]);
        }
    }
}

 

Counting the iterations

The first time through, j runs for N-1 steps. The second time, it’s N-2 steps, then N-3 steps, and so on. Therefore, the total number of steps is:

(N-1) + (N-2) + (N-3) + … + 2 + 1
= 1 + 2 + 3 + … + (N-1)
= the sum of 1 through N-1

The sum of 1 through N-1 is N(N-1)/2, so the runtime is O(N²).
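
To see the formula in action, here is a small C# sketch (my own illustration, not part of the original example; the names are assumptions) that replaces the print statement with a counter and compares the counted steps against N(N-1)/2:

using System;

class PairCountDemo
{
    static long CountUnorderedPairSteps(int n)
    {
        long steps = 0;
        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                steps++; // one step per pair that would be printed
            }
        }
        return steps;
    }

    static void Main()
    {
        foreach (int n in new[] { 8, 100, 1000 })
        {
            long actual = CountUnorderedPairSteps(n);
            long predicted = (long)n * (n - 1) / 2; // N(N-1)/2
            Console.WriteLine($"N = {n}: counted {actual}, formula gives {predicted}");
        }
    }
}

For every N the two numbers match exactly, which is what the O(N²) result summarizes.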

What It Means

Alternatively, we can figure out the runtime by thinking about what the code “means”: it iterates through each pair of values (i, j) where j is bigger than i. There are N² total pairs. Roughly half of those will have i < j and the remaining half will have i > j. This code goes through roughly N²/2 pairs, so it does O(N²) work.
Visualizing What It Does

The code iterates through the following (i, j) pairs when N = 8:

(0, 1) (0, 2) (0, 3) (0, 4) (0, 5) (0, 6) (0, 7)
(1, 2) (1, 3) (1, 4) (1, 5) (1, 6) (1, 7)
(2, 3) (2, 4) (2, 5) (2, 6) (2, 7)
(3, 4) (3, 5) (3, 6) (3, 7)
(4, 5) (4, 6) (4, 7)
(5, 6) (5, 7)
(6, 7)

This looks like half of an N×N matrix, which has size (roughly) N²/2. Therefore, it takes O(N²) time.

(Adapted from “Cracking The Coding Interview“)

The Easy Way of Calculating Big O

The easier way of calculating Big O is to use a principle called the “Multiplication Rule”, which states:

“The total running time of a statement inside a group of nested loops is the running time of the statements multiplied by the product of the sizes of all the for loops. We need only look at the maximum potential number of repetitions of each nested loop.”

Applying the Multiplication Rule to the above example we get:

The inner for loop potentially executes up to n times and the outer for loop executes n times. The print statement inside the inner loop therefore executes no more than n * n times, so by the Multiplication Rule the Big O notation of the above example is O(n²).

More Examples

//Example 2
for (int i = 0; i < n*n; i++)
    sum++;

for (int i = 0; i < n; i++)
    for (int j = 0; j < n*n; j++)
        sum++;

We consider only the number of times the sum++ expression within each loop is executed. The first sum++ expression is executed n² times, while the second sum++ expression is executed n * n² = n³ times. Thus the running time is approximately n² + n³. As we only consider the dominating term, the Big O notation of this fragment is O(n³).

Note that there were two sequential fragments, therefore we added n² to n³. This illustrates another rule one may apply when determining Big O: you can combine sequences of statements by using the Addition Rule, which states that the running time of a sequence of statements is just the maximum of the running times of the individual statements.
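
As a rough illustration of the Addition Rule (a sketch of my own; the variable names are assumptions, not from the original), the following C# snippet counts how often sum++ runs in each fragment of Example 2 and shows that the n³ term dominates:

using System;

class AdditionRuleDemo
{
    static void Main()
    {
        int n = 50;
        long first = 0, second = 0;

        // First fragment: a single loop bounded by n*n, so n^2 executions of sum++.
        for (int i = 0; i < n * n; i++)
            first++;

        // Second fragment: n iterations of an n*n inner loop, so n^3 executions.
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n * n; j++)
                second++;

        Console.WriteLine($"n^2 term: {first}, n^3 term: {second}, total: {first + second}");
        // The n^3 term dominates the total, which is why the fragment is O(n^3).
    }
}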

//Example 3
for (int i = 0; i < n; i++)
    for (int j = 0; j < n*n; j++)
        for (int k = 0; k < j; k++)
            sum++;

We note that we have three nested for loops: the innermost, the inner and the outer. The innermost for loop potentially executes n*n times (the maximum value of j will be n*n - 1), the inner loop executes n*n times, and the outer for loop executes n times. The statement sum++; therefore executes no more than n*n * n*n * n times, so the Big O notation of Example 3 is O(n⁵).

//Example 4
for (int i = 0; i < 210; i++)
    for (int j = 0; j < n; j++)
        sum++;

Note that the outer loop runs 210 times while the inner loop runs n times.

Hence the running time is 210 * n. The Big O notation of this fragment is O(n), since we ignore constants.

One More…

//Example 5
for (int i = 1; i < n; i++)
    for (int j = n; j > 1; j = j / 2)
        sum++;

The first loop runs n times while the second loop runs log n times. (Why? Suppose n is 32; then the inner sum++; is executed 5 times per outer iteration. Note that 2⁵ = 32, therefore log₂ 32 = 5. The running time of the inner loop is actually log₂ n, but we can ignore the base.) Hence, by the Multiplication Rule, the Big O notation of Example 5 is O(n log n).
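
Here is a quick C# sketch (my own illustration, not from the original post) that counts the iterations of Example 5 and compares the total with n * log₂(n):

using System;

class LogLoopDemo
{
    static void Main()
    {
        int n = 32;
        long total = 0;

        for (int i = 1; i < n; i++)            // runs n - 1 times
            for (int j = n; j > 1; j = j / 2)  // runs about log2(n) times
                total++;

        // For n = 32 the inner loop runs 5 times per outer pass, since 2^5 = 32.
        Console.WriteLine($"n = {n}: counted {total}, n * log2(n) = {n * Math.Log(n, 2)}");
    }
}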

 

Further Reading

Malik, D.S. 2003. Data Structures Using C++.

A Practical Introduction to Data Structures and Algorithm Analysis.

Introduction to Algorithms, 3rd Edition (MIT Press).

Cracking the Coding Interview: 189 Programming Questions and Solutions.

 

The Big O – Explained

Big O notation (with a capital letter O, not a zero), also called Landau’s symbol, is a symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines.

Landau’s symbol comes from the name of the German number theoretician Edmund Landau, who invented the notation. The letter O is used because the rate of growth of a function is also called its order.

What is The Big O ?

Big O is the language and the metric we use to describe the efficiency of algorithms. It is very important as a developer to have a good grasp of this concept, as it can either make or break you as a developer.

Not understanding Big O thoroughly can really hurt you when developing an algorithm. Not only might you be judged harshly for not really understanding Big O, but you will also struggle to judge when your algorithm is getting faster or slower.

Understanding Big O

How efficient is an algorithm or piece of code? Efficiency covers lots of resources, including:

  • CPU (time) usage
  • memory usage
  • disk usage
  • network usage

All are important, but we will mostly talk about time complexity (CPU usage) and space complexity (memory usage).

Be careful to differentiate between:

  1. Performance: how much time/memory/disk/… is actually used when a program is run. This depends on the machine, compiler, etc., as well as the code.
  2. Complexity: how do the resource requirements of a program or algorithm scale, i.e., what happens as the size of the problem being solved gets larger?

Complexity affects performance, but not the other way around.

The time required by a function/procedure is proportional to the number of “basic operations” that it performs.

Here are some examples of basic operations:

  • one arithmetic operation (e.g., +, *)
  • one assignment (e.g., x := 0)
  • one test (e.g., x = 0)
  • one read (of a primitive type: integer, float, character, boolean)
  • one write (of a primitive type: integer, float, character, boolean)

Some functions/procedures perform the same number of operations every time they are called. For example, StackSize in a Stack implementation always does the same amount of work to return the number of elements currently in the stack (or to state that the stack is empty), so we say that StackSize takes constant time (see the sketch below).

Other functions/procedures may perform different numbers of operations depending on the value of a parameter. For example, in the BubbleSort algorithm, the number of elements in the array determines the number of operations performed by the algorithm. This parameter (the number of elements) is called the problem size or input size.
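
As a minimal sketch (an assumed implementation; the original text does not show StackSize’s code), here is a stack in C# whose StackSize method does the same constant amount of work no matter how many elements it holds:

using System;
using System.Collections.Generic;

class SimpleStack<T>
{
    private readonly List<T> items = new List<T>();

    public void Push(T item) => items.Add(item);

    // StackSize just reads the stored count; it never walks the elements,
    // so its work does not grow with the number of items on the stack.
    public int StackSize() => items.Count;
}

class StackSizeDemo
{
    static void Main()
    {
        var stack = new SimpleStack<int>();
        for (int i = 0; i < 1_000_000; i++)
            stack.Push(i);

        Console.WriteLine(stack.StackSize()); // constant-time lookup: prints 1000000
    }
}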

Typically, we are interested in the worst case: the maximum number of operations that might be performed for a given problem size. For example, when inserting an element into an array, we have to move the current element and all of the elements that come after it one place to the right. In the worst case, inserting at the beginning of the array, all of the elements in the array must be moved. Therefore, in the worst case, the time for insertion is proportional to the number of elements in the array, and we say that the worst-case time for the insertion operation is linear in the number of elements. For a linear-time algorithm, if the problem size doubles, the number of operations also doubles. The sketch below illustrates the worst case.
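
Below is a rough C# sketch (my own example; the helper names are assumptions) of the worst case described above: inserting at the beginning of an array forces every existing element to shift one place to the right, so the work grows linearly with the number of elements.

using System;

class ArrayInsertDemo
{
    // Inserts value at index, shifting later elements one place to the right.
    // count is the number of elements currently in use; returns how many moved.
    static int Insert(int[] array, int count, int index, int value)
    {
        int shifts = 0;
        for (int i = count; i > index; i--)
        {
            array[i] = array[i - 1]; // move one element one place to the right
            shifts++;
        }
        array[index] = value;
        return shifts;
    }

    static void Main()
    {
        int[] data = new int[10];
        int count = 0;
        for (int i = 0; i < 9; i++)
            data[count++] = i;

        // Worst case: insert at the beginning - all 9 existing elements move.
        int moved = Insert(data, count, 0, 99);
        Console.WriteLine($"Elements shifted: {moved}"); // 9, i.e. linear in count
    }
}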

I will be explaining how to calculate Big O in my next post.

To be continued…

Software Development Life Cycle (SDLC) Pt 2

 

Feasibility study

This post is a continuation of my previous article.

2. Feasibility Study

A feasibility study is an assessment of the practicality of a proposed project or system. In simple terms, a feasibility study involves taking a judgment call on whether a project is doable. The criteria used to judge feasibility are the cost required and the value to be delivered.

A feasibility study evaluates the project’s potential for success. The perceived objectivity is an important factor in the credibility of the study for potential investors and lending institutions. [Source: Wikipedia]

There are some aspects that the feasibility study tackles that as developers we normally forget that they exist. One example is the legal implications of creating and deploying a system.

Five Areas of Project Feasibility:

  1. Technical Feasibility: Assessment is centered on the technical resources available to the organization. It helps organizations assess whether the technical resources meet capacity and whether the technical team is capable of converting the ideas into working systems. Technical feasibility also involves evaluating the hardware and software requirements of the proposed system.
  2. Economic Feasibility: It helps organizations assess the viability, cost, and benefits associated with projects before financial resources are allocated. It also serves as an independent project assessment, and enhances project credibility as a result. It helps decision-makers determine the positive economic benefits to the organization that the proposed system will provide, and helps quantify them. This assessment typically involves a cost/benefit analysis of the project.
  3. Legal Feasibility: Investigates whether the proposed system conflicts with legal requirements, such as data protection acts or social media laws.
  4. Operational Feasibility: It involves undertaking a study to analyze and determine whether your business needs can be fulfilled by using the proposed solution. It also measures how well the proposed system solves problems and takes advantage of the opportunities identified during scope definition. Operational feasibility studies also analyze how the project plan satisfies the requirements identified in the requirements analysis phase of system development. To ensure success, desired operational outcomes must inform and guide design and development. These include design-dependent parameters such as reliability, maintainability, supportability, usability, disposability, sustainability, affordability, and others.
  5. Scheduling Feasibility: This is the most important factor for project success; a project will fail if not completed on time. In scheduling feasibility, we estimate how much time the system will take to complete, using our technical skills and various methods of estimation. [Source: Simplilearn]

Benefits of Conducting a Feasibility Study

  1. Saves money on development hours that may be lost developing a product that may end up not making it to the market.
  2. Gives project teams more focus and outlines alternatives.
  3. Narrows the business alternatives.
  4. Identifies a valid reason to undertake the project.
  5. Enhances the success rate by evaluating multiple parameters.
  6. Aids decision-making on the project.

Next, I will cover system analysis and design.


Software Development Life Cycle (SDLC)


What Is A Software Development Life Cycle (SDLC)?

The Software Development Life Cycle, SDLC for short, is a well-defined, structured sequence of stages in software engineering used to develop the intended software product. It is a framework that describes the activities performed at each stage of a software development project.

SDLC has a series of steps to be followed to design and develop a software product efficiently. The framework includes the following steps:

  1. Requirements Gathering
  2. Feasibility study
  3. System analysis
  4. Software design
  5. Coding
  6. Testing
  7. Integration
  8. Implementation
  9. Operation and Maintenance
  10. Disposition

 

1. Requirements Gathering

Requirements gathering is an essential part of any project and project management. Understanding fully what a project will deliver is critical to its success. Requirements gathering sounds like common sense, but surprisingly, it’s an area that is given far too little attention.

Many projects start with the barest headline list of requirements, only to find later the customer’s needs have not been adequately understood.

10 Rules for Successful Requirements Gathering

To be successful in requirements gathering and to give your project an increased likelihood of success, follow these rules:

  1. Don’t assume you know what the customer wants – always ask.
  2. Involve the users from the start.
  3. Define and agree on the scope of the project.
  4. Ensure that the requirements are SMART: Specific, Measurable, Agreed upon, Realistic and Time-based.
  5. Gain clarity if there is any doubt.
  6. Create a clear, concise and thorough requirements document and share it with the customer.
  7. Confirm your understanding of the requirements alongside the customer (play them back).
  8. Avoid talking technology or solutions until the requirements are fully understood.
  9. Get the requirements agreed with the stakeholders before the project starts.
  10. Create a prototype, if necessary, to confirm or refine the customer’s requirements.

Common Mistakes

Be careful to avoid making these mistakes:

  • Basing a solution on complex or cutting-edge technology and then discovering that it cannot easily be rolled out in the ‘real world’.
  • Not prioritizing the requirements, for example, ‘must have’, ‘should have’, ‘could have’ and ‘would have’ – known as the MoSCoW principle.
  • Insufficient consultation with real users and practitioners.
  • Solving the ‘problem’ before you know what the problem is.
  • Lacking a clear understanding and making assumptions rather than asking.

Requirements gathering is about creating a clear, concise and agreed set of customer requirements that allow you to provide what the customer wants.

To be continued…

 

The 10,000 Hour Rule: Fact or ???


Does the 10,000 hour rule apply to developers?

The 10,000 hour rule is credited to Malcolm Gladwell, an English-born Canadian journalist, author and speaker.

The principle holds that 10,000 hours of “deliberate practice” are needed to become world-class in any field.

In his book, Outliers, he contends that Bill Gates became world class by putting in his 10,000 hours through interacting with a computer from an early age.

With regard to software developers and programmers, I beg to differ: there are a lot of bootcamps around which continually churn out programmers every six months or so.

That suggests a skill can be acquired in far less than the 10,000 hours recommended by Gladwell.

A recent Princeton study also tears down the theory:

In a meta-analysis of 88 studies on deliberate practice, the researchers found that practice accounted for just a 12% difference in performance in various domains.

What’s really surprising is how much it depends on the domain:

• In games, practice made for a 26% difference

• In music, it was a 21% difference

• In sports, an 18% difference

• In education, a 4% difference

• In professions, just a 1% difference

Good examples that challenge the theory are the case of Muhammad Hamza Shahzad, a boy aged 7 who became the world’s youngest qualified computer programmer, and that of Zora Ball, the youngest game programmer.

I know some would argue that those are special cases, but who isn’t a special case?

I believe such theories and rules are made to tie people down and make them question their abilities. I doubt there is a day, after five and a half years, when you wake up, feel you have put in your 10,000 hours and are ready to conquer the world.

My Reading List For 2017


Every year around my birthday I review the previous year to check how far I’ve come with respect to my goals and plans.
I also take time to create a reading list for the year ahead.
Today I will share some of the books that I will be devouring in 2017.
The books that interest me usually have to do with computer science, leadership and, of course, my faith, which is Christianity.
In this post I will focus on computer science and programming.

Computer Science/Programming

Algorithms


1. Introduction to Algorithms

This is my all-time favourite book when it comes to the study of algorithms and data structures.
I am a huge fan of algorithms and data structures. For any beginning programmer I would recommend this book anytime, as it covers the basics of algorithms, which are very important in programming and distinguish a good software developer from a great one.

2. Clean Code: A Handbook of Agile Software Craftsmanship

That’s another book I have read a couple of times. It keeps me grounded on how to construct software.
I am a huge fan of Robert C. Martin; I would pick up any book by him.

3. Clean Architecture: A Craftsman’s Guide to Software Structure and Design

Another book by Robert C. Martin; I’m sure it will be a great read. It’s due to be released later in the year.

4. Soft Skills: The Software Developer’s Life Manual

Every developer needs soft skills; that is one area where most developers struggle.
Sonmez covers that aspect of a developer’s life in great detail.
I have been following John’s blog for some time now, so I trust that his book will be a good read.

5. Code Complete: A Practical Handbook of Software Construction

 

That’s my reading list for 2017. If you want me to share my whole list for the other categories, just leave a comment in the comments section.
Keep coding!