**Ans:**

The efficiency of an algorithm describes the computational resources it uses and the time it takes to produce the desired result.

- An algorithm that solves a problem using fewer resources and in minimum time is said to be efficient.
- The efficiency of an algorithm is measured in terms of its resource usage and running time.
- The efficiency of an algorithm depends on its design, its implementation, and the resources and memory it requires for computation.
- Every algorithm uses computer resources to run, but execution time and internal memory are the most important factors to consider when analyzing an algorithm.

If an algorithm contains only a linear sequence of statements (no loops or recursion), its efficiency, or running time, is determined simply by the number of instructions it contains.

However, if an algorithm contains loops, its efficiency depends on the number of loops and the running time of each loop.


## Examples of determining the efficiency of an algorithm

**Linear Loop:**

In a linear loop, the loop-controlling variable is increased or decreased by some constant on each iteration, and the loop body executes as long as the given condition is satisfied.

For example, consider the loop below:

```c
for (i = 0; i < 50; i++) {
    /* some block of statements */
}
```

Here, 50 is the loop factor. The running time of a loop is directly proportional to its number of iterations.

Hence, for a linear loop, the general formula may be given as

f(n) = n

Let’s see another example:

```c
for (i = 0; i < 50; i += 2) {
    /* some block of statements */
}
```

Here, the number of iterations of the for loop is half the loop factor, so the efficiency can be given as f(n) = n/2.

**Logarithmic Loops:**

As we saw above, a loop is called a linear loop when the loop-controlling variable is incremented or decremented by some constant on each iteration.

However, in logarithmic loops, the loop-controlling variable is either multiplied or divided by some constant (greater than 1) on each iteration of the loop.

For example, consider the loop below:

```c
for (i = 1; i <= 500; i *= 2) {
    /* some block of statements */
}
```

In this for loop, the loop-controlling variable i is multiplied by 2 on each iteration until the condition fails. The loop executes only nine times, not 500 times, because the value of i doubles on each iteration.

```c
for (i = 500; i >= 1; i /= 2) {
    /* some block of statements */
}
```

In this for loop, the loop-controlling variable i is divided by 2 on each iteration until the condition fails. In this case too, the loop executes nine times, not 500 times, because the value of i halves on each iteration. Thus, the number of iterations is a function of the constant by which the loop-controlling variable is multiplied or divided.

In the above examples, the loop-controlling variable is multiplied or divided by 2, so the number of iterations is log₂ 500, which is approximately equal to 9.

## How to design an efficient algorithm?

In most cases, the inefficiency of an algorithm is caused by redundant computation or unnecessary use of storage.

**Let’s see an example:**

```c
a = 0;
for (i = 0; i <= n; i++) {
    a = a + 2;
    y = (x * x * x) + (a * a) + a;
    printf("%d", y);
}
```

In the example above, (x*x*x) is recalculated on every iteration of the loop even though its value never changes. We can avoid this redundant computation by precalculating it once, outside the loop:

```c
a = 0;
x1 = (x * x * x);
for (i = 0; i <= n; i++) {
    a = a + 2;
    y = x1 + (a * a) + a;
    printf("%d", y);
}
```