The efficiency of an algorithm describes the computational resources it uses and the time it takes to produce the desired result.
- An algorithm that solves a problem using fewer resources and in minimum time is known as an efficient algorithm.
- The efficiency of an algorithm is measured by its usage of different resources and by its running time.
- The efficiency of an algorithm depends on its design, its implementation, and the resources and memory it requires for computation.
- Every algorithm uses computer resources to run, but when analyzing an algorithm, execution time and internal memory are the essential resources to consider.
If an algorithm contains only linear statements (no loops or recursion), we calculate its efficiency or running time by counting the number of instructions it contains.
However, for an algorithm containing loops, the efficiency depends on the number of loops and the running time of each loop in the algorithm.
Below are examples showing how to determine the efficiency of an algorithm.
1. Linear Loop:
In a linear loop, the loop body executes while the loop-controlling variable increases or decreases by some constant on each iteration and the given condition is satisfied.
For example, consider the loop below:
Example 1:
for (i = 0; i < 50; i++) {
    some block of statements;
}
Here, 50 is the loop factor. The running time of a loop is directly proportional to the number of iterations of the loop.
Hence, for a linear loop, the general formula may be given as
f(n) = n
Let's see another example:
Example 2:
for (i = 0; i < 50; i += 2) {
    some block of statements;
}
Here, the number of iterations of the for loop is just half the loop factor, since i increases by 2 each time. So, the efficiency can be given as f(n) = n/2.
2. Logarithmic Loops:
Above we learned about linear loops, where the loop-controlling variable is incremented or decremented by adding or subtracting a constant on each iteration. In logarithmic loops, by contrast, the loop-controlling variable is multiplied or divided by a constant (other than 0 or 1) on each iteration of the loop.
For example, consider the loop below:
Example 1:
for (i = 1; i <= 500; i *= 2) {
    some block of statements;
}
In the above for loop, the loop-controlling variable i is multiplied by 2 on each iteration until the condition fails. Here i grows exponentially: it takes the values 1, 2, 4, 8, 16, 32, 64, 128, and 256; when it reaches 512, the condition i <= 500 fails and the loop exits. The loop body therefore runs only nine times, far fewer than the upper limit of 500, which is what makes logarithmic loops efficient.
Example 2:
for (i = 500; i >= 1; i /= 2) {
    some block of statements;
}
In this for loop, the loop-controlling variable i is divided by 2 on each iteration until the condition fails. Here, too, the loop executes nine times, not 500 times, because the value of i halves on each iteration. Thus, the number of iterations is a function of the constant by which the loop-controlling variable is multiplied or divided.
In both examples above, the loop-controlling variable is multiplied or divided by 2, so the number of iterations is log 500 base 2, which is approximately equal to 9.
How to design an efficient algorithm?
In most cases, the inefficiency of an algorithm is caused by redundant computation or unnecessary use of storage.
Let's see an example:
a = 0;
for (i = 0; i <= n; i++) {
    a = a + 2;
    y = (x * x * x) + (a * a) + a;
    printf("%d", y);
}
Here, (x*x*x) is recalculated on every iteration of the loop, even though x does not change inside it. We can avoid this by precomputing it once outside the loop, like:
a = 0;
x1 = (x * x * x);
for (i = 0; i <= n; i++) {
    a = a + 2;
    y = x1 + (a * a) + a;
    printf("%d", y);
}