The complexity of an algorithm describes how fast or slow that algorithm solves a given problem.

- It can also be defined as a measure of the amount of time and/or space an algorithm requires to solve a problem for an input of size n.
- Complexity is usually denoted by a numerical function T(n), where T(n) gives the running time as a function of the input size n.
- A given algorithm may take different amounts of time on inputs of the same size, because the actual time depends on factors such as processor speed, instruction set, disk speed, and the compiler.

## We calculate the complexity of an algorithm in three scenarios

### 1). Worst Case

It denotes the maximum running time an algorithm can take over all inputs of size n. Here we count only the number of times the principal activity of the algorithm is performed.

### 2). Best Case

In the best case, we look at specific instances of input of size n for which the algorithm does the least work. For example, suppose we want to sort a list and the input list is already sorted; the algorithm then exhibits its best-case behavior.

### 3). Average Case

It is the most useful measure of an algorithm's complexity. The worst-case and best-case behaviors are generally rare, so we are usually more concerned with how the algorithm performs on typical inputs.
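As an illustration of all three cases (a sketch, not from the text): a simple linear search shows the distinction directly. The `linear_search` helper below is hypothetical; it also counts comparisons so each case can be observed.

```c
// Hypothetical helper: linear search that also counts comparisons,
// so best-, worst-, and average-case behavior can be observed.
static int linear_search(const int *a, int n, int target, int *comparisons) {
    *comparisons = 0;
    for (int i = 0; i < n; i++) {
        (*comparisons)++;          // the "principal activity" being counted
        if (a[i] == target)
            return i;              // found early: best case if i == 0
    }
    return -1;                     // not found: worst case, n comparisons
}
```

If the target is the first element, only one comparison is made (best case); if it is absent, all n elements are examined (worst case); averaged over all positions, roughly n/2 comparisons are needed (average case).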

## Types of complexity

- **Time complexity**
- **Space complexity**

### 1. Time complexity

The time complexity of an algorithm gives the total amount of time taken by the program to complete its execution.

- Time complexity is commonly expressed using Big O asymptotic notation.
- It is estimated by counting the number of principal activities or elementary steps an algorithm performs to finish execution.

**Let’s take an example to understand time complexity:**

For any problem, there may be many possible solutions, but our main aim is to choose one that is efficient. Our solution should produce the result in as little time as possible.

We can write many programs for a single problem. Let's see the example below to understand this concept. Below are **two different algorithms to find the square of a number:**

**Example 1:**

```
result = 0
for i = 1 to n
    do result = result + n
// when the loop ends, result holds the square of n
return result
```

In the above solution, we calculate the square using a "for" loop. The loop runs n times, and on each iteration it adds n to the running total, so after n iterations the total is n × n.

Because the loop runs n times, the time complexity in this case is O(n): as the value of n increases, the time taken by the program to complete execution also increases.
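The loop-based approach can also be written as a runnable C sketch (the name `square_loop` is ours, and it assumes n is non-negative):

```c
// Square of n by repeated addition: the loop body runs n times,
// so the running time grows linearly with n -- O(n).
int square_loop(int n) {          // assumes n >= 0
    int result = 0;
    for (int i = 1; i <= n; i++)
        result += n;              // add n to the total, n times
    return result;
}
```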

**Example 2:**

`return n*n;`

In the above solution, the mathematical operator "*" is used to calculate the square of the number. This takes only constant time, so its time complexity is constant, O(1).

So of the above two solutions, the second one is better.

The performance of an algorithm may vary with different types of input data, so for time complexity analysis we usually analyze the worst case, because worst-case analysis gives the maximum amount of time an algorithm can take to complete execution for an input of size n.

### 2. Space Complexity

Space complexity analysis calculates the total amount of memory an algorithm needs to complete execution and produce the desired result for an input of size n. The memory occupied by the input itself is also counted in this analysis.

In other words, space complexity refers to the amount of memory an algorithm needs in relation to the size of its input.

An algorithm takes memory in different forms:

1). Variables (including constant values and temporary values)

2). Program instructions

3). Execution (e.g., the call stack)

**Example 1:**

**Iterative Fibonacci Series (Low Space Complexity)**

```
int fibonacci_iterative(int n) {
    if (n <= 1) {
        return n;
    } else {
        int a = 0, b = 1, c, i;
        for (i = 2; i <= n; i++) {
            c = a + b;
            a = b;
            b = c;
        }
        return b;
    }
}
```

- **Constant Variables**: The function uses a fixed number of variables (`a`, `b`, `c`, and `i`).
- **No Recursion or Dynamic Allocation**: There are no recursive calls or dynamic memory allocations.
- **Space Complexity**: The space required does not depend on the input `n`. It remains constant, leading to a space complexity of O(1), which is considered highly efficient in terms of space usage.

**Example 2:**

**Space Complexity of Recursive Fibonacci Series**

```
int fibonacci_recursive(int n) {
    if (n <= 1) {
        return n;
    } else {
        return fibonacci_recursive(n - 1) + fibonacci_recursive(n - 2);
    }
}
```

- **Call Stack Growth**: Each recursive call adds a frame to the call stack. Since the function calls itself twice for each non-base case, the number of calls grows exponentially with `n`. However, the maximum depth of the call stack at any given time is `n`.
- **No Additional Memory Usage**: Aside from the call stack, the function does not use additional memory (like arrays or dynamically allocated structures).
- **Space Complexity**: The space complexity is O(n) due to the call stack. For large `n`, this can lead to substantial memory usage, making it less space-efficient than the iterative approach.
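The claim that the stack depth stays bounded by n, even though the total number of calls grows exponentially, can be checked with a small instrumented sketch (the `fib_depth` helper and its depth counters are hypothetical, added only for illustration):

```c
static int depth = 0, max_depth = 0;  // instrumentation, not part of the algorithm

// Same recursive Fibonacci, but tracking the current and maximum
// call-stack depth so the O(n) space bound can be observed.
int fib_depth(int n) {
    depth++;                          // entering one more stack frame
    if (depth > max_depth)
        max_depth = depth;
    int result = (n <= 1) ? n : fib_depth(n - 1) + fib_depth(n - 2);
    depth--;                          // leaving the frame
    return result;
}
```

For `fib_depth(10)` the function is called many times, but `max_depth` never exceeds 10, matching the O(n) bound on stack usage.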

## Auxiliary Space

Auxiliary space is the extra or temporary space used by an algorithm during its execution.
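As a sketch of the idea (the `reverse_in_place` helper is ours, not from the text): reversing an array in place needs only a constant amount of extra memory, so its auxiliary space is O(1), whereas copying the elements into a new array would need O(n) auxiliary space.

```c
// In-place reversal: auxiliary space is O(1) -- only two indices and
// one temporary variable are used, no matter how long the array is.
void reverse_in_place(int *a, int n) {
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}
```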

### Various kinds of memory used by a program during its execution

- **Instruction Space:** Used by the program to store its compiled instructions in memory.
- **Environmental Stack:** Used to store return addresses when one module or function calls another during execution.
- **Data Space:** Used to store the data, variables, and constants held by the program; it is updated during execution.