Time complexity

Let P be a problem and M a method for solving it. An algorithm is a description of the method M, built from control structures and data, in a language that can be followed by any person or machine.
As a reminder, the control structures are: the sequence, the branch (or selection) and the loop (or iteration). The data structures are: constants, variables, arrays (ordered storage space) and recursive structures (lists, graphs, etc.).

Basic operation

Complexity analysis evaluates the efficiency of the method M and compares it with another method M'. This comparison is independent of the environment (machine, system, compiler, language, etc.): efficiency is measured by the number of elementary operations, which depends on both the size and the nature of the data.

Let n denote the size of the data and T(n) the number of elementary operations. Efficiency is evaluated in the best case, the worst case and the average case.

For a sequence structure, the cost is the sum of the costs of its parts. For example, if the algorithm performs a treatment of cost T1(n) followed by one of cost T2(n), then T(n) = T1(n) + T2(n).
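As an illustration, here is a minimal Python sketch (the function name sum_then_max is hypothetical): two successive linear passes over the same array give T(n) = n + n = O(n).

# Sketch: a sequence of two treatments, T(n) = T1(n) + T2(n).
# Here T1(n) = n (sum) and T2(n) = n (max), so T(n) = 2n = O(n).
def sum_then_max(tab):
    total = 0
    for x in tab:        # first pass: T1(n) = n operations
        total += x
    biggest = tab[0]
    for x in tab:        # second pass: T2(n) = n operations
        if x > biggest:
            biggest = x
    return total, biggest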

For a branch, the cost is bounded by the maximum cost of its branches. For example, if the algorithm executes a treatment of cost T1(n) when the condition holds and one of cost T2(n) otherwise, then T(n) = max(T1(n), T2(n)).
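A minimal sketch (the function first_or_total is hypothetical) where one branch is constant time and the other linear; the bound retained for the branch is the maximum, O(n):

# Sketch: a branch, T(n) = max(T1(n), T2(n)).
# The 'if' side is constant time, the 'else' side is linear,
# so the whole branch is bounded by max(1, n) = O(n).
def first_or_total(tab, take_first):
    if take_first:
        return tab[0]          # T1(n) = O(1)
    else:
        total = 0
        for x in tab:          # T2(n) = O(n)
            total += x
        return total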

For a loop, the cost is the sum of the costs of the successive passes. For example, if the algorithm is a while loop whose i-th pass performs a treatment of cost Ti(n), then T(n) = sum(i=1 to n) Ti(n).
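For instance, when the i-th pass itself costs i operations, T(n) = 1 + 2 + ... + n = n(n+1)/2 = O(n^2). A minimal sketch (the function quadratic_loop is hypothetical):

# Sketch: a loop whose i-th pass costs Ti(n) = i operations.
# T(n) = sum_{i=1..n} i = n(n+1)/2 = O(n^2).
def quadratic_loop(n):
    count = 0
    i = 1
    while i <= n:
        for j in range(i):   # the i-th pass does i elementary steps
            count += 1
        i += 1
    return count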

For recursive versions, I invite you to go to the corresponding page. Let C(n) be the cost of the treatment performed in a divide & conquer function that splits the problem into two halves. The complexity then satisfies the recurrence T(n) = 2*T(n/2) + C(n), which gives T(n) = O(n log n) when C(n) = n.
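Merge sort is the classic instance of this recurrence: each call splits the array in two and the merge step costs C(n) = n, hence T(n) = 2*T(n/2) + n = O(n log n). A minimal sketch:

# Sketch: divide & conquer with T(n) = 2*T(n/2) + C(n), where C(n) = n.
def merge_sort(tab):
    if len(tab) <= 1:                  # base case: O(1)
        return tab
    mid = len(tab) // 2
    left = merge_sort(tab[:mid])       # T(n/2)
    right = merge_sort(tab[mid:])      # T(n/2)
    # merge step: C(n) = n comparisons/copies
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]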

Landau’s notation

Landau’s notation characterizes the asymptotic behavior of a function, that is, the behavior of f(n) when n tends to infinity.

We say that f(n) is a Big O of g(n), written f(n) = O(g(n)), if \exists k>0, \exists n_0 \; \forall n>n_0 \; |f(n)| \leq |g(n)| \cdot k.


Examples

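The original figures are not reproduced here. As a worked application of the definition, consider f(n) = 3n^2 + 5n and g(n) = n^2 (a hypothetical choice for illustration):

3n^2 + 5n \leq 3n^2 + 5n^2 = 8n^2 \quad \text{for all } n \geq 1,

so with k = 8 and n_0 = 1 we have |f(n)| \leq |g(n)| \cdot k, i.e. 3n^2 + 5n = O(n^2). The usual classes, from slowest-growing to fastest-growing, are O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n).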

Worst, best and average

For most algorithms we can compute the complexity in the worst case (the greatest number of elementary operations), in the best case and on average. The average is a sum of the possible complexities weighted by their probabilities of occurring. Most often the worst-case complexity is used, because we want an upper bound on the execution time.

Consider an iterative algorithm that searches for an element in an array. The algorithm stops as soon as the value has been found. It breaks down as follows: assign 0 to i; as long as i has not traversed the whole array, if tab[i] = value then return true, otherwise increment i; at the end of the array, return false. Let C denote the cost of one pass through the loop.

In the worst case the algorithm traverses the entire array, i.e. T(n) = 1 + n*C = O(n). In the best case the element is at the beginning of the array, so T(n) = 1 + C = O(1). Suppose that on average there is a 50% chance that the element is not in the array and a 50% chance that it is in the middle (this is of course simplistic, but we will not enumerate every possibility). The average complexity is then T(n) = 0.5*(1 + n*C) + 0.5*(1 + n*C/2) = a*n + b, with a and b constants, which is O(n). We notice that the average asymptotic behavior is the same as the worst case.
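Here is a minimal Python sketch of the search described above (the function name contains is hypothetical), annotated with the cases just discussed:

# Sketch: iterative search that stops as soon as the value is found.
# Best case: value at tab[0], T(n) = 1 + C = O(1).
# Worst case: value absent, T(n) = 1 + n*C = O(n).
def contains(tab, value):
    i = 0                        # assignment: 1 operation
    while i < len(tab):          # each pass costs C
        if tab[i] == value:
            return True          # found: stop early
        i += 1
    return False                 # end of the array: not found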

 
