See the lecture on decision making for more information.
Algorithms have always existed: an algorithm is a thought process described in a logical way. Although this approach seems indispensable to many sciences today, the systematization of algorithmic writing has evolved with time and technology. It began in the era of the Babylonians and was concretely established by René Descartes in the general method proposed in the Discours de la méthode in 1637.
Modern algorithmics, including a language called a « program », made its debut soon after, in 1677. The term we use today, « algorithm », appeared two centuries later. Twentieth- and twenty-first-century algorithms are often based on formalisms such as Turing machines, which provide a scientific framework for studying the properties of algorithms.
With the rise of computer science, algorithmics developed considerably in the second half of the twentieth century, and Donald Knuth, author of the treatise The Art of Computer Programming, laid a rigorous mathematical foundation for their analysis.
This flow of instructions is also known as the execution flow of a program and can be represented by a control flow graph or an automaton. Programming paradigms that use multiple processors face the additional problem of assigning tasks to the processors.
Every programming language provides control structures and data structures. Algorithms are very often written using widely used languages such as C, Java, etc.
Correctness, completeness and termination
Termination is the assurance that the algorithm finishes in finite time. Termination proofs usually exhibit a positive integer quantity that strictly decreases at each step or iteration of the algorithm.
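As an illustration (a minimal sketch, not from the original text), Euclid's algorithm for the greatest common divisor terminates because the second argument is a non-negative integer that strictly decreases at each iteration:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm for the greatest common divisor.

    Termination argument: b is a non-negative integer, and since
    a % b < b, the value of b strictly decreases at each iteration,
    so the loop must stop after finitely many steps.
    """
    while b > 0:
        a, b = b, a % b  # the measure b strictly decreases here
    return a
```

The decreasing quantity (here, `b`) is exactly the kind of measure a termination proof exhibits.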
Given the guarantee that an algorithm terminates, the proof of correctness must provide the assurance that if the algorithm terminates with a solution, then this solution is correct (in the sense that it answers the problem).
The proof of completeness guarantees that the algorithm gives a solution for every input, over a given problem space.
complexity of an algorithm: the « natural » performance of an algorithm, independent of the environment in which it will be implemented.
Complexity classes are sets of problems that all have the same complexity according to a certain criterion. The best-known classes are those defined on Turing machines (deterministic or non-deterministic; see the course on automata) with time and space constraints. There is a relationship between the complexity of an algorithm and the energy cost of running it, known as Landauer's principle.
Here is a list of the most frequently encountered complexities:
| Complexity | Typical case |
|---|---|
| constant, O(1) | an assignment, a basic calculation, etc. |
| logarithmic, O(log n) | dichotomy (binary search), balanced trees |
| linear, O(n) | a single loop |
| quasi-linear, O(n log n) | divide and conquer, tree sorting |
| quadratic, O(n²) | two nested loops |
| cubic, O(n³) | three nested loops |
| polynomial, O(n^k) | several nested loops |
| quasi-polynomial | divide and conquer |
| exponential, O(2^n) | trees, brute force |
| factorial, O(n!) | naive approach to combinatorial enumeration |