Dynamic programming is used to solve many problems in operations research, so we will not attempt to list every tutorial related to this method.
Introduction
Dynamic programming is an exact method for solving optimization problems, mainly due to R. Bellman (1957). Specifically, dynamic programming is an algorithm design paradigm that can be applied to a problem if it satisfies Bellman's principle of optimality (optimal substructure).
Dynamic programming is similar to the divide-and-conquer method. The significant difference between these two methods is that dynamic programming allows subproblems to overlap: the same subproblem may be used in the solution of two different larger subproblems. The divide-and-conquer approach, in contrast, creates separate subproblems that can be solved independently of one another.
A second difference between these two methods, as illustrated by the figure above, is that divide and conquer works top-down: the recursion takes the global problem and breaks it down into elementary problems. Dynamic programming is a method whose calculations proceed bottom-up: we start by solving the smallest subproblems, and by combining their solutions we obtain the solutions of larger and larger subproblems.
Principle
The paradigm has four steps:
- Characterization of the structure of an optimal solution.
- Recursive definition of the value of the optimal solution.
- Bottom-up calculation of that value.
- Construction of the solution from the bottom-up calculation.
A table is constructed to memorize the calculations already made: each entry corresponds to the solution of one and only one intermediate problem, with one entry for the final solution. We must therefore determine which subproblems will be addressed during the calculation.
There are two approaches to completing the table:
- Iterative: we initialize the entries corresponding to the base cases. We then fill the table in a very specific order, to be determined: we start with the problems of smallest « size » and end with the solution of the main problem. At each step, the calculation must use only solutions that have already been computed.
- Recursive: the same principle as the iterative approach, except that this method computes only what is strictly necessary to achieve the given objective (memoization).
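As a toy illustration of these two approaches (Fibonacci numbers; this example is ours and is not part of the matrix example that follows), the same table can be filled iteratively or on demand:

```python
# Iterative (bottom-up): fill the table in a fixed order,
# starting from the base cases F(0) = 0, F(1) = 1.
def fib_iterative(n):
    table = [0, 1] + [None] * (n - 1)
    for i in range(2, n + 1):
        # At each step we use only already-computed entries.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

# Recursive (top-down with memoization): same table, but an entry
# is computed only when it is actually needed.
def fib_recursive(n, table=None):
    if table is None:
        table = {0: 0, 1: 1}
    if n not in table:
        table[n] = fib_recursive(n - 1, table) + fib_recursive(n - 2, table)
    return table[n]
```

Both functions return the same values; they differ only in the order in which the table entries are produced.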
Example: matrix product
Let M_{1}, …, M_{n} be n matrices, where M_{i} has m_{i} rows and m_{i+1} columns and the entries are real numbers. We want to compute M_{1}*…*M_{n} with a minimal number of scalar multiplications.
Let c_{ij} be the number of scalar multiplications needed to compute M_{i}*…*M_{j}. Then we have c_{ii} = 0 and c_{i(i+1)} = m_{i}*m_{i+1}*m_{i+2}. Let us split the problem by computing M_{i}*…*M_{j} as the product of M_{i}*…*M_{k} and M_{(k+1)}*…*M_{j}, choosing the best split point. Then: c_{ij} = min [c_{ik} + c_{(k+1)j} + m_{i}*m_{(k+1)}*m_{(j+1)}] with k from i to j-1, where the last term is the cost of multiplying the result of the product from i to k by that of the product from k+1 to j.
We have the following program:
- c_{ij} = 0 if i = j;
- c_{i(i+1)} = m_{i}*m_{i+1}*m_{i+2};
- c_{ij} = min [c_{ik} + c_{(k+1)j} + m_{i}*m_{(k+1)}*m_{(j+1)}] over i ≤ k < j otherwise.
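The recurrence above can be sketched in Python (a minimal illustration of ours, not part of the source; `dims` holds the dimensions m_{1}, …, m_{n+1}, so M_{i} has `dims[i-1]` rows and `dims[i]` columns):

```python
from functools import lru_cache

def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to compute
    M_1 * ... * M_n, where M_i is dims[i-1] x dims[i]."""
    n = len(dims) - 1  # number of matrices

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): minimal cost of computing M_i * ... * M_j (1-based).
        if i == j:
            return 0
        # Try every split point k: (M_i .. M_k) * (M_{k+1} .. M_j);
        # the extra term is the cost of the final multiplication.
        return min(
            c(i, k) + c(k + 1, j) + dims[i - 1] * dims[k] * dims[j]
            for k in range(i, j)
        )

    return c(1, n)
```

For instance, `matrix_chain_cost([30, 35, 15, 5, 10, 20, 25])` returns 15125, the value found in the worked example that follows.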
The table of the dynamic program stores the number of operations required for each subchain of matrices. Take the following example:
i     |  1 |  2 |  3 |  4 |  5 |  6 |  7
m_{i} | 30 | 35 | 15 |  5 | 10 | 20 | 25
The initial table is as follows:
c_{ij} | 1 | 2 | 3 | 4 | 5 | 6
     1 | 0 | – | – | – | – | –
     2 | – | 0 | – | – | – | –
     3 | – | – | 0 | – | – | –
     4 | – | – | – | 0 | – | –
     5 | – | – | – | – | 0 | –
     6 | – | – | – | – | – | 0
Then we can calculate c_{ij} for products of two consecutive matrices (no split point needs to be chosen yet):
c_{ij} | 1 |     2 |    3 |   4 |    5 |    6
     1 | 0 | 15750 |    – |   – |    – |    –
     2 | – |     0 | 2625 |   – |    – |    –
     3 | – |     – |    0 | 750 |    – |    –
     4 | – |     – |    – |   0 | 1000 |    –
     5 | – |     – |    – |   – |    0 | 5000
     6 | – |     – |    – |   – |    – |    0
The rest of the table is diagonally filled following the rule described above:
c_{ij} | 1 |     2 |    3 |    4 |     5 |     6
     1 | 0 | 15750 | 7875 | 9375 | 11875 | 15125
     2 | – |     0 | 2625 | 4375 |  7125 | 10500
     3 | – |     – |    0 |  750 |  2500 |  5375
     4 | – |     – |    – |    0 |  1000 |  3500
     5 | – |     – |    – |    – |     0 |  5000
     6 | – |     – |    – |    – |     – |     0
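The diagonal-by-diagonal fill can be reproduced with a short Python sketch (our illustration; `dims` holds the dimensions m_{1}, …, m_{7} from the table above):

```python
def build_cost_table(dims):
    """Bottom-up fill of c[i][j] (1-based), diagonal by diagonal:
    first all chains of length 2, then length 3, and so on."""
    n = len(dims) - 1
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Each entry uses only already-filled shorter chains.
            c[i][j] = min(
                c[i][k] + c[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return c

c = build_cost_table([30, 35, 15, 5, 10, 20, 25])
# c[1][6] is the top-right entry of the table: the minimum cost.
```

The entries of `c` match the table: for example `c[1][2]` is 15750, `c[3][4]` is 750, and `c[1][6]` is 15125.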
The result is obtained for i = 1 and j = 6, which is the element at the top right of the table. The minimum cost is 15125 operations. A new question arises: how do we recover the order of multiplication that achieves this minimum number of calculations?
When we calculate the minimum cost for each cell, we choose among the possible split points k. For example, to calculate c_{13}, we take the minimum between the cost of (M_{1}*M_{2})*M_{3} (k = 2) and that of M_{1}*(M_{2}*M_{3}) (k = 1). We record that choice of k in order to know the order of calculation of the matrix product.
K_{ij} | 1 | 2 | 3 | 4 | 5 | 6
     1 | – | – | 1 | 3 | 3 | 3
     2 | – | – | – | 3 | 3 | 3
     3 | – | – | – | – | 3 | 3
     4 | – | – | – | – | – | 5
     5 | – | – | – | – | – | –
     6 | – | – | – | – | – | –
We read the table as follows: to calculate M_{i}*…*M_{j}, we set k = K_{ij} as given by the table, and then calculate M_{i}*…*M_{k} and M_{(k+1)}*…*M_{j}.
In our example, to calculate M_{1}*…*M_{6} we split at k = 3, i.e. (M_{1}*M_{2}*M_{3})*(M_{4}*M_{5}*M_{6}); to calculate M_{1}*M_{2}*M_{3} we split at k = 1, i.e. M_{1}*(M_{2}*M_{3}); and to calculate M_{4}*M_{5}*M_{6} we split at k = 5, i.e. (M_{4}*M_{5})*M_{6}.
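This reconstruction can be sketched in Python (our illustration; here the split table K is recomputed alongside the costs rather than read from the table above):

```python
def best_splits(dims):
    """Compute the cost table c and the split table K, where K[i][j]
    is the k achieving the minimum in the recurrence (1-based)."""
    n = len(dims) - 1
    c = [[0] * (n + 1) for _ in range(n + 1)]
    K = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            # min over (cost, k) pairs records the best split point.
            c[i][j], K[i][j] = min(
                (c[i][k] + c[k + 1][j] + dims[i - 1] * dims[k] * dims[j], k)
                for k in range(i, j)
            )
    return c, K

def parenthesize(K, i, j):
    """Render the optimal order of M_i * ... * M_j as a string."""
    if i == j:
        return f"M{i}"
    k = K[i][j]
    return f"({parenthesize(K, i, k)} {parenthesize(K, k + 1, j)})"

c, K = best_splits([30, 35, 15, 5, 10, 20, 25])
order = parenthesize(K, 1, 6)
```

With the dimensions of the example, `order` is `((M1 (M2 M3)) ((M4 M5) M6))`, matching the splits read from the K table.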
Most algorithms based on dynamic programming remember the choice made at each step of the calculation. Often the optimal value itself is not what matters most; the path taken to reach it is.