Dynamic programming is used to solve many problems in operations research, so we will not attempt to list every tutorial devoted to the method.
Introduction
Dynamic programming is an exact method for solving optimization problems, due mainly to R. Bellman (1957). Specifically, dynamic programming is an algorithm design paradigm that can be applied to a problem whenever the problem satisfies Bellman's principle of optimality.
Dynamic programming is similar to the divide and conquer method. The significant difference between the two methods is that dynamic programming allows subproblems to overlap: the same subproblem can appear in the solution of two different larger subproblems. The divide and conquer approach, by contrast, creates disjoint subproblems that can be solved independently of one another.
A second difference between the two methods is that divide and conquer works top-down: the recursion breaks the global problem down into elementary problems. Dynamic programming is a method whose calculations are done from the bottom up: we start by solving the smallest subproblems, and by combining their solutions we obtain the solutions of larger and larger subproblems.
Principle
The paradigm has four steps:
 Characterization of the structure of an optimal solution.
 Recursive definition of the value of the optimal solution.
 Bottom-up calculation.
 Construction of the solution from the ascending calculation.
A table is constructed to memorize the calculations already made: each element of the table corresponds to the solution of one and only one intermediate problem, and one element corresponds to the final solution. We have to determine which subproblems are addressed during the calculation.
There are two approaches to completing the table:
 Iterative: one initializes the « elements » corresponding to the base cases. We then fill the table according to a very specific order, to be determined: we start with the problems of smallest « size » and end with the solution of the main problem; at each calculation, we must use only solutions that have already been calculated.
 Recursive: the same principle as the iterative approach, except that this method calculates only what is strictly necessary to achieve the given objective.
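As an illustration (not from the original text), the two filling strategies can be sketched in Python on the Fibonacci recurrence, a classic case of overlapping subproblems:

```python
from functools import lru_cache

def fib_iterative(n):
    # Bottom-up: fill the whole table from the smallest subproblems upward.
    table = [0, 1] + [None] * (n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

@lru_cache(maxsize=None)
def fib_recursive(n):
    # Top-down with memoization: compute only the entries actually needed.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)
```

Both compute the same values; the recursive version simply visits table entries on demand instead of in a fixed order.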
Example: matrix product
Let M_{1}, …, M_{n} be n matrices, where matrix M_{i} has m_{i} rows and m_{i+1} columns; the entries are real numbers. We want to compute M_{1}*…*M_{n} with a minimal number of scalar multiplications.
Let c_{ij} be the cost (number of scalar multiplications) of computing M_{i}*…*M_{j}. Then we have c_{ii} = 0 and c_{i(i+1)} = m_{i}*m_{(i+1)}*m_{(i+2)}. Let's split the problem by first computing M_{i}*…*M_{k} and M_{(k+1)}*…*M_{j}. Then: c_{ij} = min [c_{ik} + c_{(k+1)j} + m_{i}*m_{(k+1)}*m_{(j+1)}] with k from i to j-1, where the last term is the cost of multiplying the result of the product from i to k by that of the product from k+1 to j.
We have the following program:
 c_{ij} = 0 if i = j;
 c_{i(i+1)} = m_{i}*m_{(i+1)}*m_{(i+2)};
 c_{ij} = min [c_{ik} + c_{(k+1)j} + m_{i}*m_{(k+1)}*m_{(j+1)}], with k from i to j-1, otherwise.
The dynamic programming table stores, for each subproduct, the number of operations carried out for the matrices chosen. Take the following example of matrix dimensions:
| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| m_{i} | 30 | 35 | 15 | 5 | 10 | 20 | 25 |
The initial table is as follows:
| c_{ij} | 1 | 2 | 3 | 4 | 5 | 6 |
|--------|---|---|---|---|---|---|
| 1 | 0 | – | – | – | – | – |
| 2 | – | 0 | – | – | – | – |
| 3 | – | – | 0 | – | – | – |
| 4 | – | – | – | 0 | – | – |
| 5 | – | – | – | – | 0 | – |
| 6 | – | – | – | – | – | 0 |
Then we can calculate c_{ij} for products of two matrices, directly from the base case, without invoking the recurrence:
| c_{ij} | 1 | 2 | 3 | 4 | 5 | 6 |
|--------|---|---|---|---|---|---|
| 1 | 0 | 15750 | – | – | – | – |
| 2 | – | 0 | 2625 | – | – | – |
| 3 | – | – | 0 | 750 | – | – |
| 4 | – | – | – | 0 | 1000 | – |
| 5 | – | – | – | – | 0 | 5000 |
| 6 | – | – | – | – | – | 0 |
The rest of the table is filled diagonal by diagonal, following the recurrence described above:
| c_{ij} | 1 | 2 | 3 | 4 | 5 | 6 |
|--------|---|---|---|---|---|---|
| 1 | 0 | 15750 | 7875 | 9375 | 11875 | 15125 |
| 2 | – | 0 | 2625 | 4375 | 7125 | 10500 |
| 3 | – | – | 0 | 750 | 2500 | 5375 |
| 4 | – | – | – | 0 | 1000 | 3500 |
| 5 | – | – | – | – | 0 | 5000 |
| 6 | – | – | – | – | – | 0 |
The result is obtained for i = 1 and j = 6: it is the element at the top right of the table. The minimum cost is 15125 operations. A new question arises: how do we recover the order of multiplications that achieves this minimum?
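The bottom-up filling of the table can be sketched in Python (a sketch, not from the original text; the function name is illustrative). The list m holds the dimensions m_{1}, …, m_{7} from the table, so matrix M_{i} is m[i-1] × m[i]:

```python
def matrix_chain_cost(m):
    # m has n+1 entries for n matrices; M_i has dimensions m[i-1] x m[i].
    n = len(m) - 1
    c = [[0] * (n + 1) for _ in range(n + 1)]  # c[i][j], 1-indexed
    for length in range(2, n + 1):             # chain length, smallest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Recurrence: try every split point k between i and j-1.
            c[i][j] = min(c[i][k] + c[k + 1][j] + m[i - 1] * m[k] * m[j]
                          for k in range(i, j))
    return c[1][n]
```

On the dimensions of the example, matrix_chain_cost([30, 35, 15, 5, 10, 20, 25]) reproduces the minimum cost 15125.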
When we calculate the minimum cost for each cell, we choose among several split points k. For example, to calculate c_{13} we take the minimum between the cost of (M_{1}*M_{2})*M_{3} (split k = 2) and that of M_{1}*(M_{2}*M_{3}) (split k = 1). We record that choice in a second table K in order to know the order of calculation of the matrix product.
| K_{ij} | 1 | 2 | 3 | 4 | 5 | 6 |
|--------|---|---|---|---|---|---|
| 1 | – | – | 1 | 3 | 3 | 3 |
| 2 | – | – | – | 3 | 3 | 3 |
| 3 | – | – | – | – | 3 | 3 |
| 4 | – | – | – | – | – | 5 |
| 5 | – | – | – | – | – | – |
| 6 | – | – | – | – | – | – |
We read the table as follows: to calculate M_{i}*…*M_{j}, we set k = K_{ij} as given by the table and then calculate M_{i}*…*M_{k} and M_{(k+1)}*…*M_{j}.
In our example, to calculate M_{1}*…*M_{6} we split at k = K_{16} = 3, computing (M_{1}*M_{2}*M_{3})*(M_{4}*M_{5}*M_{6}); to calculate M_{1}*M_{2}*M_{3} we split at k = K_{13} = 1, computing M_{1}*(M_{2}*M_{3}); and to calculate M_{4}*M_{5}*M_{6} we split at k = K_{46} = 5, computing (M_{4}*M_{5})*M_{6}.
Most algorithms based on dynamic programming remember the choice made at each calculation. Often the final value is not what matters most; the path used to reach it is.
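A sketch of how the table K can be built alongside the costs and then read back to produce the optimal parenthesization (function names are illustrative, not from the text):

```python
def matrix_chain_order(m):
    # Returns (cost, K) where K[i][j] is the best split point for M_i..M_j.
    n = len(m) - 1
    c = [[0] * (n + 1) for _ in range(n + 1)]
    K = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            # Keep both the minimum cost and the k that achieves it.
            c[i][j], K[i][j] = min(
                (c[i][k] + c[k + 1][j] + m[i - 1] * m[k] * m[j], k)
                for k in range(i, j))
    return c[1][n], K

def parenthesize(K, i, j):
    # Rebuild the optimal multiplication order by reading the splits back.
    if i == j:
        return f"M{i}"
    k = K[i][j]
    return f"({parenthesize(K, i, k)}*{parenthesize(K, k + 1, j)})"
```

On the example dimensions this yields the parenthesization ((M1*(M2*M3))*((M4*M5)*M6)), matching the splits read from the table above.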
Example: the coin changing problem
Suppose we need to make change for 67£ using the fewest coins possible among the denominations 1, 5, 10, 25. It is easy to see the optimal solution: 67 = 2*25 + 10 + 5 + 2*1. By repeatedly choosing the largest coin less than or equal to the remaining sum, the desired sum is obtained by a greedy algorithm.
The formal description of the coin changing problem is as follows: let D = {d_{1}, …, d_{k}} be a finite set of distinct coin denominations. We assume each d_{i} is an integer and that the set is given in increasing order, with d_{1} = 1 so that a solution always exists. Each denomination is available in unlimited quantity. The problem is to make change for n£ using a minimum total number of coins.
The greedy method does not always give an optimal solution. For example, take D = {1, 10, 25} and n = 30. The greedy method produces 25 + 5*1, six coins, which is not as good as 3*10, three coins.
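A minimal sketch of the greedy method, useful to reproduce this counterexample (not from the original text):

```python
def greedy_change(n, denominations):
    # Repeatedly take the largest coin that still fits; not always optimal.
    coins = []
    for d in sorted(denominations, reverse=True):
        while n >= d:
            n -= d
            coins.append(d)
    return coins
```

With D = {1, 10, 25} and n = 30 it returns [25, 1, 1, 1, 1, 1], six coins, while the optimum uses three coins of 10.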
Step 1: Characterize the substructure. Define C[j] to be the minimum number of coins we need to make change for j£. If we knew that an optimal solution for the problem of making change for j£ used a coin of denomination d_{i}, we would have C[j] = 1 + C[j-d_{i}].
Step 2: Define the value of an optimal solution. We can recursively define the value of an optimal solution from the equation found in step 1: C[0] = 0 and, for j >= 1, C[j] = min over i with d_{i} <= j of (1 + C[j-d_{i}]).
Step 3: Algorithm.
CoinChanging(n, d, k)
  C[0] = 0
  For j from 1 to n
    C[j] = inf
    For i from 1 to k
      If j >= d_{i} and 1 + C[j-d_{i}] < C[j] then
        C[j] = 1 + C[j-d_{i}]
        Denom[j] = d_{i}
  Return C
We use an additional array Denom, where Denom[j] is the denomination of a coin used in an optimal solution to the problem of making change for j£. With the denominations 1, 5, 10, 25, the first entries are:
| j | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|
| C[j] | 0 | 1 | 2 | 3 | 4 | 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 | 2 |
| Denom[j] | 0 | 1 | 1 | 1 | 1 | 5 | 1 | 1 | 1 | 1 | 10 | 1 | 1 | 1 | 1 | 5 |
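The pseudocode above can be written in Python as follows (a sketch; the function name is illustrative):

```python
def coin_changing(n, denominations):
    # C[j] = fewest coins to make change for j; Denom[j] = a coin used
    # in one optimal solution for j.
    INF = float("inf")
    C = [0] + [INF] * n
    Denom = [0] * (n + 1)
    for j in range(1, n + 1):
        for d in denominations:
            if j >= d and 1 + C[j - d] < C[j]:
                C[j] = 1 + C[j - d]
                Denom[j] = d
    return C, Denom
```

To recover the actual coins for a value j, repeatedly take Denom[j] and subtract it from j until j reaches 0.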
A related problem: given some value N, count all the combinations of coins from D that make up the value. For example, for N = 4 and D = {1, 2, 3} there are four solutions: {1,1,1,1}, {1,1,2}, {2,2} and {1,3}. The table below uses the denominations 1, 5, 10, 25.
Step 1: The set of solutions for this problem, C(N, m) (combinations using only the first m denominations), can be partitioned into two sets:
 those sets that do not contain any d_{m};
 those sets that contain at least one d_{m}.
If a solution does not contain d_{m}, then it solves the subproblem for N with D = {d_{1}, …, d_{m-1}}; these are the solutions counted by C(N, m-1).
If a solution does contain d_{m}, then removing one d_{m} leaves a solution of the subproblem for N - d_{m} with D = {d_{1}, …, d_{m}}; these are counted by C(N - d_{m}, m).
Step 2: Thus, we can formulate the recurrence C(N, m) = C(N, m-1) + C(N - d_{m}, m), with the base cases:
 C(N,m)=1, N=0
 C(N,m)=0, N<0 (negative sum of money)
 C(N,m)=0, N>=1, m<=0 (no denominations available).
Step 3: Filling the table row by row, one row per denomination, gives the number of combinations C(n, m) for each value n:
| n/d | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|-----|---|---|---|---|---|---|---|---|---|----|----|----|----|----|----|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 5 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 4 |
| 10 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 4 | 6 |
| 25 | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 4 | 4 | 4 | 4 | 4 | 6 |
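The recurrence C(N, m) = C(N, m-1) + C(N - d_{m}, m) can be implemented with a one-dimensional table, processing one denomination at a time, which is exactly how each row of the table above is obtained from the previous one (a sketch, not from the original text):

```python
def count_ways(n, denominations):
    # ways[j] = number of coin multisets summing to j. Processing one
    # denomination at a time counts each combination exactly once.
    ways = [1] + [0] * n          # base case: one way to make 0
    for d in denominations:
        for j in range(d, n + 1):
            ways[j] += ways[j - d]  # ways that use at least one coin d
    return ways[n]
```

For N = 4 and D = {1, 2, 3} this returns 4, and for N = 15 with denominations {1, 5, 10, 25} it returns 6, matching the last column of the table.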