When the optimal basic solution of a linear problem is analyzed to answer questions about changes in its formulation, the study is called sensitivity analysis.
Post-optimization is the set of techniques that make it possible to obtain the optimum of the LP problem when some data have been modified.
We consider the general linear programming problem in its standard form:

max z = c1*x1 + c2*x2 + ... + cn*xn
subject to A x <= b, x >= 0.
This study can be motivated by several reasons:
- the problem data are not known exactly, in which case it is important to determine how the data affect the proposed solution;
- we want to evaluate the consequences of a change of policy that would modify the data of the problem.
This course will not give a formal mathematical treatment. Marginal values, reduced costs and sensitivity analyses are provided by many software packages when solving with the simplex method. We will only give examples to illustrate their usefulness.
If the problem is to transform goods in order to sell a production at a better profit, the maximum increase in income that results from having an additional unit of one of the goods is the marginal value of that good. Very often, the term marginal cost is also used here.
If a slack variable is non-zero in the optimal solution, the corresponding good is already in surplus. Having an additional unit of this good therefore has no influence on the income: the slack variable has a zero marginal value.

On the other hand, if a slack variable is zero in the optimal solution, the corresponding good is fully used, and a change in its availability will generally influence the income. That is why this slack variable (equal to zero in the optimal solution) has a non-zero marginal value, and this marginal value gives the variation of the objective function resulting from the use of an additional unit of the associated good.
The optimal solution vector is x* = (2,6). The marginal values can be read off the Z-row of the final simplex tableau; at an optimal solution of a maximization problem these values are non-negative.
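As an illustration, the example can be solved and its marginal values recovered with an LP solver. The constraint set below is an assumption (x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18), chosen only to be consistent with the values quoted in the text (b1 = 4, b3 = 18, x* = (2,6)):

```python
# Hypothetical constraint data, assumed consistent with the text's example:
#   max z = 3*x1 + 5*x2
#   s.t. x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18, x1, x2 >= 0
from scipy.optimize import linprog

c = [-3, -5]                      # linprog minimizes, so negate the profits
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

x_star = res.x                    # optimal solution vector
z_star = -res.fun                 # optimal value of the max problem
# Marginal value of each good (sign flipped back to the max convention):
marginals = [-m for m in res.ineqlin.marginals]
print(x_star, z_star, marginals)
```

With these data the first constraint is slack at the optimum, so its marginal value is zero; the two saturated constraints carry the non-zero marginal values.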
One can measure the sensitivity of the optimal solution to a change in a right-hand-side term or in a coefficient of the objective.
Study 1: Variation in the objective function
The variation of a coefficient of the objective function within a certain interval does not modify the optimal solution. Outside this interval, we obtain a new solution, which in turn remains optimal over another interval. We can thus exhibit a finite number of variation intervals for cj, with an invariant solution on each of them.
Let’s change the objective function to max z’ = 4*x1 + 5*x2. The value of the objective function changes by x*1 = 2, while the solution vector is not modified, as a graphical resolution shows.
Similarly, if c1 goes from 3 to 2, only the value of the objective function is modified. To compute the interval over which the coefficient c1 keeps x* optimal, we rotate the objective line until it becomes parallel to one of the binding constraints.
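These two changes of c1 can be checked numerically. As before, the constraints x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18 are a hypothetical reconstruction, assumed only for consistency with the values quoted in the text:

```python
# Re-solve with c1 = 3, 4 and 2: the solution vector stays at (2, 6),
# only the objective value moves (hypothetical constraints, see lead-in).
from scipy.optimize import linprog

A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
results = {}
for c1 in (3, 4, 2):
    res = linprog([-c1, -5], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    results[c1] = (res.x, -res.fun)   # same x*, objective 36, 38, 34
    print(c1, res.x, -res.fun)
```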
When the linear problem has a large dimension, the admissible variation of a price can be obtained from the simplex resolution by adding a parameter delta to the coefficient in the objective function:
The solution remains optimal as long as the entries of the Z-row remain negative, that is, as long as:
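The same interval can be located numerically by sweeping c1 and watching where the optimal vertex changes. The constraints below are the hypothetical reconstruction assumed earlier (x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18), under which the critical value works out to c1 = 7.5:

```python
# Sweep c1 and record whether x* = (2, 6) is still optimal
# (hypothetical constraint data, see lead-in).
from scipy.optimize import linprog

A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
unchanged = []
for c1 in (1, 3, 5, 7, 8, 10):
    res = linprog([-c1, -5], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    unchanged.append(bool(max(abs(res.x[0] - 2), abs(res.x[1] - 6)) < 1e-7))
print(unchanged)   # True while c1 is below the critical value, False beyond
```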
Study 2: Variation of the right-hand side
When the right-hand side of a constraint varies within a certain interval, if this constraint was not saturated, then neither the solution nor the optimal value of the objective function changes. This is natural: since the optimal solution does not satisfy this constraint with equality, its right-hand side can be varied (a little) without « touching » the optimal solution.
On the other hand, if the constraint was satisfied with equality at the optimum, there is a variation interval for the right-hand side such that:
- The solution changes, but the zero variables remain zero and the non-zero variables remain non-zero: the structure of the solution does not change.
- The variation di of right-hand side i causes a variation of the optimal value of the objective function equal to ui*di, where ui is the dual cost of constraint i; the variation is therefore proportional to di.
If we leave this interval, we obtain a new dual cost. It is thus possible to exhibit a finite number of variation intervals for the right-hand side, with a value of the dual cost on each of them. On these intervals the sensitivity analysis does not give the optimal solution itself, since the numerical values of the variables depend on the exact value of the right-hand side.
Consider in the example that b1 = 4 becomes b’1 = 5. A graphical resolution of the new problem shows that the optimal solution does not change, since this constraint is not saturated at x*.
What about a decrease of b1? As explained for marginal costs, decreasing a right-hand side by 1 decreases the value of the objective function by an amount equal to the marginal cost. Since the marginal cost of this constraint is zero, a decrease of 1 will not result in any change.
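Both directions can be checked at once by re-solving for several values of b1, again under the hypothetical constraints x1 <= b1, 2*x2 <= 12, 3*x1 + 2*x2 <= 18 assumed for consistency with the text:

```python
# Vary b1 around 4: the objective value does not move, because the
# first constraint has a zero marginal value (hypothetical data, see lead-in).
from scipy.optimize import linprog

c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
values = {}
for b1 in (3, 4, 5):
    res = linprog(c, A_ub=A_ub, b_ub=[b1, 12, 18],
                  bounds=[(0, None)] * 2, method="highs")
    values[b1] = -res.fun
print(values)   # objective value stays the same for all three cases
```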
To know how far a stock can evolve without changing the structure of the optimal solution, it is necessary to add a parameter delta to the right-hand side under study, as the following example shows:
The basis remains optimal as long as the simplex tableau is not degenerate, i.e. as long as the updated right-hand side (the values of the basic variables) remains non-negative:
Consider now an increase of the third right-hand side to b’3 = 19. Since its marginal cost is non-zero, the optimal solution is modified, as shown by the graphical resolution.
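Under the hypothetical constraints assumed throughout (x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= b3, reconstructed only for consistency with the text), the effect of b3 = 19 can be verified directly:

```python
# Increase b3 from 18 to 19: the solution moves and the objective value
# rises by exactly the dual cost of the constraint (hypothetical data).
from scipy.optimize import linprog

c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
res = linprog(c, A_ub=A_ub, b_ub=[4, 12, 19],
              bounds=[(0, None)] * 2, method="highs")
print(res.x, -res.fun)   # new optimum; objective increased by 1
```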
Study 3: Variation of non-basic variables
Let’s go back to the previous example with a new constraint and a new variable:
We have the following reduced cost: d3 = -2. This means that producing one unit of x3 would decrease the value of the objective function by 2 (since x3 is out of the basis, x*3 = 0).
Study 4: Variation of production
Indeed, by adding a parameter delta to the cost of the basic variable of the constraint concerned, it is then enough to substitute the optimal vector and solve the resulting equation, as in the following example: