Heuristic evaluation

Evaluating heuristics is a crucial problem. Heuristics offer no guarantee of optimality: on some instances they may find the optimum, on others they may end up far from it.

Relative performance

Suppose we study a combinatorial problem for which we already have an exact (optimal) reference method. For a heuristic H and an instance d, let H(d) denote the cost of the heuristic solution and OPT(d) the optimal cost. The relative performance of H on d is the quotient: RH(d) = H(d) / OPT(d).

For a minimization problem, RH(d) ≥ 1. We can also express the distance to the optimum as a percentage with the formula: 100 × (RH(d) − 1). Relative performance can be bounded a priori (before the tests) or evaluated a posteriori (after the tests).
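As a minimal sketch (the function names below are illustrative, not from any standard library), these two helpers compute RH(d) and the percentage distance to the optimum for a minimization problem:

```python
def relative_performance(h_cost: float, opt_cost: float) -> float:
    """RH(d) = H(d) / OPT(d); always >= 1 for a minimization problem."""
    return h_cost / opt_cost

def distance_to_optimum_pct(h_cost: float, opt_cost: float) -> float:
    """Distance to the optimum in percent: 100 * (RH(d) - 1)."""
    return 100.0 * (relative_performance(h_cost, opt_cost) - 1.0)

# Example: a heuristic returning 130 on an instance whose optimum costs 100
print(relative_performance(130, 100))     # 1.3
print(distance_to_optimum_pct(130, 100))  # 30.0 -> 30% above the optimum
```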

A priori

The worst-case performance ratio PH of a heuristic H is its worst relative performance over all possible data: PH = max{RH(d) : d a valid instance}.

This is a performance guarantee obtained by theoretical analysis of H. One first proves an upper bound on RH(d) valid for every instance d, then constructs an instance showing that this worst case is effectively reached. Such a guarantee is difficult to establish, and is often impossible to derive, especially for heuristics whose choices have no clean mathematical characterization.

Let's take the example of a minimization heuristic with PH = 1.5. This means that whatever the input data, the heuristic will never return a result more than 50% above the global optimum; "never", because it has been proved mathematically.
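A classic illustration of such a guarantee (this example is ours, not from the text above) is Graham's list scheduling heuristic for minimizing the makespan on m identical machines, whose worst-case performance ratio is 2 − 1/m, i.e. exactly 1.5 for m = 2. The sketch below builds an instance where this worst case is actually reached:

```python
def list_scheduling(jobs, m):
    """Greedy heuristic: assign each job to the currently least-loaded
    machine; returns the makespan (the maximum machine load)."""
    loads = [0] * m
    for p in jobs:
        loads[loads.index(min(loads))] += p
    return max(loads)

# Tight instance for m = 2: two unit jobs followed by one job of length 2.
jobs = [1, 1, 2]
h = list_scheduling(jobs, 2)  # greedy makespan: 3
opt = 2                       # optimum: {2} on one machine, {1, 1} on the other
print(h / opt)                # 1.5 -- the worst case PH is reached
```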

A posteriori

When an a priori evaluation is not possible, we can still evaluate the results after running the heuristic H. We have seen relative performance, but that method only applies when the global optimum is known.

However, we can still obtain an evaluation if we have a lower bound B(d) on the optimum OPT(d), since then: RH(d) = H(d)/OPT(d) ≤ H(d)/B(d). In particular, if H(d) = B(d), then the global optimum has been reached. A bound on the global optimum can usually be found by a naive argument, a greedy method, or a relaxation of the problem.
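As a hedged sketch of this idea on the travelling salesman problem: the weight of a minimum spanning tree is a valid lower bound B(d) on the optimal tour OPT(d) (deleting one edge of an optimal tour leaves a spanning tree), so H(d)/B(d) bounds RH(d) from above even when OPT(d) is unknown:

```python
def mst_weight(dist):
    """Prim's algorithm on a complete graph given by a distance matrix;
    the MST weight is a lower bound B(d) on any tour."""
    n, in_tree, total = len(dist), {0}, 0
    while len(in_tree) < n:
        w, v = min((dist[i][j], j) for i in in_tree
                   for j in range(n) if j not in in_tree)
        total += w
        in_tree.add(v)
    return total

def nearest_neighbor_tour(dist):
    """Heuristic H: always visit the closest unvisited city."""
    n, tour = len(dist), [0]
    while len(tour) < n:
        tour.append(min((c for c in range(n) if c not in tour),
                        key=lambda c: dist[tour[-1]][c]))
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbor_tour(dist) / mst_weight(dist))  # 2.0, so RH(d) <= 2.0
```

Note that the certificate may be loose: on this small instance the heuristic tour happens to be optimal, yet the MST bound only certifies that it is within a factor 2.0 of the optimum.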

Statistical evaluation

A priori and a posteriori evaluations still leave a problem: what do we do when instances are very large? The simplest evaluation is then to compare the heuristic's solutions with those reported in benchmarks or produced by widely used methods. The value of the solutions, over a large set of tests, compared with known methods, gives a good approximation of the efficiency of the heuristic.
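A minimal sketch of such a statistical comparison, with purely illustrative numbers (the pairs below are hypothetical, not real benchmark data):

```python
from statistics import mean

# (heuristic cost, best known benchmark cost) for each test instance
results = [(412, 400), (1510, 1500), (987, 950), (233, 233), (678, 660)]

gaps = [100.0 * (h / best - 1.0) for h, best in results]
print(f"mean gap:  {mean(gaps):.2f}%")
print(f"worst gap: {max(gaps):.2f}%")
print(f"matched best known: {sum(g == 0 for g in gaps)}/{len(gaps)} instances")
```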
