The problem of evaluating heuristics is crucial. Indeed, heuristics offer no guarantee of optimality: on some data they may find the optimum, on other data they may land far from it.

## Relative performance

The relative performance of a heuristic H on data d compares its result H(d) to the optimum OPT(d):

R_{H}(d) = H(d) / OPT(d).

For a minimization problem, R_{H}(d) ≥ 1. We can also express the distance to the optimum as a percentage, using the formula 100 × (R_{H}(d) − 1). Relative performance can be bounded a priori (before any tests) or measured a posteriori (after the tests).
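The two formulas above can be sketched directly; the helper names below are illustrative, not from the source:

```python
def relative_performance(h_value: float, opt_value: float) -> float:
    """R_H(d) = H(d) / OPT(d) for a minimization problem (always >= 1)."""
    return h_value / opt_value


def distance_to_optimum_pct(h_value: float, opt_value: float) -> float:
    """100 * (R_H(d) - 1): how far, in percent, H lands above the optimum."""
    return 100 * (relative_performance(h_value, opt_value) - 1)


# Example: the heuristic returns a cost of 115 where the optimum is 100,
# i.e. a relative performance of 1.15, about 15% above the optimum.
r = relative_performance(115, 100)
pct = distance_to_optimum_pct(115, 100)
```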

## A priori

P_{H} = max{ R_{H}(d), for all valid d }

This is a performance guarantee obtained by theoretical analysis of H. One first bounds R_{H}(d) from above for every possible data d, then constructs an instance showing that this worst case can actually be reached. Such a guarantee is very difficult to obtain, and is often impossible to establish, especially for heuristics whose choices have no exploitable mathematical structure.

Let’s take the example of a minimization heuristic with P_{H} = 1.5. This means that, whatever the input data, the heuristic will never return a result more than 50% above the global optimum. “Never”, because it has been proved mathematically.
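A guarantee like P_{H} = 1.5 can only be established by proof, but tests can at least refute a claimed ratio. A minimal sketch, where `heuristic` and `optimum` are hypothetical callables standing in for H and OPT:

```python
def check_guarantee(heuristic, optimum, instances, ratio=1.5):
    """Empirically check H(d) <= ratio * OPT(d) on a set of instances.

    An a priori guarantee means this holds for *every* valid d, which
    only a mathematical proof can establish; tests can only refute it
    (one counterexample disproves the claimed ratio).
    """
    return all(heuristic(d) <= ratio * optimum(d) for d in instances)
```

For example, a heuristic that always returns 1.4 times the optimum passes the check for ratio 1.5 but fails it for ratio 1.3.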

## A posteriori

When an a priori evaluation is not possible, the results can still be evaluated after executing the heuristic H. We have seen relative performance, but that method applies only if the global optimum is known.

## Statistical evaluation

A priori and a posteriori evaluations still leave a problem: what about instances of very large size, where the optimum is out of reach? The simplest evaluation is then to compare the solutions with those reported in benchmarks or produced by widely used methods. The value of the solution, over a large set of tests, compared with known methods, gives a good approximation of the heuristic’s efficiency.
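Such a statistical evaluation can be sketched as follows, comparing the heuristic’s values against best-known benchmark values over a test set (the function name and the summary statistics chosen here are assumptions for illustration):

```python
def benchmark_summary(h_values, ref_values):
    """Compare heuristic results to benchmark / best-known values
    over a test set, reporting the mean and worst observed ratio.

    A mean ratio close to 1 over many instances suggests the heuristic
    is competitive with the reference methods; the worst ratio flags
    how badly it can degrade on unfavorable instances.
    """
    ratios = [h / ref for h, ref in zip(h_values, ref_values)]
    return {
        "mean_ratio": sum(ratios) / len(ratios),
        "worst_ratio": max(ratios),
    }


# Heuristic costs vs. best-known benchmark costs on three instances.
summary = benchmark_summary([105, 98, 120], [100, 95, 110])
```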