# Combinatorial optimization

## Non-exact methods

• Stochastic algorithms
• Evolutionary algorithms
• Physical algorithms
• Probabilistic algorithms
• Swarm algorithms
• Immune algorithms
• Neural algorithms

## Introduction

Combinatorial optimization, also called discrete optimization, is a branch of optimization in applied mathematics and computer science, related to operations research, algorithmics and complexity theory. Combinatorial optimization consists of finding, within a finite set, a subset containing the "best solutions".

Finding an optimal solution in a discrete and finite set is an easy problem in theory: simply try all the solutions and compare their qualities to find the best one. However, the combinatorial explosion of the number of possible solutions in some problems makes it impossible to obtain a solution in "human" time.

Some combinatorial optimization problems can be solved exactly in polynomial time, for example by a dynamic programming algorithm or by showing that the problem can be formulated as a linear optimization problem in real variables. In most cases, however, the problem is NP-hard, and solving it requires appropriate algorithms.
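As a sketch of such an exact method, the classic dynamic programming algorithm for the 0/1 knapsack problem runs in pseudo-polynomial time; the item values and weights below are illustrative, not taken from the text.

```python
def knapsack_dp(values, weights, capacity):
    """Return the maximum total value achievable within the capacity (exact)."""
    # best[w] = best value achievable with total weight at most w
    best = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Traverse capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            best[w] = max(best[w], best[w - wt] + v)
    return best[capacity]

print(knapsack_dp([60, 100, 120], [10, 20, 30], 50))  # 220
```

The guarantee of optimality comes from the recurrence covering every take/leave decision, at the cost of a running time proportional to the capacity.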

In practice, the physically acceptable complexity is often only polynomial, so we content ourselves with an approximate solution obtained by a heuristic or a metaheuristic. It is important to note that some of these methods do find global optima; the difference from exact methods is that there is no formal proof (mathematical proof or termination argument) of overall optimality.

## Tree of solutions

Let E be the set of the problem's solutions, assumed discrete and finite. The enumeration of the solutions is represented as a tree: the elements of E are separated into n non-empty disjoint subsets, each containing part of the set of solutions. For example, for the knapsack problem, the set can be split in two at each step: either element x_k is taken into the bag, or it is not.
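This splitting can be sketched as a recursive enumeration: each level of the tree branches on one take/leave decision, and each leaf is one complete solution. The function name and structure are illustrative.

```python
def enumerate_solutions(n, partial=()):
    """Yield every 0/1 decision vector over n items (the leaves of the tree)."""
    if len(partial) == n:
        # Every decision has been made: this subset contains one element.
        yield partial
        return
    # Split the current subset in two: take item x_k, or leave it.
    yield from enumerate_solutions(n, partial + (1,))
    yield from enumerate_solutions(n, partial + (0,))

leaves = list(enumerate_solutions(3))
print(len(leaves))  # 8 leaves: 2^3 complete solutions
```

The exponential number of leaves is exactly the combinatorial explosion mentioned above.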

The operation can be repeated for each subset until each set contains only one element. For the knapsack example, each subset is separated until the decision on the last element has been made.

# How to find a solution

## Exact method

These methods guarantee that, for an instance of finite size, the optimal solution is found in bounded time and its optimality is proven.

## Heuristic method

When the complexity of a problem is exponential or subject to combinatorial explosion, the use of heuristics is recommended. A heuristic is a method that quickly provides a "good" solution to the problem. This approximate solution can then serve as a starting point for an exact method (such as the northwest corner method for the transportation problem). All greedy and naive algorithms are heuristics.
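As a sketch of such a greedy heuristic, consider the knapsack problem again: sort items by value density and take each one that still fits. It is fast but carries no optimality guarantee; the data below are illustrative.

```python
def knapsack_greedy(values, weights, capacity):
    """Greedy heuristic: take items in decreasing value-per-weight order."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_value, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:
            total_value += values[i]
            remaining -= weights[i]
    return total_value

print(knapsack_greedy([60, 100, 120], [10, 20, 30], 50))  # 160
```

On this instance the greedy result (160) is below the true optimum (220), illustrating why a heuristic's output is only an approximate solution.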

Note that a heuristic is constructed to solve a given problem and cannot be reused in other settings, unlike metaheuristics. A heuristic is assessed according to three criteria:

1. Quality of the result: the heuristic is compared against optimal results on a set of instances of the given problem (a benchmark). The quality of the solution is either a distance to the optimal solution or a probability of reaching it.
2. Cost of the heuristic: complexity in time and space.
3. Scope: the field of eligibility of all variables.
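Criterion 1 can be sketched as a relative gap to a known optimum on a benchmark instance; the values used here are illustrative.

```python
def relative_gap(heuristic_value, optimal_value):
    """Relative distance of a (maximization) heuristic's result to the optimum."""
    return (optimal_value - heuristic_value) / optimal_value

gap = relative_gap(160, 220)
print(round(gap, 2))  # 0.27: the heuristic lands about 27% below the optimum
```

Averaging this gap over a whole benchmark gives the "distance to the optimal solution" used to rank heuristics.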

Heuristics for a given problem are numerous, so it is important to choose one that is fast and provides "good" results.

## Metaheuristic method

Metaheuristics have a higher level of abstraction than heuristics, because a metaheuristic can be adapted to any given problem. The same method can thus be applied to various problems (as a black box) without modifying how it operates.

There are two kinds of metaheuristics: population-based (a) or path-based (b). Most algorithms are not tied to a specific population size, so the same algorithm can be used on a single path or on a population. Both kinds follow the same four-step process:

1. Neighborhood: the neighborhood of a solution is a subset of solutions achievable by a transformation of the initial solution (by permutation, by extension, by mutation, by ejection, etc.).
2. Exploration: the exploration consists of collecting data on the entire neighborhood.
3. Exploitation: the exploitation uses the harvested information to define the « interesting » zones of the research space formed by the neighborhood.
4. Memory: the memory takes learning into account and makes it possible to identify the zones likely to contain a global optimum. If new solutions or the stopping criteria no longer improve the solution, the algorithm stops; otherwise, go back to step 1. Some algorithms work only with stopping criteria; these are called memoryless metaheuristics.
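The four steps above can be sketched as a minimal memoryless path metaheuristic (a simple local search) on the knapsack problem: a bit-flip neighborhood, exhaustive exploration of that neighborhood, exploitation by moving to its best member, and a no-improvement stopping criterion in place of memory. All names and data are illustrative.

```python
def neighborhood(solution):
    """Step 1: solutions reachable by flipping one take/leave decision."""
    for k in range(len(solution)):
        neighbor = list(solution)
        neighbor[k] = 1 - neighbor[k]
        yield tuple(neighbor)

def value(solution, values, weights, capacity):
    """Objective: total value, with infeasible (overweight) solutions rejected."""
    w = sum(wt for wt, x in zip(weights, solution) if x)
    if w > capacity:
        return -1
    return sum(v for v, x in zip(values, solution) if x)

def local_search(values, weights, capacity):
    current = (0,) * len(values)  # start from the empty bag
    while True:
        # Steps 2-3: explore the whole neighborhood, exploit its best zone.
        best = max(neighborhood(current),
                   key=lambda s: value(s, values, weights, capacity))
        if value(best, values, weights, capacity) <= \
           value(current, values, weights, capacity):
            return current        # step 4 replaced by a stopping criterion
        current = best

print(local_search([60, 100, 120], [10, 20, 30], 50))  # (0, 1, 1)
```

Because there is no memory, the search can stall at a local optimum; methods such as tabu search add memory precisely to escape the zones already visited.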