Ant colony-based algorithms solve static problems and offer a high degree of flexibility and robustness in dynamic environments. The colony adapts to sudden changes in the environment and keeps functioning even when some individuals fail to perform their task.

The principle is simple. Ants are able to find the shortest way from the nest to a food source by depositing and tracking pheromone trails. Each ant deposits pheromone along its path, and ants tend to follow the trail with the highest pheromone concentration. Pheromones evaporate over time (though not in all versions of the algorithm). After a while, the ants therefore all converge on the shortest track between the nest and the food.

# Ant colony optimization

The algorithm was introduced by Marco Dorigo in 1991 and was initially designed for the traveling salesman problem (TSP). The problem to be solved is the search for a shortest path in a graph.

The structure of the algorithm is as follows:

- Initialize
- Repeat up to the maximum number of cycles:
- Each ant builds a solution
- Improve the solutions by local search
- Reward the best solutions by adding pheromone
- Evaporate the pheromone trails

ACO-type algorithms have convergence proofs, which will not be covered in this course.

## Initialization (for TSP)

The m ants are randomly distributed over the n cities. Each ant is assigned a taboo list, which initially contains only its starting vertex. The pheromone trails are initialized to a non-zero positive value c: each edge ij of the graph has its pheromone trail noted *T_{ij} = c*.
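As a minimal sketch of this initialization step (the function name `initialize` and the default value of c are assumptions for illustration):

```python
import random

def initialize(n, m, c=0.1):
    # Pheromone trail T[i][j] = c on every edge of the complete graph.
    T = [[c] * n for _ in range(n)]
    # Each ant's taboo list starts with its (random) starting city.
    tabu = [[random.randrange(n)] for _ in range(m)]
    return T, tabu
```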

## Building a solution by local search

An ant k placed on a city i at time t chooses its destination city j according to:

- the visibility of this city, *n_{ij}* (the inverse of the distance between the cities for the TSP)
- the amount of pheromone on the path, *T_{ij}(t)*

The choice is random, following a probability in which two parameters a and b control the relative importance of the visibility and of the quantity of pheromone. If b = 0, the nearest cities are more likely to be selected and the algorithm is greedy. If a = 0, only the amplification of the pheromones acts, which leads to premature convergence and non-optimal results. The new city is added to the taboo list of the ant.
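A minimal sketch of this choice rule, under the document's convention that a weights visibility and b weights pheromone (the function name and roulette-wheel implementation are assumptions):

```python
import random

def choose_next_city(i, tabu, T, dist, a=1.0, b=1.0):
    # Candidate cities are those not yet in the ant's taboo list.
    candidates = [j for j in range(len(dist)) if j not in tabu]
    # Weight each candidate by visibility^a * pheromone^b.
    weights = [(1.0 / dist[i][j]) ** a * T[i][j] ** b for j in candidates]
    # Roulette-wheel selection proportional to the weights.
    r = random.random() * sum(weights)
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]
```

With b = 0 the weights reduce to visibility alone (greedy behavior); with a = 0 only the pheromone amounts drive the choice, as described above.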

Each ant proceeds in this way until it has visited all the cities.

## End of an iteration

For each ant k, we compute the length of its path *L_{k}(t)*, then we empty its taboo list (keeping only the starting city).

Then the pheromone tracks are updated:

*T_{ij}(t+1) = p·T_{ij}(t) + pheromones_{ij}(t)*, where we note

*p* a coefficient between 0 and 1, and

*pheromones_{ij}(t)* the amount of pheromone deposited by the ants on the edge ij during cycle t.

The evaporation of the trail is computed directly in this formula. Indeed, p represents the persistence of the trail, that is to say the fraction of the pheromone quantity that is kept for the next cycle. If p = 1, there is no evaporation and therefore no limitation of the autocatalytic phenomenon. If p = 0, the system has no memory and takes into account only the last cycle performed.

Each ant k passing through the edge ij increments the quantity of pheromone deposited according to the following calculation:

*pheromones_{ij}(t) += Q/L_{k}(t)*, where

*Q* represents a pheromone quota assigned to each ant (often Q = 100). The idea is to spread the pheromone homogeneously over the whole path traveled by an ant: the shorter the path, the more pheromone each of its edges receives.
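The full update (evaporation followed by deposit) can be sketched as follows, assuming a symmetric TSP with closed tours; the function name `update_pheromones` is an assumption:

```python
def update_pheromones(T, tours, lengths, p=0.5, Q=100.0):
    # T[i][j] <- p * T[i][j] + sum over ants of Q / L_k on every edge ij they used.
    n = len(T)
    deposit = [[0.0] * n for _ in range(n)]
    for tour, L in zip(tours, lengths):
        # A tour is closed: the last city connects back to the first.
        for a, b in zip(tour, tour[1:] + tour[:1]):
            deposit[a][b] += Q / L
            deposit[b][a] += Q / L  # symmetric TSP: same trail both ways
    for i in range(n):
        for j in range(n):
            T[i][j] = p * T[i][j] + deposit[i][j]
    return T
```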

We keep in memory the shortest path if it is better than the last one memorized. The ants then begin a new iteration from their original city (kept in their taboo list).

# MMAS

Max-min ant system (MMAS) is an efficient variant of the algorithm. Only the best ant deposits pheromone, and every trail is bounded by an upper and a lower limit. A trail is thus prevented from being reinforced too much, while the possibility of exploring any trail is preserved. This avoids premature convergence.
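The bounding step can be sketched as follows (the bound values `t_min` and `t_max` and the helper name are assumptions; in practice MMAS derives the bounds from the length of the best tour found so far):

```python
def clamp_trails(T, t_min, t_max):
    # Force every pheromone trail into the interval [t_min, t_max].
    for i in range(len(T)):
        for j in range(len(T)):
            T[i][j] = min(t_max, max(t_min, T[i][j]))
    return T
```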