Stochastic Hill Climbing Algorithm

The strategy of the Stochastic Hill Climbing algorithm is to iterate the process of randomly selecting a neighbor of a candidate solution and accepting it only if it results in an improvement. The strategy was proposed to address a limitation of deterministic hill climbing techniques, which are likely to get stuck in local optima due to their greedy acceptance of neighboring moves.
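
As a rough illustration, the following is a minimal Python sketch of this loop applied to a OneMax-style bitstring maximization problem. The problem, the single-bit-flip neighborhood, the iteration budget, and all function names are illustrative assumptions rather than a prescribed implementation.

```python
import random

def onemax(bitstring):
    # Objective to maximize: the number of 1-bits (illustrative problem).
    return sum(bitstring)

def random_neighbor(bitstring):
    # Randomly select a neighbor by flipping one randomly chosen bit.
    neighbor = list(bitstring)
    i = random.randrange(len(neighbor))
    neighbor[i] = 1 - neighbor[i]
    return neighbor

def stochastic_hill_climbing(num_bits=64, max_iterations=1000):
    current = [random.randint(0, 1) for _ in range(num_bits)]
    current_cost = onemax(current)
    for _ in range(max_iterations):
        candidate = random_neighbor(current)
        candidate_cost = onemax(candidate)
        # Accept the randomly selected neighbor only if it improves.
        if candidate_cost > current_cost:
            current, current_cost = candidate, candidate_cost
    return current, current_cost

if __name__ == "__main__":
    best, cost = stochastic_hill_climbing()
    print(f"best cost: {cost}")
```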

Stochastic Hill Climbing was designed to be used in discrete domains with explicit neighbors, such as combinatorial optimization (as opposed to continuous function optimization). The algorithm's strategy may be applied to continuous domains by making use of a step size to define candidate-solution neighbors (such as Localized Random Search and Fixed Step-Size Random Search).
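
For the continuous case, a neighbor can be defined by perturbing each coordinate within a fixed step size, in the spirit of Fixed Step-Size Random Search. The sketch below assumes a simple sum-of-squares (sphere) objective to be minimized; the objective, bounds, and step size are illustrative choices, not part of the algorithm itself.

```python
import random

def sphere(x):
    # Continuous test objective (minimization): sum of squared coordinates.
    return sum(xi ** 2 for xi in x)

def step_neighbor(x, step_size):
    # Define a neighbor by perturbing each coordinate within +/- step_size.
    return [xi + random.uniform(-step_size, step_size) for xi in x]

def fixed_step_size_search(dims=3, bounds=(-5.0, 5.0), step_size=0.1,
                           max_iterations=2000):
    current = [random.uniform(*bounds) for _ in range(dims)]
    current_cost = sphere(current)
    for _ in range(max_iterations):
        candidate = step_neighbor(current, step_size)
        candidate_cost = sphere(candidate)
        if candidate_cost < current_cost:  # minimization: accept improvements
            current, current_cost = candidate, candidate_cost
    return current, current_cost
```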

Stochastic Hill Climbing is a local search technique (as opposed to a global search) and may be used to refine a result after the execution of a global search algorithm. Even though the technique uses a stochastic process, it can still get stuck in local optima. Neighbors with better or equal cost should be accepted, allowing the technique to navigate across plateaus in the response surface.
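
One way to express this acceptance rule is a non-strict comparison, as in the small helper below. The function is hypothetical and shown only to make the plateau behaviour explicit.

```python
def accept(candidate_cost, current_cost, maximize=True):
    # Non-strict comparison: equal-cost neighbors are accepted, so the
    # search can drift sideways across a plateau rather than stall on it.
    if maximize:
        return candidate_cost >= current_cost
    return candidate_cost <= current_cost
```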

The algorithm can be restarted and repeated a number of times after it converges to provide an improved result (called Multiple Restart Hill Climbing). The procedure can be applied to multiple candidate solutions concurrently, allowing multiple algorithm runs to be performed at the same time (called Parallel Hill Climbing).
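
A restart wrapper can be as simple as the sketch below, which reuses the stochastic_hill_climbing function from the earlier example and keeps the best result over a number of independent runs; the restart count is an arbitrary choice.

```python
def multiple_restart_hill_climbing(num_restarts=10):
    # Repeat independent hill climbs from fresh random starting points
    # and keep the best solution found across all restarts (maximization).
    best, best_cost = None, float("-inf")
    for _ in range(num_restarts):
        candidate, cost = stochastic_hill_climbing()
        if cost > best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

A parallel variant could distribute these independent runs across workers, for example with Python's multiprocessing module, since the runs do not share any state.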
