## Basics

When we model a random phenomenon with a Markov chain, we assume that the future depends only on the present, not on the past.

Let (X_{n}) be a sequence of random variables taking values in a finite set of J states, where X_{t}=j means that the system is in state j at time t. We say that (X_{n}) is a Markov chain if, for any n and any states i_{0}, …, i_{n+1}:

P(X_{n+1}=i_{n+1} | X_{n}=i_{n}, …, X_{0}=i_{0}) = P(X_{n+1}=i_{n+1} | X_{n}=i_{n})

Such a process is said to be memoryless. When this probability does not depend on n, the chain is said to be homogeneous, and the probability of passing from state i to state j is noted p_{ij}.

We note that X_{0} is not fixed by the definition; its law is called the initial law. The vector of initial probabilities is noted **π**, with **π**_{j}=P(X_{0}=j) for j in the finite set, and the sum of the **π**_{j} equal to 1.

The vector of transition probabilities from a state i is noted **p**_{i} = (p_{i0}, …, p_{ij}, …), with j included in the finite set and the sum of the p_{ij} equal to 1.

The matrix of transition probabilities is obtained by stacking the transition probability vectors as rows. All its entries are non-negative, and the sum of the entries on each row is equal to 1. The powers of a transition matrix (also called a stochastic matrix) are themselves stochastic matrices.
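As a quick sanity check, here is a minimal numpy sketch of this last property; the 3×3 matrix values are made up for illustration.

```python
import numpy as np

# A hypothetical 3-state stochastic matrix: entries are non-negative
# and each row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

def is_stochastic(M, tol=1e-12):
    """Check that every entry is non-negative and every row sums to 1."""
    M = np.asarray(M)
    return bool((M >= -tol).all() and np.allclose(M.sum(axis=1), 1.0))

# Powers of a stochastic matrix are again stochastic.
for n in range(1, 6):
    assert is_stochastic(np.linalg.matrix_power(P, n))
```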

Let’s take an example. As long as a player has money, he plays by betting £1. He wins £1 with probability p and loses his bet with probability 1 − p, where 0 < p < 1. The game stops when he is ruined (£0) or when he reaches £3.

We can define four states 0, 1, 2, 3, representing the money he has; states 0 and 3 are absorbing. The transition matrix is as follows:

| from \ to | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| **0** | 1 | 0 | 0 | 0 |
| **1** | 1 − p | 0 | p | 0 |
| **2** | 0 | 1 − p | 0 | p |
| **3** | 0 | 0 | 0 | 1 |
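This chain can be probed numerically; a sketch with numpy, where the long-run behaviour is obtained by brute-force matrix powers (with a fair bet p = 0.5 and a start at £1, the classical gambler's-ruin result gives ruin with probability 2/3 and success with probability 1/3):

```python
import numpy as np

def gambler_matrix(p):
    """Transition matrix of the £3 gambler's-ruin chain.

    States 0..3 are the player's fortune; 0 (ruin) and 3 (goal) are
    absorbing, and from 1 or 2 the player moves up with probability p
    and down with probability 1 - p.
    """
    return np.array([
        [1.0,   0.0,   0.0, 0.0],   # broke: game over
        [1 - p, 0.0,   p,   0.0],   # £1: lose -> 0, win -> 2
        [0.0,   1 - p, 0.0, p  ],   # £2: lose -> 1, win -> 3
        [0.0,   0.0,   0.0, 1.0],   # £3: goal reached
    ])

Q = gambler_matrix(0.5)
# After many steps the chain is (almost surely) absorbed in 0 or 3;
# row 1 of Q^100 is the distribution when starting with £1.
dist = np.linalg.matrix_power(Q, 100)[1]
```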

## Graph

The evolution of a Markov chain can be represented by a directed, weighted graph: the vertices are the states, and there is an edge from a state i to a state j whenever p_{ij} > 0, with the edge weighted by p_{ij}.

Note Q the transition matrix. A sequence of states (x_{1}, x_{2}, …, x_{m}) defines a path of length m − 1 from x_{1} to x_{m} in the graph associated with the homogeneous Markov chain if and only if Q(x_{1}, x_{2})Q(x_{2}, x_{3})…Q(x_{m−1}, x_{m}) > 0.

The probability of being in a state j, starting from a state i, after n iterations is the entry Q^{n}(i,j) of the n-th power of the transition matrix. More generally, the distribution after n steps is obtained by multiplying the initial vector by Q^{n}.
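A small numpy sketch of this equivalence, on a hypothetical two-state chain: the (i, j) entry of Q^{n} matches the result of propagating a point-mass initial vector n times.

```python
import numpy as np

# Hypothetical two-state chain (states 0 and 1), for illustration only.
Q = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
])

n, i, j = 4, 0, 1
# n-step transition probability as the (i, j) entry of Q^n ...
p_matrix = np.linalg.matrix_power(Q, n)[i, j]

# ... or by propagating a point-mass initial vector through n steps.
v = np.zeros(2)
v[i] = 1.0
for _ in range(n):
    v = v @ Q
assert np.isclose(v[j], p_matrix)
```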

## Reduced graphs

A state j is accessible from a state i if there is a strictly positive probability of reaching state j from state i in a finite number of transitions. From a graph-theory point of view, j is accessible from i if there is a path from i to j.

A reduced graph is obtained by partitioning the states of a Markov chain into equivalence classes such that all the states of a class communicate with each other, i.e. each is accessible from the other.

The equivalence classes are as follows:

- a class is called transient if it is possible to leave it, but in that case the process will never be able to return to it;
- a class is said to be recurrent (or persistent) if it is impossible to leave it. If a recurrent class is composed of a single state, that state is said to be absorbing.

If the partition into equivalence classes yields a single class, i.e. all the states communicate, the Markov chain is said to be irreducible. A finite Markov chain always has at least one recurrent class.
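The partition into communicating classes can be computed directly from the transition matrix; a sketch using a transitive-closure approach (the function name is my own), applied to the gambler's-ruin chain from the earlier example:

```python
import numpy as np

def communicating_classes(Q):
    """Partition states into classes where i and j communicate,
    i.e. each is reachable from the other in some number of steps."""
    Q = np.asarray(Q)
    n = len(Q)
    # reach[i, j] is True if j is accessible from i (including i itself).
    reach = np.eye(n, dtype=bool) | (Q > 0)
    for k in range(n):  # Floyd-Warshall-style transitive closure
        reach |= reach[:, k:k + 1] & reach[k:k + 1, :]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

# Gambler's-ruin chain with p = 0.5: {1, 2} is a transient class,
# {0} and {3} are absorbing (hence recurrent) classes.
Q = [[1.0, 0.0, 0.0, 0.0],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.0, 0.0, 0.0, 1.0]]
classes = communicating_classes(Q)  # [[0], [1, 2], [3]]
```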

## Example

We are interested in the development of a natural forest on a parcel in a temperate region. Our model has 3 states. State 1 is that of vegetation consisting of grasses or other species with a low carbon footprint; state 2 corresponds to the presence of shrubs, whose rapid development requires maximum sunshine and whose carbon yield is maximum; state 3 is that of larger trees that can develop in a semi-sunny environment (considered as forest). If we denote these three states h, a and f respectively (h for hay, a for average, f for forest), the set of possible states for a given point of this parcel is S = {h, a, f}. On the parcel, a large number of points on the ground are located on a regular grid, and the state of the vegetation at each of these points is recorded at a fixed time interval. This type of model is called a cellular automaton.

By observing the evolution over one time interval, it is possible to determine, for each state i ∈ S, the proportion of points that passed to the state j ∈ S, and to note p_{ij} that proportion. If the different proportions (there are 9 of them) change little from one time interval to the next, we can assume they are constant over time and regard them as the probabilities for any point to pass from the state i to the state j during a time interval. Suppose for example that in this parcel these probabilities are the following:

If X_{0} denotes the state of a point at time t = 0 and X_{1} the state of the same point at t = 1, then, for example, the probability of the transition from the shrub state at t = 0 to the forest state at t = 1 is written P(X_{1}=f | X_{0}=a) and is worth 0.4.
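The estimation procedure described above, counting, for each starting state, the proportion of points that moved to each arrival state, can be sketched as follows; the observed (state at t, state at t + 1) pairs are invented for illustration.

```python
import numpy as np

states = ["h", "a", "f"]
idx = {s: k for k, s in enumerate(states)}

# Hypothetical observations over the grid: one (before, after) pair
# per monitored point during one time interval.
observed = [("h", "h"), ("h", "a"), ("a", "f"),
            ("a", "a"), ("f", "f"), ("h", "h")]

counts = np.zeros((3, 3))
for before, after in observed:
    counts[idx[before], idx[after]] += 1

# p_ij = proportion of points in state i that passed to state j;
# normalizing each row of counts gives an estimated transition matrix.
P_hat = counts / counts.sum(axis=1, keepdims=True)
```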

The set of states S and the transition matrix P constitute an example of a Markov chain. We can also represent this Markov chain by the following graph:

In this model, we can calculate the probability of any succession of states, called a trajectory of the Markov chain. For example, the probability that at a point of the parcel we observe the succession of states (h, h, a, f, f) is calculated as follows:

P(h, h, a, f, f) = π_{0}(h) · p_{hh} · p_{ha} · p_{af} · p_{ff}

where π_{0}(h) is the probability of being in the state h at the initial moment t = 0.
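This trajectory calculation can be sketched in numpy; since the forest transition matrix is not reproduced here, the values below are placeholders, except for P(X_{1}=f | X_{0}=a) = 0.4 taken from the text.

```python
import numpy as np

states = {"h": 0, "a": 1, "f": 2}

# Hypothetical transition matrix for (h, a, f): only the 0.4 in the
# a-row (a -> f) comes from the text; the rest are placeholder values
# chosen so that each row sums to 1.
P = np.array([
    [0.5, 0.4, 0.1],   # from h
    [0.2, 0.4, 0.4],   # from a (a -> f is the 0.4 from the text)
    [0.0, 0.1, 0.9],   # from f
])

def trajectory_probability(pi0, path):
    """P(X0 = x0, ..., Xm = xm) = pi0(x0) * product of the p_{x_k x_{k+1}}."""
    idx = [states[s] for s in path]
    prob = pi0[idx[0]]
    for a, b in zip(idx, idx[1:]):
        prob *= P[a, b]
    return prob

pi0 = np.array([0.5, 0.25, 0.25])   # initial distribution from the text
p = trajectory_probability(pi0, ["h", "h", "a", "f", "f"])
# 0.5 * 0.5 * 0.4 * 0.4 * 0.9 = 0.036 with these placeholder values
```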

The observation of the state in which the different points of the parcel are at the initial moment t_{0} makes it possible to determine the initial proportions of each of the 3 states: we note, for each point, the state in which it is, and compute the proportion of points in each of the possible states. Each proportion can be seen as the probability for a point of the parcel to be in one of the states at the initial time. Thus, if we have for example π_{0} = (0.5, 0.25, 0.25), it means that half of the points of the parcel are initially in the state h, a quarter in the state a and a quarter in the state f. But we can also interpret this by considering that any point has a 50% chance of being in the state h, a 25% chance of being in the state a and a 25% chance of being in the state f. For this reason, the vector of proportions of points in each state is called the initial probability law or the initial distribution.

When choosing a Markov chain model, the goal is often to determine the evolution of the distribution of states over time. For example, if the parcel considered above is covered for one third by forest at the initial moment, will this proportion grow, tending towards 100%, on the contrary tend towards zero, or approach a limit value, a kind of ecological balance?

We will see that if we know the initial distribution we can calculate the distribution at time t = 1, then at time t = 2, and so on. Let’s calculate for t = 1. By the law of total probability,

π_{1}(h) = π_{0}(h) p_{hh} + π_{0}(a) p_{ah} + π_{0}(f) p_{fh}

and similarly for π_{1}(a) and π_{1}(f).

It is deduced that π_{1}(h) is the scalar product of the vector π_{0} with the first column of the matrix P. Similarly, π_{1}(a) is the scalar product of π_{0} with the second column of P, and π_{1}(f) the scalar product of π_{0} with the third column of P. We summarize this as: π_{1}=π_{0}P.
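This one-step update is a single vector-matrix product, and iterating it gives the distribution at any time t; a numpy sketch, reusing the same placeholder forest matrix as above (the values are hypothetical).

```python
import numpy as np

# Placeholder transition matrix for (h, a, f) -- hypothetical values,
# not the elided matrix from the text.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.2, 0.4, 0.4],
    [0.0, 0.1, 0.9],
])
pi0 = np.array([0.5, 0.25, 0.25])

# pi_1(j) is the scalar product of pi_0 with column j of P, i.e. pi_1 = pi_0 P.
pi1 = pi0 @ P
assert np.isclose(pi1[0], pi0 @ P[:, 0])

# Iterating the update gives the distribution at time t: pi_t = pi_0 P^t.
pi_t = pi0.copy()
for _ in range(50):
    pi_t = pi_t @ P
```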