Discrete-time Markov chain

Basics

When dealing with certain random phenomena, we notice that the future depends only on the present, not on the past.

Let (Xn) be a sequence of random variables taking values in a finite set of J states; Xn = j means that the system is in state j at time n. We say that (Xn) is a Markov chain if, for any n and any states i0, …, in+1:

P(Xn+1 = in+1 | Xn = in, …, X0 = i0) = P(Xn+1 = in+1 | Xn = in)

Such a process is said to be memoryless. The value of this probability, that of passing from state in to state in+1, is denoted p(in, in+1).

Note that the law of X0 is not fixed by the definition; it is called the initial law. The vector of initial probabilities is denoted π, with πj = P(X0 = j) for every state j of the finite set and Σj πj = 1.

The vector of transition probabilities out of state i is denoted vi = (pij), with j ranging over the finite set and Σj pij = 1.

The matrix of transition probabilities is obtained by stacking these row vectors. All its entries are non-negative, and the sum of the entries of each row is equal to 1. The powers of a transition matrix (or stochastic matrix) are themselves stochastic matrices.

A Markov chain is said to be homogeneous in time if the transition probabilities are not affected by a translation in time, that is, if they do not depend on n: the transition probabilities remain stationary over time.

Let’s take an example. As long as a player has money, he plays, betting £1 each round. He gains £1 with probability p and loses his bet with probability 1 - p, where p is between 0 and 1. The game stops when he is ruined or when he has £3.

We can define four states: 0, 1, 2, 3, representing the money he has. The transition matrix is as follows:

          0      1      2      3
    0 [   1      0      0      0  ]
    1 [ 1 - p    0      p      0  ]
    2 [   0    1 - p    0      p  ]
    3 [   0      0      0      1  ]

(rows: current fortune; columns: next fortune)
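
To make this concrete, here is a minimal sketch in NumPy that builds this matrix and checks that it is stochastic (p = 0.5 is an arbitrary choice for the example):

    import numpy as np

    # Gambler's chain: states 0..3 are the player's fortune in £.
    # He bets £1 each round, wins with probability p, loses with 1 - p;
    # 0 (ruin) and 3 (target reached) are absorbing states.
    p = 0.5  # arbitrary value in (0, 1)

    Q = np.array([
        [1.0,   0.0,   0.0, 0.0],  # from £0: stays at £0
        [1 - p, 0.0,   p,   0.0],  # from £1: down to £0 or up to £2
        [0.0,   1 - p, 0.0, p  ],  # from £2: down to £1 or up to £3
        [0.0,   0.0,   0.0, 1.0],  # from £3: stays at £3
    ])

    # A stochastic matrix has non-negative entries and rows summing to 1.
    assert np.all(Q >= 0) and np.allclose(Q.sum(axis=1), 1.0)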

Graph

The graph associated with a Markov process is formed by vertices representing the states of the finite set and arcs corresponding to the possible transitions, each arc from i to j carrying the probability pij.

[Figure: transition graph of a Markov chain]

Denote by Q the transition matrix. A sequence of states (x1, x2, …, xm) defines a path of length m from x1 to xm in the graph associated with the homogeneous Markov chain if and only if Q(x1, x2)Q(x2, x3)…Q(xm-1, xm) > 0.

The probability of being in state j after n iterations, starting from state i, is the entry (i, j) of the n-th power of the transition matrix, Qn(i, j). More generally, multiplying the initial row vector by Qn gives the whole distribution after n steps.
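
A sketch of this computation, reusing the gambler's matrix above (n = 10 and the start with £1 are arbitrary choices for the example):

    import numpy as np

    p = 0.5
    Q = np.array([[1.0,   0.0,   0.0, 0.0],
                  [1 - p, 0.0,   p,   0.0],
                  [0.0,   1 - p, 0.0, p  ],
                  [0.0,   0.0,   0.0, 1.0]])

    n = 10
    Qn = np.linalg.matrix_power(Q, n)

    # Q^n is still stochastic: each of its rows sums to 1.
    assert np.allclose(Qn.sum(axis=1), 1.0)

    # Probability of being in state j after n steps, starting from state i:
    i, j = 1, 3
    print(f"Q^{n}({i},{j}) = {Qn[i, j]:.4f}")

    # Distribution after n steps from an initial row vector:
    pi0 = np.array([0.0, 1.0, 0.0, 0.0])  # the player starts with £1
    print(pi0 @ Qn)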

Reduced graphs

A state j is accessible from a state i if there is a strictly positive probability of reaching state j from state i in a finite number of transitions. From a graph-theory point of view, j is accessible from i if there is a path from i to j.

If state j is accessible from state i and, conversely, state i is accessible from state j, then we say that states i and j communicate. Equivalently, i and j lie on a common circuit of the graph.

The reduced graph is obtained by partitioning the states of a Markov chain into equivalence classes such that all the states of a class communicate with each other.

The equivalence classes are as follows:

  • a class is called transient (or transitory) if the process can leave it; in that case, it will never be able to return to it;
  • a class is said to be recurrent or persistent if it is impossible to leave it. If a recurrent class consists of a single state, that state is said to be absorbing.

[Figure: reduced graph showing transient and recurrent classes]

If the partition into equivalence classes yields a single class, that is, if all states communicate, the Markov chain is said to be irreducible. A finite Markov chain has at least one recurrent class.
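
Communicating classes are exactly the strongly connected components of the transition graph, so they can be computed with a standard graph routine. A sketch using SciPy on the gambler's chain, classifying each class by the definition above (recurrent if no arc leaves it):

    import numpy as np
    from scipy.sparse.csgraph import connected_components

    p = 0.5
    Q = np.array([[1.0,   0.0,   0.0, 0.0],
                  [1 - p, 0.0,   p,   0.0],
                  [0.0,   1 - p, 0.0, p  ],
                  [0.0,   0.0,   0.0, 1.0]])

    # Arc i -> j in the transition graph whenever Q[i, j] > 0.
    adjacency = (Q > 0).astype(int)
    n_classes, labels = connected_components(adjacency, directed=True,
                                             connection='strong')

    for c in range(n_classes):
        states = np.flatnonzero(labels == c)
        # A class is recurrent iff no arc leaves it, transient otherwise.
        leaves = any(Q[i, j] > 0
                     for i in states
                     for j in range(Q.shape[0]) if labels[j] != c)
        print(sorted(states.tolist()), "transient" if leaves else "recurrent")
    # -> {1, 2} is transient; {0} and {3} are recurrent (absorbing).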

Example

We are interested in the development of a natural forest on a parcel in a temperate region. Our model has 3 states. State 1 is vegetation consisting of grasses or other species with a low carbon footprint; state 2 corresponds to the presence of shrubs, whose rapid development requires maximum sunshine and whose carbon yield is maximal; state 3 corresponds to larger trees that can develop in a semi-sunny environment (considered as forest). If we denote these three states h, a, f respectively (h for herbaceous, a for arbuste, i.e. shrub, f for forest), the set of possible states for a given point of this parcel is S = {h, a, f}. On the parcel, a large number of points on the ground are located on a regular grid, and the state of the vegetation at each of these points is recorded at a fixed time interval. This type of model is called a cellular automaton.

By observing the evolution during one time interval, it is possible to determine, for each state i ∈ S, the proportion of points that passed to state j ∈ S, and to denote that proportion pij. If the different proportions (there are 9 of them) change little from one time interval to the next, we can assume them unchanged over time and regard them as the probabilities, for any point, of passing from state i to state j during a time interval. Suppose for example that, on this parcel, these probabilities are the following:

[Transition matrix P over the states {h, a, f}]

If X0 denotes the state of a point at time t = 0 and X1 the state of the same point at t = 1, then, for example, the probability of passing from the shrub state at t = 0 to the forest state at t = 1 is written P(X1 = f | X0 = a) and is worth 0.4.

The set of states S and the transition matrix P constitute an example of a Markov chain. We can also represent this Markov chain by the following graph:

[Figure: transition graph over the states {h, a, f}]

In this model, we can compute the probability of any succession of states, called a trajectory of the Markov chain. For example, the probability that at a point of the parcel we observe the succession of states (h, h, a, f, f) is calculated as follows:

P(X0 = h, X1 = h, X2 = a, X3 = f, X4 = f) = π0(h) · p(h,h) · p(h,a) · p(a,f) · p(f,f)

where π0(h) is the probability of being in state h at the initial time t = 0.
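
A sketch of this chain-rule computation in NumPy. Note that the text fixes only the single entry P(a → f) = 0.4; all other entries of the matrix below are made-up placeholders for illustration:

    import numpy as np

    states = {"h": 0, "a": 1, "f": 2}

    # Hypothetical transition matrix: only P[a, f] = 0.4 comes from the
    # text; every other entry is invented for the example.
    P = np.array([[0.50, 0.40, 0.10],   # from h
                  [0.20, 0.40, 0.40],   # from a
                  [0.05, 0.05, 0.90]])  # from f

    pi0 = np.array([0.5, 0.25, 0.25])   # initial distribution over (h, a, f)

    def trajectory_probability(path):
        """Probability of observing the given succession of states."""
        idx = [states[s] for s in path]
        prob = pi0[idx[0]]
        for i, j in zip(idx, idx[1:]):
            prob *= P[i, j]
        return prob

    # pi0(h) * p(h,h) * p(h,a) * p(a,f) * p(f,f)
    print(trajectory_probability(["h", "h", "a", "f", "f"]))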

Observing the state in which the different points of the parcel are at the initial time t = 0 makes it possible to determine the initial proportions of each of the 3 states. For this, we note the state of each point and compute the proportion of points in each of the possible states. Each proportion can be seen as the probability for a point of the parcel to be in one of the states at the initial time. Thus, if we have for example π0 = (0.5, 0.25, 0.25), it means that half of the points of the parcel are initially in state h, a quarter in state a and a quarter in state f. But we can also interpret this by considering that any point has a 50% chance of being in state h, a 25% chance of being in state a and a 25% chance of being in state f. For this reason, the vector of proportions of points in each state,

π0 = (π0(h), π0(a), π0(f))

is called the initial probability law or the initial distribution. When choosing a Markov chain model, the goal is often to determine the evolution of the distribution of states over time. For example, if one third of the parcel considered above is covered by forest at the initial time, will this proportion grow towards 100%, on the contrary tend towards zero, or approach a limit value, a kind of ecological balance?

We will see that if we know the initial distribution we can calculate the distribution at time t = 1, then at time t = 2 and so on. Let’s calculate for t = 1:

π1(h) = π0(h) · p(h,h) + π0(a) · p(a,h) + π0(f) · p(f,h)
π1(a) = π0(h) · p(h,a) + π0(a) · p(a,a) + π0(f) · p(f,a)
π1(f) = π0(h) · p(h,f) + π0(a) · p(a,f) + π0(f) · p(f,f)

It follows that π1(h) is the scalar product of the vector π0 with the first column of the matrix P. Similarly, π1(a) is the scalar product of π0 with the second column of P, and π1(f) is the scalar product of π0 with the third column of P. We summarize this as: π1 = π0 P.

More generally, πt+1 = πt P, hence πt = π0 Pt (the t-th power of P).
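
Iterating this relation gives the distribution at any time. A sketch, reusing the same hypothetical matrix as above (only P(a → f) = 0.4 is from the text):

    import numpy as np

    P = np.array([[0.50, 0.40, 0.10],
                  [0.20, 0.40, 0.40],
                  [0.05, 0.05, 0.90]])

    pi = np.array([0.5, 0.25, 0.25])  # pi_0 over (h, a, f)
    for t in range(1, 6):
        pi = pi @ P                   # pi_t = pi_{t-1} P = pi_0 P^t
        print(f"t = {t}:", np.round(pi, 4))
    # The successive distributions approach a limit, which is one way to
    # answer the "ecological balance" question raised above.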
