5.5. Markov chains (classes markov_chain and dynamic_markov_chain)

Learning objectives

The classes markov_chain and dynamic_markov_chain

Antje, Alexandra, Stefan, and Joachim are playing with a ball, which they throw to one another; see Figure 5.35. Antje likes women twice as much as men and therefore throws the ball to Alexandra with twice the probability with which she throws it to Stefan or to Joachim. Alexandra does not like Antje and likes Stefan three times as much as she likes Joachim. Stefan likes only men and therefore always throws the ball to Joachim, never to a woman. Joachim likes all of his friends equally and therefore throws to each of the three others with equal probability. They play this game for many days, making a great many throws. The question is how often, in the long run, each one receives the ball.

Figure 5.35. A Markov chain

A Markov chain

Four persons are playing with a ball. If a player catches the ball, she or he immediately throws it on to a randomly selected person. The edges are labeled with transition probabilities. Thus a sequence of states results, a so-called Markov chain.

In such an experiment, a sequence of random states (random variables) is created. Each state zn takes its value from a finite set of possible values {a1,...,am}. The probability for state zn to have the value aj depends only on its predecessor state zn-1. Such a sequence of random variables is called a Markov chain. It can be proven that the relative frequencies of the values ai produced in such an experiment converge to a limit. So, for example, the ratio of the number of times Antje has held the ball to the overall number of throws converges.
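In symbols, this defining property (the standard formulation, using the notation of the text) reads:

```latex
P(z_n = a_j \mid z_{n-1} = a_i,\, z_{n-2}, \ldots, z_0)
  \;=\; P(z_n = a_j \mid z_{n-1} = a_i) \;=\; p_{ij}
```

That is, the whole history of the walk is irrelevant; only the current state determines the distribution of the next one.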

Such experiments can be performed with the help of the class markov_chain. An object of type markov_chain has an underlying graph in which each edge e has a corresponding non-negative integer weight w(e). For each node (with at least one leaving edge), the overall weight of the leaving edges must be positive. A random walk in such a graph starts at a start node z0 to be specified, and then successively performs steps according to the following rules: At the beginning, z0 is the current node. Let v be the current node at a certain point in time. If v has no leaving edges, no further step can be performed. If, in contrast, e0,...,ed are the edges leaving v and W is the overall weight of these edges, a step along ei is made with probability w(ei)/W. The target node of the traversed edge becomes the new current node.

The class markov_chain makes it possible to perform an arbitrary number n of steps and then to query the current node (state) zn. In addition, one can query how often each node has been visited. Further steps can then be performed.

The following program implements the Markov chain from Figure 5.35, performs 1,000,000 steps, and then prints the absolute and relative frequencies gained.

Filename: MarkovChain.C
LEDA users as of version 5.0: Note that header files are now partitioned into modules (subfolders)!
#include <LEDA/graph.h>
#include <LEDA/markov_chain.h>
#include <LEDA/edge_array.h>
#include <LEDA/node_array.h>
#include <LEDA/string.h>

using leda::graph;
using leda::markov_chain;
using leda::node;
using leda::edge;
using leda::node_array;
using leda::edge_array;
using leda::string;
using std::cout;

int main()
{
  graph G;

  node Antje = G.new_node();
  node Alexandra = G.new_node();
  node Stefan = G.new_node();
  node Joachim = G.new_node();

  node_array<string> name(G);
  name[Antje] = "Antje";
  name[Alexandra] = "Alexandra";
  name[Joachim] = "Joachim";
  name[Stefan] = "Stefan";

  edge An1 = G.new_edge(Antje, Alexandra);
  edge An2 = G.new_edge(Antje, Joachim);
  edge An3 = G.new_edge(Antje, Stefan);
  edge Al1 = G.new_edge(Alexandra, Stefan);
  edge Al2 = G.new_edge(Alexandra, Joachim);
  edge Jo1 = G.new_edge(Joachim, Alexandra);
  edge Jo2 = G.new_edge(Joachim, Antje);
  edge Jo3 = G.new_edge(Joachim, Stefan);
  edge St1 = G.new_edge(Stefan, Joachim);

  edge_array<int> weight(G);

  weight[An1] = 2;
  weight[An2] = 1;
  weight[An3] = 1;
  weight[Al1] = 3;
  weight[Al2] = 1;
  weight[Jo1] = 1;
  weight[Jo2] = 1;
  weight[Jo3] = 1;
  weight[St1] = 1;

  markov_chain M(G, weight, Stefan);

  const int num_of_steps = 1000000;

  M.step(num_of_steps);
  cout << "Current node after " << num_of_steps << " steps is ";
  cout << name[M.current_node()] << ".\n";

  cout << "Number of visits:\n";
  node v;
  forall_nodes(v, G) {
    cout << name[v] << ": " << M.number_of_visits(v);
    cout << ". In %: " << M.rel_freq_of_visit(v) << "\n";
    // Warning: "visit" not "visits"!
  }

  return 0;
}
A run of this program produced the following output:

Current node after 1000000 steps is Alexandra.
Number of visits:
Antje: 127410. In %: 0.12741
Alexandra: 190703. In %: 0.190703
Stefan: 301057. In %: 0.301057
Joachim: 380830. In %: 0.38083

As arguments, the constructor of markov_chain expects the underlying graph, an edge_array<int> that stores the edge weights, and the start node that the random walk is to start from.

A call of the method

M.step(num_of_steps);

performs num_of_steps steps of the random walk.

The absolute frequency with which a node v has been visited in this walk can be queried with

M.number_of_visits(v);

and with

M.rel_freq_of_visit(v);

the relative frequency (with respect to the overall number of steps performed).

The class dynamic_markov_chain

The class dynamic_markov_chain differs from the class markov_chain in that it allows the edge weights to be modified dynamically, that is, even after initialization.

A call of

M.set_weight(edge e, int w);

sets the weight of edge e to a value of w.

Further information

Further information can be found on the manual pages of the classes markov_chain and dynamic_markov_chain.