*Learning objectives*

The classes `markov_chain` and `dynamic_markov_chain`

Antje, Alexandra, Stefan, and Joachim are playing with a ball, which they throw to one another; see Figure 5.35. Antje likes women twice as much as men and therefore throws the ball to Alexandra with twice the probability with which she throws it to Stefan or to Joachim. Alexandra does not like Antje and likes Stefan three times as much as she likes Joachim. Stefan likes only men and therefore immediately throws the ball to Joachim, but never to a woman. Joachim likes all of his friends equally and therefore distributes his favor among the three others with equal probability. They play this game for many days, making a great many throws. The question is how often, in the long run, each one receives the ball.
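The throwing behavior described above can be collected in a matrix of transition probabilities (rows: thrower; columns: receiver; order: Antje, Alexandra, Stefan, Joachim):

$$P = \begin{pmatrix}
0 & 1/2 & 1/4 & 1/4 \\
0 & 0 & 3/4 & 1/4 \\
0 & 0 & 0 & 1 \\
1/3 & 1/3 & 1/3 & 0
\end{pmatrix}$$

Each row sums to 1; Stefan's row, for example, says that he passes to Joachim with probability 1.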

In such an experiment, a sequence of random states (random
variables) is created. Each state z_{n}
originates from a finite set of possible values
{a_{1},...,a_{m}}. The
probability for state z_{n} to have a value of
a_{j} only depends on its predecessor state
z_{n-1}. Such a sequence of random variables
is called a *Markov chain*. It can be proven that the relative
frequencies of the values a_{i} that are produced in such an
experiment converge to a limit. So, for example, the quotient of the
number of times that Antje receives the ball and the overall number
of throws converges.

Such experiments can be performed with the help of the class
`markov_chain`. An object of type `markov_chain` has an
underlying graph in which each edge e has a corresponding
non-negative integer weight w(e). For each node with at least one
leaving edge, the overall weight of the leaving edges must be
positive. A random walk in such a graph starts at a start node
z_{0}; in each step, the walk follows one of the edges leaving the
current node, where an edge e is chosen with a probability
proportional to its weight w(e). The class `markov_chain` makes it
possible to perform an arbitrary number n of steps and then to query
the current node (state) z_{n}. In addition, it can be queried which
node has been visited how often. Afterwards, further steps can be
performed.

The following program implements the Markov chain from Figure 5.35, performs 1,000,000 steps, and then prints the absolute and relative frequencies obtained.

```cpp
#include <LEDA/graph.h>
#include <LEDA/markov_chain.h>
#include <LEDA/edge_array.h>
#include <LEDA/node_array.h>
#include <LEDA/string.h>

using leda::graph;
using leda::markov_chain;
using leda::node;
using leda::edge;
using leda::node_array;
using leda::edge_array;
using leda::string;
using std::cout;

int main()
{
  graph G;

  node Antje     = G.new_node();
  node Alexandra = G.new_node();
  node Stefan    = G.new_node();
  node Joachim   = G.new_node();

  node_array<string> name(G);
  name[Antje]     = "Antje";
  name[Alexandra] = "Alexandra";
  name[Joachim]   = "Joachim";
  name[Stefan]    = "Stefan";

  edge An1 = G.new_edge(Antje, Alexandra);
  edge An2 = G.new_edge(Antje, Joachim);
  edge An3 = G.new_edge(Antje, Stefan);
  edge Al1 = G.new_edge(Alexandra, Stefan);
  edge Al2 = G.new_edge(Alexandra, Joachim);
  edge Jo1 = G.new_edge(Joachim, Alexandra);
  edge Jo2 = G.new_edge(Joachim, Antje);
  edge Jo3 = G.new_edge(Joachim, Stefan);
  edge St1 = G.new_edge(Stefan, Joachim);

  edge_array<int> weight(G);
  weight[An1] = 2; weight[An2] = 1; weight[An3] = 1;
  weight[Al1] = 3; weight[Al2] = 1;
  weight[Jo1] = 1; weight[Jo2] = 1; weight[Jo3] = 1;
  weight[St1] = 1;

  markov_chain M(G, weight, Stefan);

  const int num_of_steps = 1000000;
  M.step(num_of_steps);

  cout << "Current node after " << num_of_steps << " steps is ";
  cout << name[M.current_node()] << ".\n";

  cout << "Number of visits:\n";
  node v;
  forall_nodes(v, G) {
    cout << name[v] << ": " << M.number_of_visits(v);
    cout << ". In %: " << M.rel_freq_of_visit(v) << "\n";
    // Caution: "visit", not "visits"!
  }
}
```

A run of this program produced the following output:

```
Current node after 1000000 steps is Alexandra.
Number of visits:
Antje: 127410. In %: 0.12741
Alexandra: 190703. In %: 0.190703
Stefan: 301057. In %: 0.301057
Joachim: 380830. In %: 0.38083
```

As arguments, the constructor of `markov_chain` expects the
underlying graph, an `edge_array<int>` that stores the edge weights,
and the start node that the random walk is to start from.

A call of the method

```cpp
M.step(num_of_steps);
```

performs `num_of_steps` steps of the random walk.

The absolute frequency with which a node `v` has been visited in
this walk can be queried with

```cpp
M.number_of_visits(v);
```

A call of

```cpp
M.rel_freq_of_visit(v);
```

returns the relative frequency (with respect to the overall number
of steps performed).

The class `dynamic_markov_chain` differs from the class
`markov_chain` in that it allows the edge weights to be modified
dynamically, that is, also after the initialization. A call of

```cpp
M.set_weight(e, w);
```

sets the weight of the edge e to the non-negative integer value w.
Further information can be found on the manual pages of the classes
`markov_chain` and `dynamic_markov_chain`.