*Learning objectives*

- The class `dynamic_random_variate`
- A reasonable application of the method `resize()` of the class `array`

The loaded dice from the previous section may have brought us luck at cards, but they can never grant us luck in love. If we find ourselves in this awkward situation, we should under no circumstances try to drown our sorrows in alcohol, because then our way home might look like the one in Figure 2.20.

What we see here is the more or less *random walk* of a drunken
person. With every step forward, the drunk makes one step to the
left or to the right at random, staggering toward his saving
resting place in this manner. We now assume that his memory and
his decision-making ability are not completely out of action, and
that he is still capable of thoughts of the following kind: “I
think I just went to the left, so I should go to the right now,
shouldn't I? Oh, I don't care...” In other words: if he has just
gone to the left or to the right, the probability that he goes in
the same direction again in the next step should decrease.

If we want to simulate such a random walk, we obviously need a
random variable for the staggering directions “left” and
“right” whose probabilities can be adapted dynamically. The
LEDA class `dynamic_random_variate` accomplishes exactly this.

In the simulation we imagine that we stand on the real axis,
starting at 0. In the first step we walk to the left or to the
right with equal probability. Every time we have walked in one of
the two directions, we double the weight of the other direction.
(This does *not* mean that the corresponding probability is
doubled!) We make one million steps altogether. (Hopefully, an
actual way home is not that long.) In an array we count how often
we stood at each number and output the counts at the end.

```cpp
#include <LEDA/random_variate.h>
#include <LEDA/array.h>
#include <iostream>

using leda::dynamic_random_variate;
using leda::array;

enum direction {LEFT, RIGHT};

array<int> weight(LEFT, RIGHT);

inline void double_weight(dynamic_random_variate&, direction);

int main()
{
  weight[LEFT] = weight[RIGHT] = 1;
  dynamic_random_variate DRV(weight);

  array<int> counter(-1,1);
  counter[0] = 1; // Start at position 0
  counter[-1] = counter[1] = 0;

  const int num_walks = 1000000;
  int cur_pos = 0;

  // Do random walk
  for(int i = 0; i < num_walks; i++) {
    int dir = DRV.generate();
    if(dir == LEFT) { // Walk to the left
      cur_pos--;
      if(cur_pos < counter.low())
        counter.resize(counter.low() * 2, counter.high());
      counter[cur_pos]++;
      double_weight(DRV, RIGHT);
    }
    else { // Walk to the right
      cur_pos++;
      if(cur_pos > counter.high())
        counter.resize(counter.low(), counter.high() * 2);
      counter[cur_pos]++;
      double_weight(DRV, LEFT);
    }
  }

  // Find boundaries of walk
  int lbound = counter.low();
  while(counter[lbound] == 0) lbound++;
  int ubound = counter.high();
  while(counter[ubound] == 0) ubound--;

  // Print statistics
  for(int i = lbound; i <= ubound; i++) {
    std::cout.width(2);
    std::cout << i << ": " << counter[i] << "\n";
  }
}

inline void double_weight(dynamic_random_variate& DRV, direction dir)
{
  int otherdir = 1 - dir;
  if(weight[otherdir] > 1) {
    weight[otherdir] /= 2;
    DRV.set_weight(otherdir, weight[otherdir]);
  }
  else {
    weight[dir] *= 2;
    DRV.set_weight(dir, weight[dir]);
  }
}
```

An example run created the following output:

```
-6: 2
-5: 147
-4: 2499
-3: 21527
-2: 95324
-1: 228021
 0: 304723
 1: 228780
 2: 94966
 3: 21386
 4: 2482
 5: 139
 6: 5
```

This shows that we always sway around 0: the further we have deviated from the middle, the less probable it becomes that we deviate even further.

As with the class `random_variate`, the definition of an object
of class `dynamic_random_variate` requires an array that holds
the weights of the random values to be generated:

dynamic_random_variate DRV(weight);

In contrast to the former class, however, the individual weights can be changed at run time by a call of the method

DRV.set_weight(i, w);

Here `i` refers to the corresponding index of the weight array
and `w` is the new weight of the `i`-th random value.

The above program is an example of a reasonable use of the
method `resize()` of LEDA arrays and, at the same time, of LEDA
arrays with negative indices. The original array bounds are -1
and 1. Whenever we cross one of these bounds, we double the
array bound in the corresponding direction. This doubling
strategy gives us enough “breathing room” until the next
doubling; that is, it has constant *amortized* time cost.

When doubling the weights in the function `double_weight()`, we
make use of a trick: we cannot simply double the corresponding
weight every time, because we would very soon exceed the range of
the type `int` (after 31 or 63 doublings, depending on the
machine architecture). Instead, we observe that with only two
possible random values, the relative weight of a value can also
be doubled by halving the weight of the other one. So if the
other weight is larger than 1, we halve it; otherwise, we double
the weight itself. Once again it should be noted that the weights
of the classes `random_variate` and `dynamic_random_variate`
must always be *integral*!

More about the class `dynamic_random_variate` can be found on the
corresponding manual page.