Review statement of Problem 4.
Solution 1: First of all, we note that the game can be won by player \(L\) in a number of rounds that is not less than \(l\) and not more than \(l + m - 1\).
Therefore, by the Theorem of Addition of Probabilities,[13] we can represent the desired probability \((L)\) as the sum \((L)_l + (L)_{l+1} + \cdots + (L)_{l+i} + \cdots + (L)_{l+m-1}\), where \((L)_{l+i}\) denotes the probability that the game ends in exactly \(l+i\) rounds with a win for player \(L\).
And in order for the game to be won by player \(L\) in \(l+i\) rounds, that player must win the \((l+i)\)th round and must win exactly \(l-1\) of the previous \(l+i-1\) rounds.
Hence, by the Theorem of Multiplication of Probabilities,[14] the value of \((L)_{l+i}\) must equal the product of the probability that player \(L\) wins the \((l+i)\)th round and the probability that player \(L\) wins exactly \(l-1\) of the first \(l+i-1\) rounds.
This last probability, of course, coincides with the probability that, in \(l+i-1\) independent experiments, an event whose probability in each experiment is \(p\) appears exactly \(l-1\) times.
The probability that player \(L\) wins the \((l+i)\)th round is equal to \(p\), as is the probability of winning any round.
Then[15] \[(L)_{l+i} = p \frac{1\cdot 2\cdot \cdots\cdot (l+i-1)}{\left(1\cdot 2\cdot \cdots\cdot i\right)\left(1\cdot 2\cdot \cdots\cdot (l-1)\right)} p^{l-1} q^i = \frac{l (l+1)\cdot \cdots\cdot (l+i-1)}{1\cdot 2\cdot \cdots\cdot i} p^l q^i,\] and finally, \[(L) = p^l\left\lbrace 1 + \frac{l}{1}q + \frac{l(l+1)}{1\cdot 2}q^2 + \cdots + \frac{l(l+1)\cdots(l+m-2)}{1\cdot 2\cdot \cdots\cdot(m-1)} q^{m-1}\right\rbrace.\]
In a similar way, we find \[(M) = q^m\left\lbrace 1 + \frac{m}{1}p + \frac{m(m+1)}{1\cdot 2}p^2 + \cdots + \frac{m(m+1)\cdots(m+l-2)}{1\cdot 2\cdot \cdots\cdot(l-1)} p^{l-1}\right\rbrace.\]
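To make the two series concrete, take \(l = 2\) and \(m = 3\) (illustrative values of ours, not from the text). Then \[(L) = p^2\left(1 + 2q + 3q^2\right), \qquad (M) = q^3\left(1 + 3p\right),\] and substituting \(q = 1 - p\) shows that these two expressions indeed sum to 1, in agreement with the remark that follows.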
However, it is sufficient to calculate one of these quantities, since the sum \((L) + (M)\) must reduce to 1.
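As a quick numerical check, here is a minimal Python sketch (the function names prob_L and prob_M and the sample values l = 3, m = 2, p = 0.6 are ours, chosen only for illustration). It sums the terms \((L)_{l+i} = \binom{l+i-1}{i} p^l q^i\) directly and confirms that \((L) + (M) = 1\):

```python
import math

def prob_L(l, m, p):
    """Probability that player L wins the match: L needs l more round
    wins, M needs m more, and L wins each round with probability p.
    Sums Markov's terms (L)_{l+i} = C(l+i-1, i) * p**l * q**i
    for i = 0, ..., m-1."""
    q = 1 - p
    return sum(math.comb(l + i - 1, i) * p**l * q**i for i in range(m))

def prob_M(l, m, p):
    """Probability that player M wins: the same series with the roles
    of the players (and hence l, m and p, q) interchanged."""
    return prob_L(m, l, 1 - p)

# Illustrative values (ours, not Markov's): l = 3, m = 2, p = 0.6.
l, m, p = 3, 2, 0.6
L, M = prob_L(l, m, p), prob_M(l, m, p)
print(L, M, L + M)  # expect roughly 0.4752, 0.5248, 1.0
assert math.isclose(L + M, 1.0)
```

Note that prob_M is obtained for free: by symmetry, \((M)\) is the same series with \(l\) and \(m\) interchanged and \(p\) replaced by \(q\).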
Continue to Markov's second solution of Problem 4.
Skip to Markov's numerical example for Problem 4.
Skip to statement of Problem 8.
[13] This “theorem,” presented in Chapter I, says (in modern notation): If \(A\) and \(B\) are disjoint, then \(P(A \cup B) = P(A) + P(B)\). We would now consider this an axiom of probability theory. Since Markov considered only experiments with finite, equiprobable sample spaces, he could “prove” this by a simple counting argument.
[14] This “theorem,” also presented in Chapter I, says (in modern notation): \(P(A \cap B) = P(A \mid B) \cdot P(B)\). Nowadays, we would consider this the definition of conditional probability, a term Markov never used. Again, he “proved” it using a simple counting argument. He then defined the concept of independent events.
[15] The conclusion \((L)_{l+i} = \binom{l+i-1}{i}p^l q^i\) is a variation of the negative binomial distribution; namely, if \(X\) represents the number of Bernoulli trials needed to attain \(r\) successes, then \(P(X=n) = \binom{n-1}{r-1}p^r q^{n-r},\ \ n\geq r\), where \(p\) is the probability of success on each trial and \(q=1-p\).
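To make the identification in the preceding note explicit: taking \(n = l+i\) and \(r = l\) in that formula gives \[P(X = l+i) = \binom{l+i-1}{l-1}\, p^l q^i = \binom{l+i-1}{i}\, p^l q^i = (L)_{l+i},\] where the middle step uses the symmetry \(\binom{l+i-1}{l-1} = \binom{l+i-1}{i}\).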