Review statement of Problem 4.
Solution 2: Noting that the game must end in no more than \(l+m-1\) rounds, we may assume that the players do not stop as soon as one of them has won the required number of rounds, but instead continue to play until exactly \(l+m-1\) rounds have been played.
Under this assumption, the probability that player \(L\) wins the game is equal to the probability that this same player \(L\) wins no fewer than \(l\) of the \(l+m-1\) rounds.
Indeed, if the game is won by player \(L\), then the number of rounds won by him reaches the value \(l\), and the subsequent rounds can only increase this number or leave it unchanged.
Conversely, if player \(L\) wins no fewer than \(l\) of the \(l+m-1\) rounds, then the number of rounds won by player \(M\) is less than \(m\); hence player \(L\) wins his \(l\)th round before player \(M\) manages to win \(m\) rounds, and so the game is won by player \(L\).
On the other hand, the probability that player \(L\) wins no fewer than \(l\) rounds out of \(l+m-1\) rounds coincides with the probability that, in \(l+m-1\) independent experiments, an event whose probability for each experiment is \(p\) appears no fewer than \(l\) times.
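To illustrate this equivalence numerically, here is a small Monte Carlo sketch (mine, not part of the original text) that compares the two descriptions of the game; the function names and the values of \(l\), \(m\), and \(p\) are my own illustrative choices.

```python
import random

def play_until_decided(l, m, p, rng):
    """Play rounds until L has won l rounds or M has won m rounds."""
    wins_L = wins_M = 0
    while wins_L < l and wins_M < m:
        if rng.random() < p:
            wins_L += 1
        else:
            wins_M += 1
    return wins_L == l  # True if L wins the game

def play_fixed_length(l, m, p, rng):
    """Play exactly l+m-1 rounds and check whether L won at least l of them."""
    wins_L = sum(rng.random() < p for _ in range(l + m - 1))
    return wins_L >= l

rng = random.Random(0)
l, m, p, trials = 3, 4, 0.55, 200_000  # illustrative values
est_stop = sum(play_until_decided(l, m, p, rng) for _ in range(trials)) / trials
est_fixed = sum(play_fixed_length(l, m, p, rng) for _ in range(trials)) / trials
print(est_stop, est_fixed)  # the two estimates should be close
```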
This latter probability is expressed by the known sum of the products \[\frac{1\cdot 2\cdot \cdots\cdot (l+m-1)}{1\cdot 2\cdot \cdots\cdot (l+i) \;\cdot\; 1\cdot 2\cdot \cdots\cdot (m-i-1)}\, p^{l+i} q^{m-i-1},\]
where \(i = 0, 1, 2, \dots, m-1\).
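As a check on this sum, the following sketch (my own, not Markov's) evaluates it with `math.comb` for the same illustrative parameter values used above; the result should agree with the simulation estimates.

```python
from math import comb

def prob_L_wins(l, m, p):
    """Sum over i = 0, 1, ..., m-1 of C(l+m-1, l+i) * p^(l+i) * q^(m-i-1)."""
    q = 1 - p
    n = l + m - 1
    return sum(comb(n, l + i) * p**(l + i) * q**(m - i - 1) for i in range(m))

print(prob_L_wins(3, 4, 0.55))  # compare with the Monte Carlo estimates above
```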
Then \[(L) = \frac{(l+m-1)\cdot\cdots\cdot m}{1\cdot2 \cdot\cdots\cdot l} p^l q^{m-1} \left\lbrace 1 + \frac{m-1}{l+1} \frac{p}{q} + \frac{m-1}{l+1}\frac{m-2}{l+2}\frac{p^2}{q^2} + \cdots \right\rbrace;\] also, we find \[(M) = \frac{(l+m-1)\cdot\cdots\cdot l}{1\cdot2 \cdot\cdots\cdot m} p^{l-1} q^{m} \left\lbrace 1 + \frac{l-1}{m+1}\frac{q}{p} + \frac{l-1}{m+1} \frac{l-2}{m+2} \frac{q^2}{p^2} + \cdots \right\rbrace.\]
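The bracketed series for \((L)\) and \((M)\) can likewise be evaluated term by term. The sketch below (again my own check, with the same illustrative parameters) builds each series by multiplying successive ratios and confirms that \((L)+(M)=1\) and that \((L)\) agrees with the binomial sum computed above.

```python
from math import comb

def L_series(l, m, p):
    """(L) = C(l+m-1, l) * p^l * q^(m-1) * {1 + (m-1)/(l+1)*(p/q) + ...}."""
    q = 1 - p
    total, term = 0.0, 1.0
    for i in range(m):  # the braces contain m terms
        total += term
        term *= (m - 1 - i) / (l + 1 + i) * (p / q)
    return comb(l + m - 1, l) * p**l * q**(m - 1) * total

def M_series(l, m, p):
    """(M) = C(l+m-1, m) * p^(l-1) * q^m * {1 + (l-1)/(m+1)*(q/p) + ...}."""
    q = 1 - p
    total, term = 0.0, 1.0
    for i in range(l):  # the braces contain l terms
        total += term
        term *= (l - 1 - i) / (m + 1 + i) * (q / p)
    return comb(l + m - 1, m) * p**(l - 1) * q**m * total

l, m, p = 3, 4, 0.55
print(L_series(l, m, p), M_series(l, m, p))
print(L_series(l, m, p) + M_series(l, m, p))  # should equal 1
```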
It is not hard to see that these new expressions for \((L)\) and \((M)\) are equal to those found previously.[16]
Continue to Markov's numerical example for Problem 4.
Skip to statement of Problem 8.
[16] Note that the second solution demonstrates a nice connection between the binomial and negative binomial distributions. This argument is essentially the following theorem: If \(Y\) has a binomial distribution with parameters \(n\) and \(p\), and \(T\) has a negative binomial distribution with parameters \(r\) and \(p\), where \(n\geq r\), then \(P(Y \geq r) = P(T \leq n)\). In other words, if it takes no more than \(n\) trials to get \(r\) successes (\(T \leq n\)), then there must have been at least \(r\) successes in the first \(n\) trials (\(Y \geq r\)).
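A quick numerical check of this theorem is sketched below (my own example, not part of the note). It uses SciPy, whose `nbinom` distribution counts the number of failures before the \(r\)th success, so the event \(T \leq n\) corresponds to at most \(n-r\) failures; the parameter values are illustrative.

```python
from scipy.stats import binom, nbinom

n, r, p = 6, 3, 0.55  # illustrative values with n >= r

# P(Y >= r): at least r successes in n Bernoulli(p) trials
p_binom_tail = binom.sf(r - 1, n, p)

# P(T <= n): at most n trials to reach r successes,
# i.e. at most n - r failures before the r-th success
p_nbinom_cdf = nbinom.cdf(n - r, r, p)

print(p_binom_tail, p_nbinom_cdf)  # the two probabilities coincide
```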