

A Selection of Problems from A.A. Markov’s Calculus of Probabilities: Problem 8 – Solution, Part 2

Author(s): 
Alan Levine (Franklin and Marshall College)

 

Review statement of Problem 8.

 

For large values of n, the exact calculation of Pα requires tiring computations and can hardly represent great interest.

Then the question arises of finding an approximate expression for the probability, one that is both simple and close to the exact value.

Assuming only n is large, but m is not, and considering not the probability of the actual value of the sum \(X_1+X_2+\cdots+X_n\) but the probability that this sum lies within given limits, we can return to the general approximate calculations in Chapter III.

To apply these, we should find the mathematical expectation of the first and second powers of the considered variables.

Since the mathematical expectation of any of these variables \(X_1, X_2, \ldots, X_n\) is equal to \[\frac{1+2+\cdots+m}{m}=\frac{m+1}{2},\] and the mathematical expectation of their squares is equal to \[\frac{1^2+2^2+\cdots+m^2}{m}=\frac{(m+1)(2m+1)}{6},\] then the difference between the mathematical expectations of their squares and the square of their mathematical expectations[19] reduces to \[\frac{(m+1)(2m+1)}{6}-\left(\frac{m+1}{2}\right)^2=\frac{m^2-1}{12},\] and therefore, the results of the third chapter give, for the probability of the inequalities \[n\,\frac{m+1}{2}-\tau\sqrt{\frac{n(m^2-1)}{6}} < X_1+X_2+\cdots+X_n < n\,\frac{m+1}{2}+\tau\sqrt{\frac{n(m^2-1)}{6}},\] an approximate expression in the form of the known integral[20] \[\frac{2}{\sqrt{\pi}}\int_0^\tau e^{-z^2}\,dz.\]
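Markov's moment formulas can be checked with exact rational arithmetic; a minimal sketch (the concrete value m = 6 is an assumption for illustration, not in the original), noting that his integral \(\frac{2}{\sqrt{\pi}}\int_0^\tau e^{-z^2}\,dz\) is exactly the error function \(\operatorname{erf}(\tau)\):

```python
from fractions import Fraction
from math import erf

# Check Markov's formulas for a uniform variable on 1..m (assumed m = 6).
m = 6
mean = Fraction(sum(range(1, m + 1)), m)                    # (m+1)/2
second = Fraction(sum(k * k for k in range(1, m + 1)), m)   # (m+1)(2m+1)/6
variance = second - mean ** 2                               # (m^2-1)/12

assert mean == Fraction(m + 1, 2)
assert second == Fraction((m + 1) * (2 * m + 1), 6)
assert variance == Fraction(m * m - 1, 12)

# Markov's approximate probability (2/sqrt(pi)) * int_0^tau e^{-z^2} dz
# is the error function erf(tau).
print(erf(1.0))
```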

We will take advantage of this particular example to show other derivations of this approximate expression for the probability, which can be applied in the general case.

And before anything, we note that, in the expansion of any entire function \(F(t)\) in powers of \(t\), the coefficient of \(t^\alpha\) can be represented in the form of the integral \[\frac{1}{2\pi}\int_{-\pi}^{\pi} F\left(e^{\phi\sqrt{-1}}\right)e^{-\alpha\phi\sqrt{-1}}\,d\phi,\] because \(\int_{-\pi}^{\pi} d\phi = 2\pi\), and, for any integer \(k\) different from 0, we have \(\int_{-\pi}^{\pi} e^{k\phi\sqrt{-1}}\,d\phi = 0\).[21]
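The coefficient-extraction formula is easy to verify numerically; a sketch with an assumed example polynomial \(F(t)=(1+2t+3t^2)^2 = 1+4t+10t^2+12t^3+9t^4\) (not from the original):

```python
import cmath
import math

# F(t) = (1 + 2t + 3t^2)^2, an assumed example entire function.
def F(t):
    return (1 + 2 * t + 3 * t * t) ** 2

def coeff(alpha, steps=4000):
    # Coefficient of t^alpha via Markov's formula:
    # (1/2pi) * integral_{-pi}^{pi} F(e^{i phi}) e^{-i alpha phi} dphi,
    # approximated by the midpoint rule (exact here, since the
    # integrand is a trigonometric polynomial of low degree).
    h = 2 * math.pi / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        phi = -math.pi + (k + 0.5) * h
        total += F(cmath.exp(1j * phi)) * cmath.exp(-1j * alpha * phi) * h
    return (total / (2 * math.pi)).real

print(round(coeff(2)))  # coefficient of t^2 in F(t), i.e. 10
```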

Therefore,[22] \[\begin{aligned} P_\alpha &= \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{(n-\alpha)\phi\sqrt{-1}}\,\frac{\left(1-e^{m\phi\sqrt{-1}}\right)^n}{m^n\left(1-e^{\phi\sqrt{-1}}\right)^n}\,d\phi \\ &= \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{\left(n\frac{m+1}{2}-\alpha\right)\phi\sqrt{-1}}\,\frac{\left(e^{\frac{m}{2}\phi\sqrt{-1}}-e^{-\frac{m}{2}\phi\sqrt{-1}}\right)^n}{m^n\left(e^{\frac{1}{2}\phi\sqrt{-1}}-e^{-\frac{1}{2}\phi\sqrt{-1}}\right)^n}\,d\phi \\ &= \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{\left(n\frac{m+1}{2}-\alpha\right)\phi\sqrt{-1}}\left(\frac{\sin\frac{m}{2}\phi}{m\sin\frac{\phi}{2}}\right)^n d\phi \\ &= \frac{1}{\pi}\int_0^{\pi}\cos\left(n\frac{m+1}{2}-\alpha\right)\phi\,\left(\frac{\sin\frac{m}{2}\phi}{m\sin\frac{\phi}{2}}\right)^n d\phi. \end{aligned}\]
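Markov's final integral can be checked against the exact distribution of the sum; a sketch assuming small illustrative values n = 4, m = 3 (not in the original):

```python
import math

n, m = 4, 3  # assumed small example: 4 draws, each uniform on {1, 2, 3}

# Exact distribution of X1 + ... + Xn by repeated convolution.
dist = {0: 1.0}
for _ in range(n):
    new = {}
    for s, p in dist.items():
        for face in range(1, m + 1):
            new[s + face] = new.get(s + face, 0.0) + p / m
    dist = new

def markov_integral(alpha, steps=20000):
    # (1/pi) * integral_0^pi cos((n(m+1)/2 - alpha) phi)
    #          * (sin(m phi / 2) / (m sin(phi / 2)))^n dphi,
    # by the midpoint rule (which avoids the removable point phi = 0).
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        phi = (k + 0.5) * h
        kernel = math.sin(m * phi / 2) / (m * math.sin(phi / 2))
        total += math.cos((n * (m + 1) / 2 - alpha) * phi) * kernel ** n * h
    return total / math.pi

for alpha in range(n, n * m + 1):
    assert abs(dist[alpha] - markov_integral(alpha)) < 1e-9
print("integral matches exact distribution")
```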

 

Continue to the third part of Markov's solution of Problem 8.

Skip to Markov's analysis of repeated independent events.

 


[19] In other words, the variance.

[20] Markov never defines the normal distribution, as we know it today. When he talks about continuous random variables for the first time in Chapter V, his only examples are those which have a constant density function (i.e., what we would now call the “uniform” distribution) and one whose density is defined for all \(x\) and decreases as \(|x|\) increases. He then claims that the density is proportional to \(e^{-x^2}\), although there are many other such possibilities. This gives us a “normal distribution” with mean 0 and variance \(\frac{1}{2}\). So, his approximation can be written as \[P\left(\left|\frac{S_n-\mu}{\sqrt{2}\,\sigma}\right|<\tau\right)=\frac{2}{\sqrt{\pi}}\int_0^\tau e^{-z^2}\,dz.\]

[21] This is a Fourier transform. Markov chose not to use the symbol \(i\) to represent \(\sqrt{-1}\), although that notation had been around for at least a century.

[22] He is using the facts that \(e^{\frac{ix}{2}}-e^{-\frac{ix}{2}}=2i\sin\left(\frac{x}{2}\right)\) and \(\cos(x)=\mathrm{Re}\left(e^{ix}\right)\).

 

Alan Levine (Franklin and Marshall College), "A Selection of Problems from A.A. Markov’s Calculus of Probabilities: Problem 8 – Solution, Part 2," Convergence (November 2023)

A Selection of Problems from A.A. Markov’s Calculus of Probabilities