A very belated Happy Pi Day to all! Traditionally, this day is celebrated by finding clever and whimsical ways of approximating $\pi$, from using series such as
$$ \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \dots $$
to using out-of-control cars. One of my favourites employs randomness: take a unit square, and mark off a quarter-circle inside it with unit radius. Now, select a bunch of points uniformly at random from the square. As you keep sampling more and more points, the fraction of points inside the quarter-circle will approach $\pi / 4$.
The ratio of the number of red dots to the total number of dots converges to $\pi / 4$.
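This hit-counting scheme takes only a few lines to try out. Here is a minimal sketch in Python using only the standard library (the function name and the choice of sample size are mine, not part of any particular recipe):

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi by sampling n points uniformly from the unit square
    and counting the fraction that land inside the quarter-circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point lies inside the quarter-circle
            hits += 1
    return 4.0 * hits / n  # fraction of hits approximates pi/4

print(estimate_pi(200_000))  # roughly 3.14, improving as n grows
```

The convergence is slow (the error shrinks like $1/\sqrt{n}$), so don't expect many correct digits, but the trend toward $\pi$ is unmistakable.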
Indeed, $\pi / 4$ is precisely the area of the quarter-circle. Why does this work? This is all thanks to a very useful result in probability, known as the Weak Law of Large Numbers. When you generate the $i$th point $\mathbf{x}_i$, consider the indicator function $\mathbb{1}_S(\mathbf{x}_i)$, which gives $1$ when $\mathbf{x}_i$ lies in the quarter-circle $S$, i.e. we have a "hit", and $0$ otherwise. After $n$ points have been marked, the fraction of hits is just
$$ \frac{1}{n}\sum_{i = 1}^n \mathbb{1}_S(\mathbf{x}_i). $$
Now, each point $\mathbf{x}_i$ can be thought of as an instance of a random variable $X_i$, drawn uniformly from the unit square. Since the $X_i$ are independent and identically distributed, so are the random variables $\mathbb{1}_S(X_i)$, whence the Weak Law of Large Numbers tells us that
$$ \frac{1}{n}\sum_{i = 1}^n \mathbb{1}_S(X_i) \overset{p}{\longrightarrow} \mathrm{E}[\mathbb{1}_S(X_1)]. $$
The number on the right is just an integral,
$$ \iint_{(0, 1)\times (0, 1)} \mathbb{1}_S(\mathbf{x}) \:d\mathbf{x} = \iint_S \:d\mathbf{x}.$$
But this is simply the area of $S$, namely $\pi / 4$ (!)
The idea that this sample mean converges to the true mean — subject to a few technical conditions and under the right notion of convergence — is widely applicable. Curiously, if we had chosen some bounded continuous function $f$ in place of the indicator function $\mathbb{1}_S$, the same line of reasoning would suggest that
$$ \frac{1}{n} \sum_{i = 1}^n f(X_i) \overset{p}{\longrightarrow} \mathrm{E}[f(X_1)] = \iint_{(0, 1)\times (0, 1)} f(\mathbf{x}) \:d\mathbf{x}. $$
This method of approximating integrals by averaging the integrand over randomly sampled points turns out to be very useful, and is one of the simplest examples of Monte Carlo integration.
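The same sketch generalizes directly: replace the indicator with any (bounded) function $f$ and average. A minimal illustration, again in Python (the helper name is mine, for illustration only):

```python
import random

def mc_integrate(f, n, seed=0):
    """Approximate the integral of f over the unit square (0,1) x (0,1)
    by averaging f at n points sampled uniformly from the square."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.random(), rng.random())
    return total / n

# Example: the integral of x*y over the unit square is exactly 1/4.
print(mc_integrate(lambda x, y: x * y, 100_000))  # close to 0.25
```

Note that taking $f = \mathbb{1}_S$ recovers the hit-counting estimator for $\pi / 4$.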
Using similar concepts, can you tackle the following limit of integrals?
$$ \lim_{n \to \infty} \int_0^1 \cdots \int_0^1 \left(\frac{x_1 + \dots + x_n}{n}\right)^\pi \:dx_1 \dots \:dx_n. $$
As a hint, you may need to invoke results such as the Dominated Convergence Theorem and the Continuous Mapping Theorem to obtain a formal proof.
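Before reaching for a formal proof, it can help to probe the integral numerically. The $n$-dimensional integral is itself an expectation, so we can estimate it by Monte Carlo: sample $n$ uniforms, take their mean, raise it to the power $\pi$, and average over many trials. A sketch (function name and trial counts are my own choices; mild spoiler, since the sample mean of uniforms concentrates near $1/2$):

```python
import math
import random

def average_power_integral(n, trials, seed=0):
    """Monte Carlo estimate of the n-fold integral of
    ((x_1 + ... + x_n) / n) ** pi over the unit cube (0,1)^n."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n  # concentrates near 1/2
        total += mean ** math.pi
    return total / trials

# As n grows, the estimates settle toward a fixed value.
for n in (10, 100, 1000):
    print(n, average_power_integral(n, trials=2000))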