Convergence in Mean — Definition, Formula & Examples
Convergence in mean is a way of saying that a sequence of functions gets closer to a limiting function when you measure the distance using an integral of the absolute value of their difference raised to a power. Specifically, the integral of that powered difference shrinks to zero as the sequence progresses.
A sequence of functions $(f_n)$ converges in the $p$-th mean (or in $L^p$) to a function $f$ on a measure space $(X, \mu)$ if $\lim_{n \to \infty} \|f_n - f\|_p = 0$, where $\|f_n - f\|_p = \left( \int_X |f_n - f|^p \, d\mu \right)^{1/p}$. When $p = 2$, this is called convergence in mean square; when $p = 1$, it is simply called convergence in mean.
Key Formula

$$\lim_{n \to \infty} \int_X |f_n(x) - f(x)|^p \, d\mu = 0$$

Where:
- $f_n(x)$ = The $n$-th function in the sequence
- $f(x)$ = The limiting function
- $p$ = A real number $\geq 1$ specifying which $L^p$ space is used
- $\mu$ = The measure on the space $X$ (e.g., Lebesgue measure)
- $X$ = The domain of integration
How It Works
To check convergence in mean, you compute the $L^p$ norm of the difference and verify that it tends to zero. In practice, you evaluate $\int_X |f_n - f|^p \, d\mu$ for each $n$ and take the limit as $n \to \infty$. On a probability space, this type of convergence is stronger than convergence in probability but weaker than uniform convergence. It is widely used in Fourier analysis, where the partial sums of the Fourier series of a square-integrable function converge in mean square to the function, even when pointwise convergence fails.
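To make this recipe concrete, here is a minimal numerical sketch, assuming Python with NumPy; the helper `lp_distance` and the sample sequence are illustrative choices, not standard library routines. It approximates the $L^p$ distance with a Riemann sum:

```python
import numpy as np

def lp_distance(f_n, f, p=1, a=0.0, b=1.0, num_points=100_000):
    """Approximate ||f_n - f||_p = (integral of |f_n - f|^p over [a, b])^(1/p)
    with a left-endpoint Riemann sum. Illustrative helper, not a library call."""
    x = np.linspace(a, b, num_points, endpoint=False)
    dx = (b - a) / num_points
    return (np.sum(np.abs(f_n(x) - f(x)) ** p) * dx) ** (1.0 / p)

# Example: f_n(x) = x + sin(n*pi*x)/n converges to f(x) = x in L^1 on [0, 1].
f = lambda x: x
for n in [1, 10, 100, 1000]:
    f_n = lambda x, n=n: x + np.sin(n * np.pi * x) / n
    print(f"n = {n:4d}   ||f_n - f||_1 ~ {lp_distance(f_n, f, p=1):.6f}")
```

The printed distances shrink roughly like $\frac{2}{\pi n}$, which matches the exact value $\int_0^1 |\sin(n\pi x)|/n \, dx$.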
Worked Example
Problem: Let $f_n(x) = x^n$ on the interval $[0, 1]$ with Lebesgue measure. Determine whether $(f_n)$ converges in mean (i.e., in $L^1$) to $f(x) = 0$.
Step 1: Write the $L^1$ norm of the difference: $\|f_n - f\|_1 = \int_0^1 |x^n - 0| \, dx = \int_0^1 x^n \, dx$.
Step 2: Evaluate the integral: $\int_0^1 x^n \, dx = \left[ \frac{x^{n+1}}{n+1} \right]_0^1 = \frac{1}{n+1}$.
Step 3: Take the limit as $n \to \infty$: $\lim_{n \to \infty} \frac{1}{n+1} = 0$.
Answer: Since the integral tends to $0$, the sequence converges in mean ($L^1$) to $f(x) = 0$ on $[0, 1]$.
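As a quick sanity check, a short numerical sketch (the same NumPy Riemann-sum approach as above; grid size is an arbitrary choice) reproduces the exact value $\frac{1}{n+1}$:

```python
import numpy as np

# Riemann-sum check that ||x^n - 0||_1 = integral of x^n over [0, 1] = 1/(n+1) -> 0.
x = np.linspace(0.0, 1.0, 200_000, endpoint=False)
for n in [1, 10, 100, 1000]:
    approx = np.sum(x ** n) / x.size   # dx = 1/x.size on [0, 1]
    print(f"n = {n:4d}   ||f_n||_1 ~ {approx:.6f}   exact 1/(n+1) = {1/(n+1):.6f}")
```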
Another Example
Problem: Let $f_n(x) = n \cdot \mathbf{1}_{(0, 1/n)}(x)$ on $[0, 1]$. Does $(f_n)$ converge in mean square ($L^2$) to $f(x) = 0$?
Step 1: Compute the norm squared of the difference: $\|f_n - f\|_2^2 = \int_0^1 |f_n(x)|^2 \, dx = \int_0^{1/n} n^2 \, dx = n$.
Step 2: Check the limit: $\lim_{n \to \infty} n = \infty \neq 0$.
Answer: The norm does not tend to $0$, so $(f_n)$ does not converge in mean square to $f(x) = 0$, even though $f_n(x) \to 0$ pointwise for every $x \in [0, 1]$.
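A numerical sketch of this spike sequence (NumPy again; the grid size and sample point are arbitrary choices) makes the failure visible: the value at any fixed point dies out while the squared norm grows:

```python
import numpy as np

# f_n = n on (0, 1/n) and 0 elsewhere: pointwise f_n(x) -> 0 for every x,
# but ||f_n||_2^2 = integral of n^2 over (0, 1/n) = n, which blows up.
x = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
for n in [10, 100, 1000]:
    f_n = np.where((x > 0) & (x < 1.0 / n), float(n), 0.0)
    norm_sq = np.sum(f_n ** 2) / x.size   # approximates the integral of f_n^2
    print(f"n = {n:5d}   ||f_n||_2^2 ~ {norm_sq:.1f}   f_n(0.5) = {f_n[x.size // 2]:.0f}")
```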
Why It Matters
Convergence in mean is central to courses in real analysis, functional analysis, and probability theory. In signal processing, Fourier series of square-integrable signals converge in mean square, guaranteeing that truncated approximations capture nearly all of the signal's energy. Probability theory relies on $L^p$ convergence to establish properties of estimators and to prove limit theorems.
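As a rough illustration of the Fourier claim, here is a hedged NumPy sketch using a square wave with the standard series $\frac{4}{\pi} \sum_{k \text{ odd}} \frac{\sin kx}{k}$ (the truncation points are arbitrary choices). The $L^2$ error of the partial sums decays, while the worst-case pointwise error near the jump does not:

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 200_000, endpoint=False)
square = np.where(x < np.pi, 1.0, -1.0)   # square wave of period 2*pi

def partial_sum(N):
    """Fourier partial sum (4/pi) * sum over odd k <= N of sin(k x)/k."""
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):
        s += 4.0 / (np.pi * k) * np.sin(k * x)
    return s

for N in [1, 5, 25, 125]:
    err = partial_sum(N) - square
    l2 = np.sqrt(np.sum(err ** 2) * (2 * np.pi / x.size))   # L^2 error
    sup = np.max(np.abs(err))                               # worst pointwise error
    print(f"N = {N:3d}   L2 error ~ {l2:.4f}   sup error ~ {sup:.4f}")
```

The $L^2$ column shrinks toward zero, while the sup column stalls because of the jump discontinuity (the Gibbs phenomenon); this is exactly why mean-square convergence is the natural notion here.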
Common Mistakes
Mistake: Assuming pointwise convergence guarantees convergence in mean.
Correction: Pointwise convergence alone does not ensure that the integral of $|f_n - f|^p$ goes to zero. You must verify the integral condition directly, or use a theorem such as the Dominated Convergence Theorem.
Mistake: Confusing convergence in $L^1$ with convergence in $L^2$ (mean square).
Correction: These are different conditions. On a finite measure space, $L^2$ convergence implies $L^1$ convergence (by the Cauchy–Schwarz inequality), but the reverse is false. Always specify the value of $p$.
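For instance, here is a minimal NumPy sketch using the scaled spike $f_n = \sqrt{n}\,\mathbf{1}_{(0, 1/n)}$, one standard counterexample: the sequence converges to $0$ in $L^1$ while its $L^2$ norm stays fixed:

```python
import numpy as np

# f_n = sqrt(n) on (0, 1/n): ||f_n||_1 = 1/sqrt(n) -> 0, yet ||f_n||_2^2 = 1
# for every n, so f_n -> 0 in L^1 but not in L^2.
x = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
for n in [10, 100, 1000]:
    f_n = np.where((x > 0) & (x < 1.0 / n), np.sqrt(n), 0.0)
    l1 = np.sum(np.abs(f_n)) / x.size      # approximates ||f_n||_1
    l2_sq = np.sum(f_n ** 2) / x.size      # approximates ||f_n||_2^2
    print(f"n = {n:5d}   ||f_n||_1 ~ {l1:.4f}   ||f_n||_2^2 ~ {l2_sq:.4f}")
```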
