Asymptotic Normality and the Asymptotic Normal Distribution
Asymptotic Normality
Asymptotic normality is a property of an estimator: an estimator that is asymptotically normal will have an approximately normal distribution as the sample size gets infinitely large.
- Asymptotic refers to how the distribution of an estimator behaves as the sample size gets larger (i.e. tends to infinity).
- Normality refers to becoming similar to the normal distribution as the sample size tends to infinity.
Formal Definition of Asymptotic Normality
An estimator (e.g. the sample mean) has asymptotic normality if it converges to an unknown parameter at a "fast enough" rate, 1 / √(n) [1]. Formally, an estimator θ̂n of a parameter θ is asymptotically normal if:
√(n) (θ̂n – θ) →d N(0, σ²)
which means that as n → ∞, the distribution of √(n) (θ̂n – θ) converges in distribution to a normal distribution with mean 0 and variance σ².
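A quick simulation can illustrate the definition. This is an illustrative sketch, not from the source: the data distribution (exponential with mean 1), sample size, and repetition count are all arbitrary choices. We draw many samples, compute the sample mean as the estimator, and check that √(n)(θ̂n – θ) looks like N(0, σ²).

```python
import numpy as np

# Sketch (assumed setup, not from the article): the estimator is the sample
# mean of n draws from Exp(1), which has mean mu = 1 and sd sigma = 1.
rng = np.random.default_rng(42)
n, reps = 1_000, 20_000
mu, sigma = 1.0, 1.0

samples = rng.exponential(scale=mu, size=(reps, n))
theta_hat = samples.mean(axis=1)          # the estimator, one value per sample
scaled = np.sqrt(n) * (theta_hat - mu)    # sqrt(n) * (theta_hat - theta)

# If the estimator is asymptotically normal, `scaled` should have mean
# near 0 and standard deviation near sigma = 1.
print(round(scaled.mean(), 2))
print(round(scaled.std(), 2))
```

The exponential distribution is deliberately skewed, so the approximate normality of the scaled estimator is due to large n rather than the shape of the data.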
In statistics, we’re usually concerned with estimators. However, sequences and probability distributions can also show asymptotic normality.
For example, a sequence of random variables Tn, depending on the sample size n, is asymptotically normal if two sequences μn and σn exist such that, for every x:
limn→∞ P[(Tn – μn) / σn ≤ x] = Φ(x)
Where "lim" is the limit (from calculus) and Φ is the cumulative distribution function of the standard normal distribution [2].
Asymptotic Normality vs CLT
While asymptotic normality and the Central Limit Theorem (CLT) both involve distributions converging to a normal distribution, they are not strictly identical:
- CLT: A theorem stating that the sample mean (or sum) of i.i.d. random variables converges in distribution to a normal distribution as the sample size goes to infinity (under certain conditions, like finite variance).
- Asymptotic normality: A property of any estimator whose distribution converges (weakly) to a normal distribution as n → ∞. This includes, but is not limited to, sample means.
The property of asymptotic normality can be established with the CLT.
Asymptotic Normal Distribution
An asymptotic normal distribution is the limiting distribution of a sequence of estimators that have the property of asymptotic normality. In other words, an asymptotic normal distribution is one that converges to a normal distribution as the sample size tends to infinity.
We often study asymptotic normality because estimators (such as the sample mean or sample standard deviation) may not be accurate for small samples; they can be biased, meaning their expected value deviates from the true population parameter. As the sample size increases, however, the difference between the estimator and the parameter typically becomes small with high probability. When that difference converges to zero in probability, we call the estimator consistent. A consistent estimator's bias often shrinks to zero as the sample size grows, though the estimator may remain biased for any finite sample size.
Under these circumstances (a large sample and an estimator that converges to the true parameter) it's common to find that the estimator's distribution becomes approximately normal. This limiting distribution is what we call the asymptotic normal distribution.
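To see that asymptotic normality goes beyond sample means, the sketch below uses the sample median. This is an illustrative assumption, not from the source: for Uniform(0, 1) data, a standard result gives the sample median an asymptotic normal distribution with variance 1/(4n), so √(n)(median – 0.5) should have standard deviation near 0.5.

```python
import numpy as np

# Sketch: sample median of n draws from Uniform(0, 1). The true median
# is 0.5, and the asymptotic variance of the median is 1 / (4 * n).
rng = np.random.default_rng(1)
n, reps = 1_001, 20_000   # odd n so the median is a single order statistic

medians = np.median(rng.uniform(size=(reps, n)), axis=1)
scaled = np.sqrt(n) * (medians - 0.5)

# Standard deviation of the scaled estimator should be near sqrt(1/4) = 0.5.
print(round(scaled.std(), 2))
```

The median is a consistent estimator here, and its limiting distribution, N(0, 1/4) after scaling by √(n), is its asymptotic normal distribution.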
References
- Panchenko, D. (2006). Lecture 3 Properties of MLE: consistency, asymptotic normality. Fisher information. Retrieved May 26, 2020 from: https://ocw.mit.edu/courses/mathematics/18-443-statistics-for-applications-fall-2006/lecture-notes/lecture3.pdf
- Kolassa, J. (2014). Asymptotic Normality. DOI: https://doi.org/10.1007/978-3-642-04898-2_125
- Le Cam, L. (2000). Asymptotics in Statistics: Some Basic Concepts (Springer Series in Statistics), 2nd Edition. Springer.