A Limiting Distribution (also called an asymptotic distribution) is the distribution that a sequence of distributions converges to. Because it describes a limit, it is hypothetical: no finite sample actually follows it exactly. Asymptotic distribution theory attempts to find a limiting distribution for a sequence of distributions.
Some of these limits are well known. For example, the sampling distribution of the t-statistic converges to a standard normal distribution as the sample size grows. Sometimes the limiting probability distribution can be found by studying the behavior of CDFs or PDFs directly. Theorems like Slutsky's can also be used to establish convergence in distribution.
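The t-statistic example can be checked numerically. The sketch below (using SciPy, and an arbitrary evaluation point x = 1.5) shows the t distribution's CDF getting closer to the standard normal CDF as the degrees of freedom increase:

```python
# Sketch: the t distribution's CDF converges pointwise to the standard
# normal CDF as the degrees of freedom grow. The evaluation point x = 1.5
# is an arbitrary choice for illustration.
from scipy.stats import t, norm

x = 1.5
for df in (2, 10, 100, 1000):
    gap = abs(t.cdf(x, df) - norm.cdf(x))
    print(f"df={df:5d}  |F_t(x) - Phi(x)| = {gap:.6f}")
# The gap shrinks toward 0 as df increases.
```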
Why do we need to find hypothetical distributions?
In basic statistics, the process is to take a random sample of observations and fit that data to a known distribution like the normal distribution or t distribution. Fitting data to a model isn't an exact science: fitting data exactly to a known distribution is usually very difficult in real life due to limited sample sizes, so the fit is a "best guess" based on what you know (or what your software knows) about the behavior of large-sample statistics. The limiting (asymptotic) distribution can be used on small, finite samples to approximate the true distribution of a random variable: the one you would find if the sample size were large enough.
Limiting probability distributions are also important when it comes to finding appropriate sample sizes. When a sample size is large enough, a statistic's distribution will approach its limiting distribution (assuming such a distribution exists).
Limiting Distribution and the CLT
The Central Limit Theorem (CLT) uses the limit concept to describe the behavior of sample means.
The CLT tells us that the sampling distribution of the sample means approaches a normal distribution as the sample size increases, no matter what the shape of the population distribution. In other words, if you take more samples (especially large ones), the graph of the sample means will look more and more like a normal distribution, even if the population's graph is skewed or otherwise non-normal. The limiting distribution for a large set of sample means is the normal distribution.
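A small simulation makes this concrete. The sketch below (sample size, number of samples, and the exponential population are all arbitrary choices) draws many samples from a skewed exponential distribution and checks that the sample means behave as the CLT predicts:

```python
# Sketch: simulate the CLT. Draw many samples from a skewed (exponential)
# population; the sample means should look approximately normal with
# mean mu = 1 and standard deviation sigma/sqrt(n) = 1/sqrt(50).
import numpy as np

rng = np.random.default_rng(0)
n, num_samples = 50, 10_000
samples = rng.exponential(scale=1.0, size=(num_samples, n))
means = samples.mean(axis=1)

print(means.mean())  # close to 1.0
print(means.std())   # close to 1/sqrt(50), about 0.141
```

Plotting a histogram of `means` would show a roughly bell-shaped curve even though the exponential population itself is strongly skewed.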
Epps (2013) gives the term a more formal framework:
Suppose Xn is a random sequence with CDF Fn(x) and X is a random variable with CDF F(x).
If Fn converges to F as n → ∞ (at all points where F(x) is continuous), then Xn converges in distribution to X. The distribution of X is called the limiting distribution of Xn.
In simpler terms, the limiting probability distribution of Xn is the distribution that Xn (or some function of Xn) settles into as n grows.
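A simple illustration of the definition (my own example, not from Epps): let Mn be the maximum of n independent Uniform(0,1) variables, so Fn(x) = xⁿ on [0, 1]. Fn converges to the CDF of the constant 1 (F(x) = 0 for x < 1 and F(x) = 1 for x ≥ 1) at every point where F is continuous, i.e. everywhere except x = 1:

```python
# Sketch: Fn(x) = x**n is the CDF of the max of n Uniform(0,1) draws.
# At any fixed x < 1, Fn(x) -> 0 as n grows, matching the limiting
# (degenerate) distribution concentrated at 1.
for n in (10, 100, 1000):
    print(n, 0.99 ** n)  # Fn(0.99) shrinks toward 0
```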
Limiting Distribution and Markov Chains
One of the more common areas where the term Limiting Distribution shows up is in the study of Markov chains. As time n → ∞, a Markov chain has a limiting distribution π = (πj), where j ranges over the state space S. A value for π can be found by solving a system of linear equations.
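The linear-equations step can be sketched as follows. For a made-up two-state transition matrix P (my own example), the limiting distribution satisfies πP = π together with the normalization Σπj = 1:

```python
# Sketch: solve pi P = pi and sum(pi) = 1 for a hypothetical two-state
# Markov chain. P is a made-up transition matrix for illustration.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Stack the balance equations (P^T - I) pi = 0 with the normalization
# row, then solve the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # limiting distribution; here (5/6, 1/6)
```

Starting the chain from any initial distribution and multiplying by P repeatedly drives the state probabilities toward this same π.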
Epps, T. (2013). Probability and Statistical Theory for Applied Researchers.