Statistics How To

Sufficient Statistic: Simple Definition, Example


What is a Sufficient Statistic?

A sufficient statistic is a statistic that summarizes all of the information in a sample about a chosen parameter. For example, the sample mean, x̄, estimates the population mean, μ. x̄ is a sufficient statistic if it retains all of the information about the population mean that was contained in the original data points.

According to statistician Ronald Fisher,

“…no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter.”

In layman’s terms, a sufficient statistic is your best bet for summarizing your data; you can use it even if you don’t know any of the actual values in the sample.

Sufficient Statistic Example

You can think of a sufficient statistic as an estimator that allows you to estimate the population parameter as well as if you knew all of the data in the sample.

For example, let’s say you have the simple data set 1, 2, 3, 4, 5. You would calculate the sample mean as (1 + 2 + 3 + 4 + 5) / 5 = 3, which gives you an estimate of the population mean of 3. Now assume you don’t know those values (1, 2, 3, 4, 5) and only know that the sample mean is 3. You would still estimate the population mean as 3, which is just as good as knowing the whole data set. The sample mean of 3 is a sufficient statistic. To put this another way: if you have the sample mean, then knowing all of the individual data items makes no difference to how good your estimate is; it’s already “the best.”
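The example above can be sketched in a few lines of Python: estimating the population mean from the full data set and from the sample mean alone give the same answer, which is the intuition behind sufficiency.

```python
# A minimal sketch of the example above: the estimate from the raw
# data and the estimate from the summary (the sample mean) coincide.
data = [1, 2, 3, 4, 5]

# Estimate using every data point.
estimate_from_data = sum(data) / len(data)

# Estimate using only the reported sample mean, as if the raw
# values had been discarded.
sample_mean = 3.0
estimate_from_summary = sample_mean

print(estimate_from_data)     # 3.0
print(estimate_from_summary)  # 3.0
```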

Order statistics for iid samples are also sufficient statistics. This does not hold for data that isn’t iid, because only for iid samples can you re-order the data without losing information.
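A quick numerical sketch of why order doesn’t matter for iid data: the joint density of an iid sample is a product of identical marginal densities, so permuting the observations leaves the likelihood unchanged. The code below checks this for a standard normal sample (the choice of distribution and sample values is just for illustration).

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a Normal(mu, sigma) distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(sample, mu=0.0):
    """Joint density of an iid normal sample: the product of the marginals."""
    prod = 1.0
    for x in sample:
        prod *= normal_pdf(x, mu)
    return prod

sample = [0.3, -1.2, 0.7, 2.1]
shuffled = sample[:]
random.shuffle(shuffled)

# The likelihood depends only on which values occurred, not their order,
# so sorting the sample (taking the order statistics) loses no information.
print(math.isclose(likelihood(sample), likelihood(shuffled)))  # True
```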

Formal Definition of Sufficient Statistics

More formally, a statistic T(X1, X2,…, Xn) is said to be sufficient for a parameter θ if the conditional distribution of the sample (X1, X2,…, Xn), given the value of T, does not depend on θ. While this definition is fairly simple to state, actually finding the conditional distribution is the tough part; in fact, most statisticians consider it extremely difficult. One slightly easier way to establish sufficiency is to use the Factorization Theorem.

Factorization Theorem

Suppose that you have a random sample X = (X1,…, Xn) with joint pdf f(x|θ). A statistic T(X) is sufficient for θ if the joint pdf can be factored into two functions, g(t|θ) and h(x):

f(x|θ) = g(T(X)|θ)h(x)

Where:

  • θ is the unknown parameter belonging to the parameter space Q,
  • and the pdf exists for all values of x and all θ ∈ Q.
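As a concrete illustration of the theorem, take an iid Bernoulli(θ) sample. The joint pmf is θ^T (1 − θ)^(n − T), where T(X) = ΣXi, so it factors with g(t|θ) = θ^t (1 − θ)^(n − t) and h(x) = 1, making the sum sufficient. The sketch below verifies numerically that two different samples with the same value of T have identical likelihoods at every θ.

```python
import math

def bernoulli_likelihood(sample, theta):
    """Joint pmf of an iid Bernoulli(theta) sample of 0s and 1s."""
    prod = 1.0
    for x in sample:
        prod *= theta ** x * (1 - theta) ** (1 - x)
    return prod

# Two different samples with the same value of T(X) = sum of the Xi.
a = [1, 0, 1, 1, 0]
b = [0, 1, 1, 0, 1]
assert sum(a) == sum(b) == 3

# f(x|theta) = theta^T (1-theta)^(n-T) * 1, so the likelihood depends
# on the data only through T: both samples give identical values.
for theta in [0.1, 0.5, 0.9]:
    print(math.isclose(bernoulli_likelihood(a, theta),
                       bernoulli_likelihood(b, theta)))  # True each time
```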

Complement of Sufficiency

An ancillary statistic is the complement of a sufficient statistic: while a sufficient statistic contains all of the information in the sample about a parameter, an ancillary statistic contains no information about it.
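A standard example of an ancillary statistic is the sample range for a location parameter (e.g. the mean μ of a Normal(μ, 1) distribution). Shifting every observation by the same amount changes μ’s estimate but leaves the range untouched, so the range carries no information about μ. A minimal sketch (the data values are made up for illustration):

```python
def sample_range(sample):
    """The sample range: maximum minus minimum."""
    return max(sample) - min(sample)

data = [4.0, 5.5, 3.5, 6.0]

# Shifting every observation (i.e. changing the location parameter)
# leaves the range unchanged, so the range is ancillary for location.
shifted = [x + 10.0 for x in data]
print(sample_range(data))               # 2.5
print(sample_range(shifted))            # 2.5
print(sample_range(data) == sample_range(shifted))  # True
```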

Reference:
Fisher, R.A. (1922). “On the mathematical foundations of theoretical statistics”. Philosophical Transactions of the Royal Society A 222: 309–368.
Ross, S. (2010). Introduction to Probability Models. 11th Edition. Elsevier.

Sufficient Statistic: Simple Definition, Example was last modified: October 12th, 2017 by Stephanie Glen