Statistics How To

Ancillary Statistic: Simple Definition and Example

The term “ancillary statistic” is one of those terms that mean something slightly different depending on where you read about it.

Most (but not all) authors agree that an ancillary statistic is a distribution-constant statistic that can be combined with a maximum likelihood estimator to form a minimal sufficient statistic. In this sense, an ancillary statistic is the part of a sufficient statistic whose marginal distribution is free of the parameter.

Some authors describe an ancillary statistic simply as a statistic whose distribution doesn’t depend on the model parameters. In this context, an ancillary statistic is basically a summary of the data. Standing alone, it gives no information about the parameter θ. For example, an ancillary statistic could be an estimator for a random sample size, but it isn’t an estimator for any specific sample size (Kardaun, 2006).
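To make the “distribution doesn’t depend on the parameter” idea concrete, here is a small simulation sketch (a hypothetical example, not from the article): in the location family X_i ~ N(θ, 1), the sample range max(X) − min(X) is ancillary, because shifting θ shifts every observation equally and leaves the range untouched.

```python
# Hypothetical illustration: the sample range is ancillary for a
# normal location parameter theta.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 50_000

def range_samples(theta):
    """Sample ranges of n draws from N(theta, 1), repeated reps times."""
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    return x.max(axis=1) - x.min(axis=1)

r0 = range_samples(theta=0.0)
r5 = range_samples(theta=5.0)

# The empirical distributions agree up to Monte Carlo noise,
# even though theta moved from 0 to 5.
mean_gap = abs(r0.mean() - r5.mean())
std_gap = abs(r0.std() - r5.std())
```

Knowing the range tells you nothing about where the sample is centered, which is exactly why it carries no information about θ on its own.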

As a Complement to Sufficient Statistics

Given the above definitions, it is easy to see why ancillaries are sometimes referred to as the complement of a sufficient statistic. While a sufficient statistic contains all of the information about the parameter, an ancillary contains none of it. Basu’s Theorem (as cited in Ghosh, 2011) summarizes this idea:

If U is a complete, sufficient statistic for a parameter θ, and V is an ancillary statistic, then U and V are independent.
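The theorem can be checked numerically with a sketch (hypothetical, not from the article): for X_i ~ N(θ, 1) with known variance, the sample mean is a complete sufficient statistic for θ and the sample variance is ancillary, so Basu’s theorem says they are independent.

```python
# Hypothetical simulation of Basu's theorem for the N(theta, 1) family.
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 100_000

x = rng.normal(loc=2.0, scale=1.0, size=(reps, n))
xbar = x.mean(axis=1)          # complete sufficient statistic for theta
s2 = x.var(axis=1, ddof=1)     # ancillary: its distribution is free of theta

# Independence implies zero correlation, so the sample correlation
# should sit near 0 up to Monte Carlo noise.
corr = np.corrcoef(xbar, s2)[0, 1]
```

Zero correlation is only a necessary consequence of independence, but for this jointly-derived pair the full independence result also holds exactly in theory.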

Ancillary statistics are used in compound distributions and in conditional inference.

Specific Types of Ancillary Statistic

  • First order ancillary: a statistic is first order ancillary if its expected value does not depend on the parameter (even though its full distribution might).
  • Trivial ancillary: Defined as the constant statistic V(X) ≡ c ∈ ℝ (Shao, 2008).
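The first order case can be separated from full ancillarity with a sketch (a hypothetical example, not from the article): for X_i ~ N(0, σ²) with unknown σ, the sample mean has expected value 0 whatever σ is, so it is first order ancillary for σ, yet its variance σ²/n does depend on σ, so it is not ancillary in the strict sense.

```python
# Hypothetical illustration: first order ancillary but not ancillary.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 10, 100_000

def xbar_samples(sigma):
    """Sample means of n draws from N(0, sigma^2), repeated reps times."""
    return rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)

a = xbar_samples(sigma=1.0)
b = xbar_samples(sigma=3.0)

# First order ancillary: the expected value (~0) is the same for both sigmas...
mean_gap = abs(a.mean() - b.mean())
# ...but the full distributions differ, since Var(xbar) = sigma^2 / n,
# so the sample mean is NOT ancillary for sigma.
var_ratio = b.var() / a.var()   # close to (3/1)^2 = 9
```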

Advantages

If an ancillary contains no information about the population parameters, why use it at all? Ning-Zhing & Jian (2008) give two reasons why you might work with an ancillary rather than a sufficient statistic:

  1. Invariance to the parameter. Invariant statistics are unaffected by transformations such as simple shifts of the data.
  2. Independence from sufficient statistics (as guaranteed by Basu’s theorem).

References

Ghosh, M. (2011). Basu’s Theorem. In: DasGupta, A. (Ed.), Selected Works of Debabrata Basu. Selected Works in Probability and Statistics. Springer, New York, NY.
Kardaun, O. (2006). Classical Methods of Statistics: With Applications in Fusion-Oriented Plasma Physics. Springer Science & Business Media.
Ning-Zhing, S. & Jian, T. (2008). Statistical Hypothesis Testing: Theory and Methods. World Scientific.
Shao, J. (2008). Mathematical Statistics. Springer Science & Business Media.

Ancillary Statistic: Simple Definition and Example was last modified: October 31st, 2017 by Stephanie