Ancillary Statistic: Simple Definition and Example


The term “ancillary statistic” is one of those terms that mean something slightly different depending on where you read about it.

Most (but not all) authors agree that an ancillary statistic is a distribution-constant statistic that can be combined with a maximum likelihood estimator to create a minimal sufficient statistic. The ancillary statistic in this sense is defined as the part of a sufficient statistic whose marginal distribution is parameter-free.

Some authors describe an ancillary statistic simply as a statistic whose distribution doesn’t depend on the model parameters. In this context, an ancillary statistic is basically a summary of the data: standing alone, it doesn’t give any information about the parameter θ. For example, an ancillary statistic can act as an estimator for a random sample size, but it isn’t an estimator for any specific sample size (Kardaun, 2006).
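To make the “parameter-free distribution” idea concrete, here is a minimal simulation sketch (my own example, not taken from the sources above). It assumes a normal model with unknown mean θ and known variance 1, where the sample range is a standard example of an ancillary statistic: its simulated distribution looks the same no matter which θ generated the data.

    # Minimal sketch (assumed example): the sample range R = max(X) - min(X)
    # from a N(theta, 1) sample is ancillary for theta, so its distribution
    # should not change as theta changes.
    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 10, 100_000

    def range_draws(theta):
        """Simulate the sample range for `reps` samples of size n from N(theta, 1)."""
        x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
        return x.max(axis=1) - x.min(axis=1)

    for theta in (0.0, 5.0, -3.0):
        r = range_draws(theta)
        print(f"theta={theta:+.1f}  mean range={r.mean():.3f}  sd={r.std():.3f}")
    # The printed summaries are essentially identical across theta values,
    # which is what a parameter-free distribution means in practice.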

As a Complement to Sufficient Statistics

Given the above definitions, it should be easy to see why ancillaries are sometimes referred to as the complement of a sufficient statistic. While a sufficient statistic contains all of the information about the parameter, an ancillary contains no information about the model parameter. Basu’s Theorem (as cited in Ghosh, 2011) summarizes this idea:

If U is a complete and sufficient statistic for a parameter θ, and if V is an ancillary statistic for θ, then U and V are independent.
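The theorem can be checked informally by simulation. The sketch below is an illustration of my own (not from Ghosh or Basu): for a sample from N(θ, 1), the sample mean is complete and sufficient for θ and the sample variance is ancillary, so by Basu’s Theorem the two are independent and their sample correlation should be close to zero.

    # Rough illustration (assumed example) of Basu's Theorem by simulation.
    import numpy as np

    rng = np.random.default_rng(1)
    n, reps, theta = 10, 200_000, 2.5

    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    xbar = x.mean(axis=1)        # U: complete, sufficient statistic for theta
    s2 = x.var(axis=1, ddof=1)   # V: ancillary statistic (variance known to be 1)

    # Independence implies zero correlation; the estimate should be near 0.
    print("sample corr(U, V):", np.corrcoef(xbar, s2)[0, 1])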

Ancillary statistics are used with compound distributions and in conditional inference, where inference about the parameter is drawn conditional on the observed value of the ancillary.

Specific Types of Ancillary Statistic

  • First order ancillary: a statistic is first order ancillary if its expected value does not depend on the population parameter, even though its full distribution might (see the formal statement after this list).
  • Trivial ancillary: Defined as the constant statistic V(X) ≡ c ∈ ℝ (Shao, 2008).
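Written out formally (notation mine, following the usual textbook convention rather than quoting Shao directly), the two definitions are:

    % First order ancillary: the expected value of V(X) is the same constant
    % under every value of theta, even though the full distribution of V(X)
    % may still depend on theta. Trivial ancillary: V(X) is a constant.
    \[
      \text{first order: } \operatorname{E}_{\theta}[V(X)] = c \ \text{ for all } \theta \in \Theta,
      \qquad
      \text{trivial: } V(X) \equiv c \in \mathbb{R}.
    \]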

Advantages

If an ancillary contains no information about population parameters, why use it at all? Ning-Zhong & Jian (2008) give two reasons why ancillary statistics are useful:

  1. Invariance to the parameter. Invariant statistics are unchanged by certain transformations of the data, such as a simple location shift (see the sketch after this list).
  2. Independence from sufficient statistics (which follows from Basu’s Theorem when the sufficient statistic is complete).
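As a quick illustration of the invariance point in item 1, the short sketch below (my own example, not taken from Ning-Zhong & Jian) shifts every observation by the same constant and shows that the sample range, a location-invariant (and hence location-ancillary) statistic, does not change.

    # Assumed example: the sample range is unchanged by a location shift.
    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(loc=3.0, scale=1.0, size=20)
    shift = 7.5

    def sample_range(data):
        """Range of the data: the shift cancels in max - min."""
        return data.max() - data.min()

    print(sample_range(x))          # range of the original data
    print(sample_range(x + shift))  # identical value after shifting every point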

References

Ghosh, M. (2011). Basu’s Theorem. In: DasGupta, A. (Ed.), Selected Works of Debabrata Basu. Selected Works in Probability and Statistics. Springer, New York, NY.
Kardaun (2006). Classical Methods of Statistics: With Applications in Fusion-Oriented Plasma Physics. Springer Science & Business Media.
Ning-Zhong, S. & Jian, T. (2008). Statistical Hypothesis Testing: Theory and Methods. World Scientific.
Shao, J. (2008). Mathematical Statistics. Springer Science & Business Media.

