Fiducial Inference / Fiducial Distribution
What is fiducial inference?
Fiducial inference [1] produces a fiducial distribution for parameters without requiring a prior probability distribution. The technique was acrimoniously debated in the statistical community for many years, practically disappeared after Fisher's death, and left lasting confusion about what the mysterious fiducial argument actually is. It is perhaps best known as Fisher's great failure [2].
“[Fiducial inference] is an attempt to make the Bayesian omelette without breaking the Bayesian eggs.” ~ Savage [4].
Fisher developed fiducial inference as an objective, inferential alternative to "subjective" Bayesian reasoning. The first iteration of the method was barely distinguishable from Neyman's (1935) unconditional confidence interval approach. Over the next two decades, Fisher worked to clarify his reasoning, but his shifts in thinking led to more confusion. Ultimately, Fisher's new intuitions about the method turned out to be fundamentally incorrect: in 1963, Buehler & Feddersen showed that one of his crucial arguments was false.
Problems with fiducial inference
Fiducial inference isn't widely used, due to a number of conceptual and operational difficulties. For example, "…its unrestricted use produces contradictions" (Sprott, 2000, p. 77). Another problem is that discrete data and continuous parameters play interchanging roles, which means the method can't be used at all for discrete data.

A fiducial distribution in statistics is, loosely speaking, another name for a confidence distribution. The term originated with Fisher [1], who gave a general method for computing real-valued confidence limits before the formal concept of a confidence interval existed [2].
Fisher’s fiducial distribution
Fisher's idea for the fiducial distribution is to assume that F(x, θ) is a parametric cumulative distribution function. In addition, assume a "pivotal variable" μ follows a uniform distribution U(0, 1), so that

μ = F(x, θ).

If, for each value of x, F(x, θ) is monotonic in θ, this equation has a unique solution

θ = θ(μ, x)

for each μ ∈ (0, 1).
Fisher defined the fiducial distribution of θ (assuming no prior probabilities) as the distribution of θ implied by θ = θ(μ, x) when x is fixed and μ is uniformly distributed.
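To make the construction concrete, here is a minimal sketch in Python (using NumPy) for a single observation x from an exponential model with rate θ, where F(x, θ) = 1 − e^(−θx) is monotonic in θ for fixed x > 0. The model choice, the observed value, and the variable names are illustrative assumptions, not Fisher's own worked example.

```python
# A minimal sketch of Fisher's fiducial construction for one observation x
# from an Exponential(theta) model, whose CDF F(x, theta) = 1 - exp(-theta * x)
# is monotonic in theta for fixed x > 0. All values here are illustrative.

import numpy as np

rng = np.random.default_rng(0)

x_observed = 2.5      # the fixed, observed data point (hypothetical)
n_draws = 100_000     # number of fiducial samples

# Step 1: treat the pivotal variable mu = F(x, theta) as Uniform(0, 1).
mu = rng.uniform(0.0, 1.0, size=n_draws)

# Step 2: invert mu = 1 - exp(-theta * x) for theta, holding x fixed:
# theta = -ln(1 - mu) / x.
theta_fiducial = -np.log(1.0 - mu) / x_observed

# The empirical distribution of theta_fiducial is the fiducial distribution
# of theta; its quantiles give Fisher-style fiducial limits.
lower, upper = np.quantile(theta_fiducial, [0.025, 0.975])
print(f"95% fiducial interval for theta: ({lower:.3f}, {upper:.3f})")
```

In this toy case the inversion is available in closed form; the same sampling idea works for any one-parameter model whose CDF is monotonic in the parameter.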
The idea sounds relatively simple, but Fisher's work on fiducial inference (which derives the fiducial distribution) generated intense debate in the literature. Some aspects of Fisher's approach failed, such as extending the distribution's properties to multi-parameter problems. In addition, Fisher's description of pivotal variables was seen by many as confusing and restrictive [3]. This may be why Fisher's work disappeared into the annals of history, earning the moniker Fisher's great failure [2].
That said, some authors have revived Fisher's work in recent years under the label of generalized inference, a tool for deriving statistical procedures when frequentist methods are inadequate [5]. The main idea of generalized inference is to transfer randomness from the data to the parameter space using an inverse of a data-generating equation, without using Bayes' theorem. The resulting distribution can then be used for inference [6].
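As a rough illustration of that idea, the sketch below applies it to a normal mean with known variance, using the data-generating equation x̄ = θ + σZ/√n with Z ~ N(0, 1). The data, the known σ, and the variable names are assumptions made for this example, not taken from the cited papers.

```python
# A sketch of the generalized fiducial idea for a normal mean with known
# variance, via the data-generating equation xbar = theta + sigma * Z / sqrt(n),
# Z ~ N(0, 1). The data and sigma below are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

data = np.array([4.8, 5.1, 5.6, 4.9, 5.3])   # hypothetical sample
n = data.size
xbar = data.mean()
sigma = 1.0                                  # assumed known standard deviation

# Invert the data-generating equation: theta = xbar - sigma * Z / sqrt(n).
# Re-drawing Z transfers the randomness from the data to the parameter.
z = rng.standard_normal(100_000)
theta_fiducial = xbar - sigma * z / np.sqrt(n)

# The resulting sample plays the role of a posterior-like distribution for
# theta, obtained without specifying a prior or using Bayes' theorem.
lower, upper = np.quantile(theta_fiducial, [0.025, 0.975])
print(f"95% generalized fiducial interval for theta: ({lower:.3f}, {upper:.3f})")
```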
References
[1] Fisher, R. A. (1930). "Inverse probability." Proceedings of the Cambridge Philosophical Society, 26, 528–535.
[2] Zabell, S. (1992). "R. A. Fisher and the Fiducial Argument." Statistical Science, 7(3), 369–387.
[3] Yager, R. (2008). Classic Works of the Dempster-Shafer Theory of Belief Functions. Springer.
[4] Savage, L. (1962). Discussion of Birnbaum, A., "On the foundations of statistical inference" (with discussion). Journal of the American Statistical Association, 57, 269–306.
[5] Hannig, J. (2009). "On Generalized Fiducial Inference." Retrieved May 19, 2024 from: https://hannig.cloudapps.unc.edu/publications/Hannig2009.pdf
[6] Hannig, J., Iyer, H., Lai, R. C. S., & Lee, T. C. M. (2016). "Generalized Fiducial Inference: A Review and New Results." Journal of the American Statistical Association, 111(515), 1346–1361. DOI: 10.1080/01621459.2016.1165102