Abstract: In a parametric statistical model, a function of the data is said to be ancillary if its distribution does not depend on the parameters in the model. The concept of ancillary statistics is one of R. A. Fisher's fundamental contributions to statistical inference. Fisher motivated the principle of conditioning on ancillary statistics by an argument based on relevant subsets, and by a closely related argument on recovery of information. Conditioning can also be used to reduce the dimension of the data to that of the parameter of interest, and conditioning on ancillary statistics ensures that no information about the parameter is lost in this reduction.
The present review article is an attempt to illustrate various aspects of the use of ancillarity in statistical inference. Both exact and asymptotic theory are considered. Without claiming completeness, we have made a modest attempt to crystallize many of the basic ideas in the literature.
Key words and phrases: Ancillarity paradox, approximate ancillary, estimating functions, hierarchical Bayes, local ancillarity, location, location-scale, multiple ancillaries, nuisance parameters, P-ancillarity, p-values, S-ancillarity, saddlepoint approximation.