Abstract: Given a general statistic T_n(X, θ) = T_n(X_1, ..., X_n, θ), a representation is given for the difference between the bootstrapped statistic and a replica of its own image. Except for a higher-order error term, this difference, which explains the validity of the bootstrap method, consists of three components. The first component is the difference between the parameter estimate and the true parameter, where the estimate serves as the bootstrap resampling base. The other two components depend only on the model F_θ(·) and the statistic T_n; they appear in the form of an inner product and behave like a derivative of T_n with respect to θ. This representation is an application of the classical mean value theorem, and it supports the superiority of the maximum likelihood summary as explored by Efron (1982b).
Key words and phrases: Bootstrap, bootstrap representation, estimation equation, maximum likelihood summary, mean value theorem, parametric bootstrap, root statistic.
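To make the setting concrete, the parametric bootstrap referred to in the abstract can be sketched as follows. The normal model, the specific root statistic T, and all numerical values below are illustrative assumptions chosen for this sketch; they are not taken from the paper.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical setup: model F_theta = Normal(theta, 1), and a root statistic
# T_n(X, theta) = sqrt(n) * (mean(X) - theta), which is N(0, 1) under F_theta.
def T(x, theta):
    n = len(x)
    return math.sqrt(n) * (statistics.fmean(x) - theta)

n, theta = 50, 2.0
x = [random.gauss(theta, 1.0) for _ in range(n)]
theta_hat = statistics.fmean(x)  # MLE of theta under this normal model

# Parametric bootstrap: resample from F_{theta_hat}, using the estimate
# theta_hat as the resampling base, and evaluate the root at theta_hat.
B = 2000
boot = [T([random.gauss(theta_hat, 1.0) for _ in range(n)], theta_hat)
        for _ in range(B)]

# The bootstrap distribution of T should be close to the true N(0, 1) law.
print(round(statistics.fmean(boot), 2), round(statistics.stdev(boot), 2))
```

Here the bootstrapped root mimics the sampling distribution of the original root; the representation in the paper decomposes the discrepancy between the two into the estimation error of the resampling base plus derivative-like terms in θ.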