Abstract: It is well known that for heavy-tailed distributions the bootstrap can lead to inconsistent estimation of the distribution of the sample mean, and that this difficulty may be overcome by using the so-called ``subsample bootstrap'', where the size of a bootstrap resample is an order of magnitude smaller than that of the sample. Naturally, one might ask whether, as in classical problems, the bootstrap applied to heavy-tailed distributions produces more accurate approximations to the distribution of the sample mean than do asymptotic methods. We show that, generally speaking, it does not. In an important class of problems, the subsample bootstrap performs more poorly than asymptotic methods, even if the subsample size is chosen optimally. A technique related to Richardson extrapolation, effectively a cross between the subsample bootstrap and asymptotic methods, performs better than either approach in some, but not all, circumstances.
Key words and phrases: Asymptotic approximation, bootstrap, central limit theorem, convergence rate, domain of attraction, Edgeworth expansion, mean, percentile, pivot, Richardson extrapolation, stable law.
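The subsample (m-out-of-n) bootstrap described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the Pareto data with tail index 1.5 (so the variance is infinite and the sample mean lies in the domain of attraction of a stable law), the choice m = n^{1/2}, and the number of resamples are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative heavy-tailed sample: Pareto with tail index alpha < 2,
# so the variance is infinite and the ordinary bootstrap for the mean
# is inconsistent.
alpha = 1.5
n = 10_000
x = rng.pareto(alpha, size=n)

def subsample_bootstrap(sample, m, n_boot=2000, rng=rng):
    """m-out-of-n ("subsample") bootstrap replicates of the resample
    mean, drawn with replacement, with m much smaller than n."""
    idx = rng.integers(0, len(sample), size=(n_boot, m))
    return sample[idx].mean(axis=1)

# Subsample size an order of magnitude smaller than n; m = n**0.5 is
# one common illustrative choice, not a recommendation from the paper.
m = int(n ** 0.5)
reps = subsample_bootstrap(x, m)

# The empirical distribution of `reps` approximates the sampling
# distribution of the subsample mean; its quantiles give an interval.
lo, hi = np.quantile(reps, [0.025, 0.975])
```

The abstract's point is that even with the subsample size chosen optimally, the accuracy of this approximation can be worse than that of asymptotic (stable-law) methods.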