
Statistica Sinica 15(2005), 359-380





CALIBRATING BAYES FACTOR UNDER PRIOR PREDICTIVE DISTRIBUTIONS


Gonzalo García-Donato and Ming-Hui Chen


Universidad de Castilla-La Mancha and University of Connecticut


Abstract: The Bayes factor is a popular criterion in Bayesian model selection. Because the prior predictive distribution of the Bayes factor is not symmetric across models, a scale of evidence in favor of one model against another constructed solely from the observed value of the Bayes factor is inappropriate. To overcome this problem, we propose a novel calibrating value of the Bayes factor based on the prior predictive distributions, together with a decision rule for model selection based on this calibrating value. We further show that the decision rule based on the calibration distribution is equivalent to a surprise-based decision: we choose the model under which the observed Bayes factor is less surprising. Moreover, we demonstrate that the decision rule based on the calibrating value is closely related to the classical rejection region for a standard hypothesis testing problem. An efficient Monte Carlo method is proposed for computing the calibrating value. In addition, we carefully examine the robustness of the decision rule based on the calibration distribution to the choice of imprecise priors under both nested and non-nested models. A data set is used to further illustrate the proposed methodology, and several important extensions are also discussed.
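To make the calibration idea concrete, the sketch below simulates the prior predictive (reference) distribution of the Bayes factor under each of two nested normal models by Monte Carlo, and measures how surprising the observed Bayes factor is under each reference distribution. This is a minimal illustration only: the conjugate normal setting, the sample size n, the prior variance tau2, the simulated "observed" data, and the min-tail surprise measure are all assumptions of this sketch, not the authors' calibrating value, decision rule, or data.

```python
import numpy as np

rng = np.random.default_rng(42)
n, tau2 = 20, 1.0  # illustrative sample size and prior variance under M1 (assumptions)

def log_bf10(xbar):
    """Closed-form log Bayes factor BF_{10} for M0: mu = 0 versus
    M1: mu ~ N(0, tau2), with N(mu, 1) data; it depends on the data
    only through the sufficient statistic xbar."""
    return -0.5 * np.log(1 + n * tau2) + (n**2 * tau2 * xbar**2) / (2 * (1 + n * tau2))

# Prior predictive draws of xbar under each model give Monte Carlo
# reference (calibration) distributions of the Bayes factor.
B = 100_000
ref_m0 = log_bf10(rng.normal(0.0, np.sqrt(1 / n), size=B))         # mu fixed at 0
ref_m1 = log_bf10(rng.normal(0.0, np.sqrt(tau2 + 1 / n), size=B))  # mu ~ N(0, tau2) integrated out

def surprise(ref, observed):
    """One simple surprise measure (smaller = more surprising): the
    minimum of the two tail probabilities of the observed log Bayes
    factor within the Monte Carlo reference sample."""
    return min(np.mean(ref <= observed), np.mean(ref >= observed))

# Hypothetical observed sample and its Bayes factor.
x_obs = rng.normal(0.3, 1.0, size=n)
bf_obs = log_bf10(x_obs.mean())

s0, s1 = surprise(ref_m0, bf_obs), surprise(ref_m1, bf_obs)
print(f"observed log BF10 = {bf_obs:.3f}")
print(f"surprise under M0 = {s0:.3f}, under M1 = {s1:.3f}")
print("observed Bayes factor is less surprising under", "M0" if s0 > s1 else "M1")
```

In this toy setting, the reference distribution of the Bayes factor under M1 is shifted toward larger values than under M0, which illustrates the asymmetry of the prior predictive distributions across models described above: a fixed evidential scale applied to the raw observed Bayes factor ignores this asymmetry, whereas calibrating against each model's reference distribution does not.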



Key words and phrases: Calibrating value, critical value, hypothesis testing, imprecise prior, L measure, model selection, Monte Carlo, posterior model probability, pseudo-Bayes factor, P-value.

