Statistica Sinica 11(2001), 419-443



CRITERION-BASED METHODS FOR BAYESIAN MODEL ASSESSMENT


Joseph G. Ibrahim, Ming-Hui Chen and Debajyoti Sinha


Harvard School of Public Health and Dana-Farber Cancer Institute,
Worcester Polytechnic Institute and Medical University of South Carolina


Abstract: We propose a general Bayesian criterion for model assessment. The criterion is constructed from the posterior predictive distribution of the data and can be written as a sum of two components, one involving the means of the posterior predictive distribution and the other involving its variances. It can be viewed as a Bayesian goodness-of-fit statistic that measures the performance of a model by combining how close its predictions are to the observed data with the variability of those predictions. We call this proposed predictive criterion the L measure; it is motivated by earlier work of Ibrahim and Laud (1994) and related to a criterion of Gelfand and Ghosh (1998). We examine the L measure in detail for the class of generalized linear models and for survival models with right-censored or interval-censored data. We also propose a calibration of the L measure, defined as the prior predictive distribution of the difference between the L measures of the candidate model and the criterion-minimizing model, and call it the calibration distribution. The calibration distribution allows us to formally compare two competing models based on their L measure values. We discuss theoretical properties of the calibration distribution in detail and provide Monte Carlo methods for computing it. For the linear model, we derive analytic closed-form expressions for the L measure and the calibration distribution, as well as a closed-form expression for the mean of the calibration distribution. These novel developments enable us to fully characterize the properties of the L measure for each model under consideration and facilitate direct formal comparisons between several models, including non-nested models. Informative priors based on historical data and computational techniques are also discussed. Several simulated and real datasets are used to demonstrate the proposed methodology.
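To make the criterion concrete, the following is a minimal sketch of how such a measure can be estimated from posterior predictive draws. It assumes the unweighted form motivated by Ibrahim and Laud (1994), L(y) = sum_i Var(z_i | y) + sum_i (E(z_i | y) - y_i)^2, where z_i denotes a future replicate of observation y_i; the function and variable names below are illustrative, not from the paper.

    import numpy as np

    def l_measure(z_rep, y):
        """Monte Carlo estimate of an L-measure-type criterion.

        z_rep : (S, n) array of posterior predictive draws, one row per
                posterior sample, one column per observation.
        y     : (n,) array of observed responses.
        """
        pred_mean = z_rep.mean(axis=0)        # estimates E(z_i | y) per observation
        pred_var = z_rep.var(axis=0, ddof=1)  # estimates Var(z_i | y) per observation
        # Variance term penalizes imprecise predictions; squared-error
        # term penalizes predictions far from the observed data.
        return pred_var.sum() + ((pred_mean - y) ** 2).sum()

Smaller values of the criterion are preferred: the variance component guards against models whose predictions are highly uncertain, while the squared-error component measures fit to the observed data. Placing a weight on the squared-error term yields the more general weighted form of the criterion discussed in the paper.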



Key words and phrases: Calibration, model selection, predictive criterion, predictive distribution, variable selection.


