Abstract: Model selection is a key component in any statistical analysis. In this paper we discuss this issue from the point of view of robustness and we point out the extreme sensitivity of many classical model selection procedures to outliers and other departures from the distributional assumptions of the model. First, we focus on regression and review a robust version of Mallows's Cp as well as some related approaches. We then go beyond the regression model and discuss a robust version of the Akaike Information Criterion for general parametric models.
Key words and phrases: Akaike criterion, autoregressive models, competing models, cross-validation, diagnostics, information theory, Mallows's Cp, M-estimators, non-nested hypotheses, outliers, robust Akaike criterion, robust Cp, robust regression, robust tests, Schwarz criterion, time series, variable selection, weighted prediction error.