Abstract: Penalized least squares with an appropriately chosen penalty is widely used for simultaneous variable selection and coefficient estimation in linear regression. However, the efficiency of least squares (LS) based methods is adversely affected by outlying observations and heavy-tailed error distributions. The least absolute deviation (LAD) estimator, on the other hand, is more robust but can be inefficient for many distributions of interest. To overcome these issues, we propose a novel method termed the regularized rank regression estimator. The proposed estimator is shown to be highly efficient across a wide spectrum of error distributions. We show further that, when the adaptive LASSO penalty is used, the estimator achieves consistent variable selection. For choosing the tuning parameters, we propose a score statistic-based information criterion that bypasses density estimation. Both simulations and real data analysis show that the proposed method performs well in finite samples.
Key words and phrases: Composite quantile regression, LARS, LASSO, rank regression, variable selection.