

Statistica Sinica 20 (2010), 101-148





A SELECTIVE OVERVIEW OF VARIABLE SELECTION

IN HIGH DIMENSIONAL FEATURE SPACE


Jianqing Fan and Jinchi Lv


Princeton University and University of Southern California


Abstract: High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discovery. The traditional best subset selection methods, which can be regarded as a specific form of penalized likelihood, are computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality, and have been widely applied to simultaneously select important variables and estimate their effects in high dimensional statistical inference. In this article, we present a brief account of recent developments in the theory, methods, and implementation of high dimensional variable selection. Questions of what limits of dimensionality such methods can handle, what role penalty functions play, and what statistical properties the resulting estimators possess are rapidly driving advances in the field. The properties of non-concave penalized likelihood and its role in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods.
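As a minimal illustration of two ideas highlighted in the abstract, sure independence screening and L1-penalized least squares (the LASSO), the following NumPy sketch ranks features by absolute marginal correlation with the response and then fits the screened submodel by cyclic coordinate descent with soft-thresholding. The function names and all tuning choices (the screening size d, the penalty level lam) are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sis_screen(X, y, d):
    """Sure independence screening: rank the p features by absolute
    marginal correlation with the response and keep the top d."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / len(y)
    return np.argsort(corr)[::-1][:d]

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO, i.e. minimize (1/2n)||y - X b||^2 + lam * ||b||_1,
    via cyclic coordinate descent with soft-thresholding updates."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r_j = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r_j
            # soft-threshold the univariate least-squares solution
            beta[j] = np.sign(z) * max(abs(z) - n * lam, 0.0) / col_sq[j]
    return beta
```

A typical use, in the spirit of the two-scale approach the paper reviews, is to screen an ultra-high dimensional problem down to a moderate dimension with `sis_screen` and then run the penalized fit `lasso_cd` on the surviving features only.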



Key words and phrases: Dimensionality reduction, folded-concave penalty, high dimensionality, LASSO, model selection, oracle property, penalized least squares, penalized likelihood, SCAD, sure independence screening, sure screening, variable selection.
