
Statistica Sinica 23 (2013), 809-828



C. Dossal$^{1}$, M. Kachour$^{2}$, M.J. Fadili$^{2}$, G. Peyré$^{3}$ and C. Chesneau$^{4}$

$^1$CNRS-Univ. Bordeaux 1, $^2$CNRS-ENSICAEN-Univ. Caen,
$^3$CNRS-Univ. Paris-Dauphine and $^4$CNRS-Univ. Caen

Abstract: In this paper, we investigate and give a closed-form expression for the degrees of freedom ($\mathrm{dof}$) of penalized $\ell_1$ minimization (also known as the Lasso) for linear regression models. Namely, we show that for any given Lasso regularization parameter $\lambda$ and any observed data $y$ belonging to a set of full (Lebesgue) measure, the cardinality of the support of a particular solution of the Lasso problem is an unbiased estimator of the degrees of freedom. This is achieved without requiring uniqueness of the Lasso solution. Thus, our result holds for both the underdetermined and the overdetermined case; the latter was originally studied in Zou, Hastie, and Tibshirani (2007). We also show, by providing a simple counterexample, that although the $\mathrm{dof}$ theorem of Zou, Hastie, and Tibshirani (2007) is correct, their proof contains a flaw, since their divergence formula holds on a different set of full measure than the one they claim. An effective estimator of the number of degrees of freedom has several applications, including an objectively guided choice of the regularization parameter in the Lasso through the ${\rm SURE}$ framework. Our theoretical findings are illustrated through several numerical simulations.

Key words and phrases: Degrees of freedom, Lasso, model selection criteria, ${\rm SURE}$.
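The abstract's central claim can be illustrated numerically: solve the Lasso for a fixed $\lambda$ and count the nonzero coefficients of the solution, which the paper shows is an unbiased estimate of the $\mathrm{dof}$. The sketch below is not the authors' code; it is a minimal illustration that solves the penalized problem $\min_b \frac{1}{2}\|y - Xb\|_2^2 + \lambda\|b\|_1$ with a standard proximal-gradient (ISTA) iteration on a synthetic underdetermined design, then reports the support size. The data dimensions, regularization value, and convergence tolerance are all illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    # Componentwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    # Solve min_b 0.5*||y - X b||^2 + lam*||b||_1 by proximal gradient (ISTA).
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b + X.T @ (y - X @ b) / L, lam / L)
    return b

rng = np.random.default_rng(0)
n, p = 20, 50                               # underdetermined case: p > n
X = rng.standard_normal((n, p))
b_true = np.zeros(p)
b_true[:3] = [2.0, -1.5, 1.0]               # sparse ground truth
y = X @ b_true + 0.1 * rng.standard_normal(n)

b_hat = lasso_ista(X, y, lam=0.5)
# |supp(b_hat)|: the paper's unbiased estimator of the degrees of freedom.
dof_estimate = np.count_nonzero(np.abs(b_hat) > 1e-8)
print(dof_estimate)
```

In practice, this support count would be plugged into the ${\rm SURE}$ risk estimate to select $\lambda$, as the abstract suggests; the threshold `1e-8` used to declare a coefficient nonzero is a numerical convenience, since soft-thresholding already sets coefficients exactly to zero.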
