
Statistica Sinica 31 (2021), 473-489

CONVERGENCE RATES OF NONPARAMETRIC PENALIZED REGRESSION
UNDER MISSPECIFIED SMOOTHNESS

Noah Simon and Ali Shojaie

University of Washington

Abstract: We present a general approach for computing the convergence rates of nonparametric penalized regression estimators under misspecified smoothness, where the true regression function lies in the closure, but not the interior, of the space of smooth functions characterized by the penalty. The proposed approach uses an approximating/representative sequence with a finite (but growing) penalty value. To establish consistency, we balance the rate at which the penalty grows against the approximation error of the representative sequence. We apply these ideas to the two most commonly used nonparametric penalties: total-variation and Sobolev semi-norms. We give an upper bound on the rate at which we can estimate a function with bounded (l + 1)th-order total-variation or Sobolev complexity, using a kth-order total-variation or Sobolev penalty (for k > l + 1), respectively. Our bounds have a simple form that depends on k and l. In particular, we show that using total-variation penalties, we achieve a rate better than n^{-1/2} for any l ≥ 0 and k ≥ 1. We evaluate the sharpness of our total-variation bounds using a simulation. Empirically, for l = 0 our bound appears to be sharp; for l ≥ 1, there appears to be a small gap between our upper bound and the empirical rate.
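The simulation described above estimates an empirical convergence rate by regressing log mean-squared error on log sample size. The sketch below illustrates that methodology only; it is not the paper's estimator (fitting a kth-order total-variation penalized estimator requires a specialized trend-filtering solver), so a simple running-mean smoother stands in, applied to a piecewise-linear truth with bounded first-order total variation. The bandwidth choice `n ** (2/3)` and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def running_mean_fit(y, window):
    """Simple moving-average smoother (a stand-in estimator for this sketch)."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

def empirical_mse(n, reps=20):
    """Monte Carlo estimate of the smoother's MSE at sample size n."""
    errs = []
    for _ in range(reps):
        x = np.linspace(0.0, 1.0, n)
        f = np.abs(x - 0.5)                 # piecewise-linear truth, bounded TV
        y = f + rng.normal(scale=0.1, size=n)
        window = max(1, int(n ** (2 / 3)))  # heuristic bandwidth choice
        fhat = running_mean_fit(y, window)
        # Evaluate on the interior to avoid boundary effects of the smoother
        errs.append(np.mean((fhat - f)[window:-window] ** 2))
    return np.mean(errs)

ns = np.array([200, 400, 800, 1600, 3200])
mses = np.array([empirical_mse(n) for n in ns])

# The slope of log(MSE) against log(n) estimates the empirical rate exponent,
# which can then be compared against a theoretical upper bound.
slope = np.polyfit(np.log(ns), np.log(mses), 1)[0]
print(f"estimated rate exponent: {slope:.2f}")
```

A negative slope steeper than -1/2 on the log-log scale corresponds to an MSE rate faster than n^{-1/2}; comparing such fitted slopes with the theoretical exponents is one way to assess whether a bound appears sharp.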

Key words and phrases: Misspecification, nonparametric estimation, penalized regression, Sobolev, total-variation.
