
Statistica Sinica 17(2007), 1065-1090

ADAPTIVE ESTIMATION IN A NONPARAMETRIC REGRESSION MODEL WITH ERRORS-IN-VARIABLES



F. Comte and M.-L. Taupin

Université Paris Descartes and IUT de Paris 5, Université d'Orsay

Abstract: We consider a regression model with errors-in-variables. Let $(Y_i, Z_i)$, $i=1, \dots, n$, be $n$ i.i.d. copies of $(Y, Z)$ satisfying $Y=f(X)+\xi$, $Z=X+\sigma\varepsilon$, involving independent and unobserved random variables $X, \xi, \varepsilon$. The density of $\varepsilon$ and the constant noise level $\sigma$ are known, while the densities of $X$ and $\xi$ are unknown. Using the observations $(Y_i, Z_i)$, $i=1, \dots, n$, we propose an estimator $\tilde f$ of the regression function $f$, defined as the ratio of two adaptive estimators: an estimator of $\ell=fg$ divided by an estimator of $g$, the density of $X$. Both estimators are obtained by minimization of penalized contrast functions. We prove that the mean integrated squared error (MISE) of $\tilde f$ on a compact set is bounded by the sum of the two MISEs of the estimators of $\ell$ and $g$. Rates of convergence are given when $\ell$ and $g$ belong to various smoothness classes and when the error $\varepsilon$ is either ordinary smooth or super smooth. The rate of $\tilde f$ is optimal in a minimax sense in all cases where lower bounds are available.

Key words and phrases: Adaptive estimation, density deconvolution, errors-in-variables, minimax estimation, nonparametric regression, projection estimators.
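The quotient construction in the abstract, with $\hat\ell$ estimating $\ell = fg$ and $\hat g$ estimating the density $g$ of $X$, can be illustrated with a simple sketch. The paper's estimators are adaptive penalized projection estimators; the sketch below substitutes a fixed-bandwidth deconvolution kernel (sinc-type kernel for Gaussian error $\varepsilon$), which implements the same ratio idea but not the paper's method. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def deconv_kernel(u, h, sigma, n_grid=401):
    """Sinc-type deconvolution kernel for Gaussian measurement error.

    The base kernel has Fourier transform equal to the indicator of [-1, 1];
    dividing by the noise characteristic function exp(-(sigma*t/h)**2 / 2)
    undoes the convolution of g with the density of sigma * eps.
    """
    t = np.linspace(-1.0, 1.0, n_grid)
    dt = t[1] - t[0]
    uu = np.asarray(u, dtype=float).ravel()
    # The integrand is even in t, so Fourier inversion reduces to a cosine integral.
    integrand = np.cos(np.outer(uu, t)) * np.exp((sigma * t / h) ** 2 / 2.0)
    vals = integrand.sum(axis=1) * dt / (2.0 * np.pi)
    return vals.reshape(np.shape(u))

def ratio_regression_estimator(x, Y, Z, h, sigma):
    """Estimate f(x) = ell(x) / g(x) from noisy covariates Z = X + sigma * eps."""
    x = np.asarray(x, dtype=float)
    n = len(Z)
    # Weight matrix: one deconvolution-kernel weight per (evaluation point, observation).
    W = deconv_kernel((x[:, None] - Z[None, :]) / h, h, sigma) / (n * h)
    g_hat = W.sum(axis=1)                    # deconvolution estimate of the density g of X
    ell_hat = (W * Y[None, :]).sum(axis=1)   # estimate of ell = f * g
    return ell_hat / g_hat
```

As in the paper, the bandwidth (here `h`) must balance the smoothing bias against the variance inflation caused by dividing by the noise characteristic function; the adaptive penalized contrasts of the paper make that trade-off automatically, whereas this sketch fixes it by hand.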
