
Statistica Sinica 29 (2019), 353-369

CONVEX SURROGATE MINIMIZATION IN
CLASSIFICATION
Cui Xiong1 , Jun Shao1,2 and Lei Wang3
1 East China Normal University, 2 University of Wisconsin-Madison
and 3 Nankai University

Abstract: Convex optimization is an increasingly important theme in applications. We consider the construction of a binary classification rule by minimizing the risk based on a convex loss that serves as a surrogate to the 0-1 loss. Compared with the approach of directly estimating the conditional probability of the binary class label given a vector of covariates, our proposed convex surrogate minimization approach is computationally simpler and more efficient. We begin by discussing what types of convex surrogates are valid. When the conditional probability model for the class label is parametric, we show that our proposed approach is either equivalent to the traditional maximum likelihood method or serves as a computationally cheaper substitute. When the conditional probability model is semiparametric, we show how to apply convex surrogate minimization in conjunction with kernel weighting, which yields an asymptotically valid classification rule. Convergence rates are established and simulation results are presented.

Key words and phrases: Binary classification, convex optimization, kernel weighting, 0-1 loss.
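
Below is a minimal sketch, not the authors' implementation, of the general idea of classifying by minimizing a convex surrogate of the 0-1 loss. The logistic loss phi(z) = log(1 + exp(-z)) is used as the surrogate, and the simulated data, linear score f(x) = x'w, step size, and iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Simulated covariates X and labels y in {-1, +1} (illustrative only)
n, p = 500, 3
X = rng.normal(size=(n, p))
true_w = np.array([1.5, -2.0, 0.5])
y = np.where(X @ true_w + 0.5 * rng.normal(size=n) > 0, 1.0, -1.0)

def surrogate_risk(w):
    # Empirical risk under the convex logistic surrogate phi(y f(x))
    margins = y * (X @ w)
    return np.mean(np.log1p(np.exp(-margins)))

def surrogate_grad(w):
    # Gradient of the mean logistic loss: -mean( y x sigmoid(-y x'w) )
    margins = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(margins))  # sigmoid(-margin)
    return -(X * (y * s)[:, None]).mean(axis=0)

# Plain gradient descent on the convex surrogate risk
w = np.zeros(p)
for _ in range(2000):
    w -= 1.0 * surrogate_grad(w)

# The learned rule classifies by sign(x'w); compare its 0-1 error to the surrogate risk
zero_one_error = np.mean(np.sign(X @ w) != y)
print(f"surrogate risk: {surrogate_risk(w):.4f}, 0-1 error: {zero_one_error:.4f}")

Because the surrogate is convex and differentiable, the minimization is a standard convex optimization problem, whereas minimizing the 0-1 loss directly is combinatorial; this is the computational advantage the abstract refers to.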
