Statistica Sinica 34 (2024), 933-954
Abstract: In general, selecting hyperparameters for unsupervised learning problems is challenging, owing to the lack of ground truth for validation. Despite the prevalence of this problem in statistics and machine learning, especially in clustering problems, few methods with theoretical guarantees exist for tuning these hyperparameters. In this paper, we provide a framework that relies on maximizing a trace criterion connecting a similarity matrix with clustering solutions. This framework has provable guarantees for selecting hyperparameters in a number of distinct models. We consider the sub-Gaussian mixture model and network models as examples of independently and identically distributed (i.i.d.) and non-i.i.d. data, respectively. We demonstrate that the same framework can be used to choose the Lagrange multipliers of the penalty terms in semidefinite programming relaxations for community detection, as well as the bandwidth parameter for constructing kernel similarity matrices for spectral clustering. By incorporating a cross-validation procedure, we show that the framework also provides consistent model selection for network models. Using a variety of simulated and real data examples, we show that our framework outperforms other widely used tuning procedures across a broad range of parameter settings.
Key words and phrases: Clustering, hyperparameter tuning, model selection, network models, sub-Gaussian mixtures.