Statistica Sinica 31 (2021), 843-865
Ray Bai and Malay Ghosh
Abstract: We study a high-dimensional Bayesian linear regression model in which the scale parameter follows a general beta prime distribution. Under the assumption of sparsity, we show that an appropriate selection of the hyperparameters in the beta prime prior leads to the (near) minimax posterior contraction rate when p ≫ n. For finite samples, we propose a data-adaptive method for estimating the hyperparameters based on the marginal maximum likelihood (MML). This enables our prior to adapt to both sparse and dense settings and, under our proposed empirical Bayes procedure, the MML estimates are never at risk of collapsing to zero. We derive efficient Monte Carlo expectation-maximization (EM) and variational EM algorithms for our model, both of which are available in the R package NormalBetaPrime. Simulations and an analysis of a gene expression data set illustrate our model's self-adaptivity to varying levels of sparsity and signal strengths.
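The prior described in the abstract places a beta prime distribution on the scale parameter of a normal, i.e., a scale mixture of normals. A minimal sketch of drawing from such a prior is below; it uses the standard fact that the ratio of two independent Gamma variates, G1 ~ Gamma(a, 1) and G2 ~ Gamma(b, 1), follows a Beta'(a, b) distribution. The hyperparameter values (a = 0.5, b = 3) are illustrative only, not the paper's recommended choices.

```python
import random
import statistics

def sample_beta_prime(a, b, rng=random):
    # Beta'(a, b) draw via a ratio of independent Gamma variates:
    # if G1 ~ Gamma(a, 1) and G2 ~ Gamma(b, 1), then G1 / G2 ~ Beta'(a, b).
    return rng.gammavariate(a, 1.0) / rng.gammavariate(b, 1.0)

def sample_coefficient(a, b, sigma2=1.0, rng=random):
    # Normal scale mixture: beta_j | omega_j ~ N(0, sigma2 * omega_j),
    # with omega_j ~ Beta'(a, b).  A small a concentrates mass near zero
    # (shrinking noise coefficients), while the heavy tail of Beta'(a, b)
    # leaves room for large signals.
    omega = sample_beta_prime(a, b, rng)
    return rng.gauss(0.0, (sigma2 * omega) ** 0.5)

random.seed(1)
draws = [sample_beta_prime(0.5, 3.0) for _ in range(200000)]
# Sanity check: the mean of Beta'(a, b) is a / (b - 1) for b > 1,
# so here it should be close to 0.5 / 2 = 0.25.
print(round(statistics.mean(draws), 3))
```

The Gamma-ratio construction is convenient because it avoids any special-function evaluation and makes the scale draws trivially parallelizable.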
Key words and phrases: Beta prime density, empirical Bayes, high-dimensional data, posterior contraction, scale mixtures of normal distributions.