Abstract: Existing privacy guarantees for convex optimization algorithms do not apply to heavy-tailed data with regularized estimation. This is a notable gap in the differential privacy (DP) literature, given the broad prevalence of non-Gaussian data and penalized optimization problems. In this work, we propose three (ϵ, δ)-DP methods for regularized convex optimization and derive bounds on their population excess risks in a framework that accommodates heavy-tailed data under weaker assumptions than previous works. This work is the first to address DP in generic regularized convex optimization problems with heavy-tailed responses. Two of our methods augment a basic (ϵ, δ)-DP algorithm with robust procedures for privately estimating minibatch gradients. Our numerical analyses highlight how the performance of our methods depends on data dimensionality, batch size, and privacy budget, and suggest settings where each approach is favorable.
Key words and phrases: Error bound, non-smooth regularization, privacy protection, randomized mechanism.