The Cost of Privacy in Generalized Linear Models: Algorithms and Minimax
Lower Bounds
- URL: http://arxiv.org/abs/2011.03900v2
- Date: Sun, 6 Dec 2020 00:30:30 GMT
- Title: The Cost of Privacy in Generalized Linear Models: Algorithms and Minimax
Lower Bounds
- Authors: T. Tony Cai, Yichen Wang, Linjun Zhang
- Abstract summary: We show that the proposed algorithms are nearly rate-optimal by characterizing their statistical performance.
The lower bounds are obtained via a novel technique, which is based on Stein's Lemma and generalizes the tracing attack technique for privacy-constrained lower bounds.
- Score: 8.760651633031342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose differentially private algorithms for parameter estimation in both
low-dimensional and high-dimensional sparse generalized linear models (GLMs) by
constructing private versions of projected gradient descent. We show that the
proposed algorithms are nearly rate-optimal by characterizing their statistical
performance and establishing privacy-constrained minimax lower bounds for GLMs.
The lower bounds are obtained via a novel technique, which is based on Stein's
Lemma and generalizes the tracing attack technique for privacy-constrained
lower bounds. This lower bound argument can be of independent interest as it is
applicable to general parametric models. Simulated and real data experiments
are conducted to demonstrate the numerical performance of our algorithms.
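The abstract describes private versions of projected gradient descent for GLMs. As a rough, hedged illustration of that idea (not the paper's actual algorithm or noise calibration), the sketch below runs noisy projected gradient descent for logistic regression; the clipping threshold C, noise scale, step size eta, projection radius R, and iteration count T are placeholder choices.

```python
# Minimal sketch of noisy projected gradient descent for logistic regression
# (a GLM). Illustrative only: C, eta, R, T, and the noise scale are
# placeholders, not the calibration used in the paper.
import numpy as np

def dp_projected_gd(X, y, epsilon, delta, T=100, eta=0.1, C=1.0, R=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = np.zeros(d)
    # Gaussian noise scale for T releases of the clipped average gradient
    # (simple composition; tighter accountants exist).
    sigma = C * np.sqrt(2 * T * np.log(1.25 / delta)) / (n * epsilon)
    for _ in range(T):
        # Per-sample logistic-loss gradients, clipped to l2 norm at most C.
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grads = (p - y)[:, None] * X
        scale = np.maximum(np.linalg.norm(grads, axis=1) / C, 1.0)
        grads = grads / scale[:, None]
        noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma, size=d)
        beta = beta - eta * noisy_grad
        # Project back onto the l2 ball of radius R.
        nrm = np.linalg.norm(beta)
        if nrm > R:
            beta *= R / nrm
    return beta
```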
Related papers
- Dimension reduction via score ratio matching [0.9012198585960441]
We propose a framework, derived from score-matching, to extend gradient-based dimension reduction to problems where gradients are unavailable.
We show that our approach outperforms standard score-matching for problems with low-dimensional structure.
arXiv Detail & Related papers (2024-10-25T22:21:03Z) - SubZero: Random Subspace Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning [66.27334633749734]
As language models grow in size, memory demands for backpropagation increase.
Zeroth-order (ZO) optimization methods offer a memory-efficient alternative.
We show that SubZero improves fine-tuning and converges faster than standard ZO approaches.
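The SubZero entry above relies on zeroth-order optimization. The sketch below illustrates the generic idea of a two-point zeroth-order gradient estimate restricted to a random subspace; it is a hedged illustration only, with the subspace size k, smoothing parameter mu, and step size as placeholders, and it is not SubZero's actual construction.

```python
# Generic sketch of a zeroth-order (ZO) update restricted to a random
# subspace. Illustrative only; not SubZero's exact scheme.
import numpy as np

def zo_subspace_step(loss, x, k=16, mu=1e-3, eta=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    d = x.size
    P = rng.normal(size=(d, k)) / np.sqrt(k)   # random subspace basis
    z = rng.normal(size=k)                     # perturbation within the subspace
    u = P @ z
    # Two-point finite-difference estimate of the directional derivative.
    g_scalar = (loss(x + mu * u) - loss(x - mu * u)) / (2.0 * mu)
    grad_est = g_scalar * u                    # ZO gradient estimate
    return x - eta * grad_est

# Usage: x = zo_subspace_step(lambda w: np.sum((w - 1.0) ** 2), np.zeros(1000))
```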
arXiv Detail & Related papers (2024-10-11T17:01:43Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve a model pruning of 80% parameters while retaining 93.43% of the original performance without any calibration data.
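The compression entry above centers on rank-k approximation of parameter matrices. As a hedged illustration of plain rank-k approximation (not the paper's joint, data-free, denoising variant), the following sketch truncates the SVD of a single weight matrix.

```python
# Minimal sketch of rank-k approximation of one weight matrix via truncated
# SVD. Illustrative only; the paper's joint approximation across matrices
# and its denoising aspects are not reproduced here.
import numpy as np

def rank_k_approx(W, k):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

W = np.random.default_rng(0).normal(size=(256, 256))
W_k = rank_k_approx(W, k=32)
rel_err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
```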
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - Score Attack: A Lower Bound Technique for Optimal Differentially Private
Learning [8.760651633031342]
We propose a novel approach called the score attack, which provides a lower bound on the differential-privacy-constrained minimax risk of parameter estimation.
It can optimally lower bound the minimax risk of estimating unknown model parameters, up to a logarithmic factor, while ensuring differential privacy for a range of statistical problems.
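The score attack entry above, like the lower bound in the main paper, builds a tracing-style statistic from the score function. A hedged sketch of the general shape of such a statistic, assuming i.i.d. data z_1, ..., z_n with density f_theta and a private estimator M(Z), is:

```latex
% General shape of a score-attack statistic against the i-th datum
% (assumptions: i.i.d. data with density f_\theta, private estimator M(Z)).
\[
  \mathcal{A}\bigl(M(Z), z_i\bigr)
    = \bigl\langle\, M(Z) - \theta,\; \nabla_\theta \log f_\theta(z_i) \,\bigr\rangle .
\]
% Differential privacy caps each term's expectation from above, while a
% Stein-type identity bounds the sum over i from below in terms of the
% estimation error, which yields the minimax lower bound.
```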
arXiv Detail & Related papers (2023-03-13T14:26:27Z) - Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper develops privacy-preserving machine learning algorithms that retain strong predictive performance, a question of both practical and theoretical importance.
arXiv Detail & Related papers (2022-09-09T08:54:13Z) - Data-Driven Minimax Optimization with Expectation Constraints [9.373649142701803]
We propose a class of efficient primal-dual algorithms to tackle the minimax expectation-constrained problem.
We show that our algorithms converge at the optimal rate of $\mathcal{O}(1/\sqrt{N})$.
arXiv Detail & Related papers (2022-02-16T05:23:27Z) - High-Dimensional Differentially-Private EM Algorithm: Methods and
Near-Optimal Statistical Guarantees [8.089708900273804]
We develop a general framework to design differentially private expectation-maximization (EM) algorithms in high-dimensional latent variable models.
In each model, we establish the near-optimal rate of convergence with differential privacy constraints.
We propose a near rate-optimal EM algorithm with differential privacy guarantees in this setting.
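The EM entry above privatizes the EM iteration. The toy sketch below shows one noisy EM step for a two-component, one-dimensional Gaussian mixture with known unit variances; the clipping bound B and the simple Gaussian-mechanism noise scale are placeholder assumptions and do not reflect the paper's high-dimensional, near-optimal procedure.

```python
# Toy sketch of one differentially private EM iteration for a two-component
# Gaussian mixture with known unit variances and equal mixing weights.
# Illustrative only: B and the noise scale are rough placeholders.
import numpy as np

def dp_em_step(x, mu0, mu1, epsilon, delta, B=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.clip(x, -B, B)                      # bound each datum's contribution
    # E-step: responsibilities for component 1.
    log_r1 = -0.5 * (x - mu1) ** 2 + 0.5 * (x - mu0) ** 2
    r1 = 1.0 / (1.0 + np.exp(-log_r1))
    r0 = 1.0 - r1
    # M-step: noisy sufficient statistics (responsibility sums and
    # responsibility-weighted data sums), noise scaled via B.
    sigma = 2.0 * max(B, 1.0) * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    s0 = r0.sum() + rng.normal(0, sigma)
    s1 = r1.sum() + rng.normal(0, sigma)
    t0 = (r0 * x).sum() + rng.normal(0, sigma)
    t1 = (r1 * x).sum() + rng.normal(0, sigma)
    return t0 / max(s0, 1.0), t1 / max(s1, 1.0)
```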
arXiv Detail & Related papers (2021-04-01T04:08:34Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of our methods over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
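The least-squares entry above studies randomized embeddings. As a hedged point of reference (a one-shot Gaussian sketch-and-solve rather than the iterative methods analyzed in the paper), the sketch below compares a sketched solution with the exact least-squares solution; the sketch size m is a placeholder.

```python
# Generic sketch-and-solve illustration for least squares with a Gaussian
# embedding. Illustrative only; not the paper's optimal iterative scheme.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 5000, 50, 500
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

S = rng.normal(size=(m, n)) / np.sqrt(m)       # Gaussian embedding
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
rel_diff = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
```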
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.