Private Adaptive Optimization with Side Information
- URL: http://arxiv.org/abs/2202.05963v1
- Date: Sat, 12 Feb 2022 03:02:06 GMT
- Title: Private Adaptive Optimization with Side Information
- Authors: Tian Li, Manzil Zaheer, Sashank J. Reddi, Virginia Smith
- Abstract summary: AdaDPS is a general framework that uses non-sensitive side information to precondition the gradients.
We show AdaDPS reduces the amount of noise needed to achieve similar privacy guarantees.
Our results show that AdaDPS improves accuracy by 7.7% (absolute) on average.
- Score: 48.91141546624768
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adaptive optimization methods have become the default solvers for many
machine learning tasks. Unfortunately, the benefits of adaptivity may degrade
when training with differential privacy, as the noise added to ensure privacy
reduces the effectiveness of the adaptive preconditioner. To this end, we
propose AdaDPS, a general framework that uses non-sensitive side information to
precondition the gradients, allowing the effective use of adaptive methods in
private settings. We formally show AdaDPS reduces the amount of noise needed to
achieve similar privacy guarantees, thereby improving optimization performance.
Empirically, we leverage simple and readily available side information to
explore the performance of AdaDPS in practice, comparing to strong baselines in
both centralized and federated settings. Our results show that AdaDPS improves
accuracy by 7.7% (absolute) on average -- yielding state-of-the-art
privacy-utility trade-offs on large-scale text and image benchmarks.
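The mechanism is easy to see in code. Below is a minimal sketch of a DP-SGD-style step with side-information preconditioning, assuming RMSProp-style scaling by second-moment statistics estimated on non-sensitive data; the function and argument names (`adadps_style_step`, `side_info_sq`) are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def adadps_style_step(params, per_example_grads, side_info_sq, lr=0.1,
                      clip_norm=1.0, noise_mult=1.0, eps=1e-8):
    """One DP training step that preconditions gradients with second-moment
    statistics estimated from non-sensitive side information, then clips
    per-example and adds Gaussian noise (an illustrative sketch, not the
    paper's exact algorithm)."""
    scale = np.sqrt(side_info_sq) + eps  # RMSProp-style preconditioner
    clipped = []
    for g in per_example_grads:
        pg = g / scale                   # precondition before privatization
        norm = np.linalg.norm(pg)
        clipped.append(pg * min(1.0, clip_norm / (norm + eps)))
    # Gaussian mechanism: the clipped sum has sensitivity clip_norm.
    noisy_sum = np.sum(clipped, axis=0) + np.random.normal(
        0.0, noise_mult * clip_norm, size=params.shape)
    return params - lr * noisy_sum / len(per_example_grads)
```

Because the preconditioner is computed from non-sensitive statistics, applying it consumes no additional privacy budget, and well-scaled gradients get by with less relative noise; that is one intuition for the noise reduction the abstract claims.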
Related papers
- DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction [57.83978915843095]
This paper introduces DiSK, a novel framework designed to significantly enhance the performance of differentially private optimizers.
To ensure practicality for large-scale training, we simplify the Kalman filtering process, minimizing its memory and computational demands.
arXiv Detail & Related papers (2024-10-04T19:30:39Z)
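A toy illustration of DiSK's Kalman-filter idea: treat the true gradient as a slowly drifting latent state and each privatized gradient as a noisy observation of it. The fixed scalar gain below is an assumption for brevity; DiSK derives its (simplified) filter rather than hard-coding one.

```python
import numpy as np

class ScalarGainGradientFilter:
    """Toy Kalman-style smoother for privatized gradients: keeps a running
    state estimate and corrects it with each noisy observation using a
    fixed scalar gain (illustrative only; not DiSK's actual filter)."""

    def __init__(self, dim, gain=0.3):
        self.state = np.zeros(dim)  # current estimate of the true gradient
        self.gain = gain            # how much to trust each new observation

    def update(self, noisy_grad):
        # Predict step is the identity (gradient assumed to drift slowly),
        # so the correction is just a gain-weighted innovation.
        innovation = noisy_grad - self.state
        self.state = self.state + self.gain * innovation
        return self.state
```

The smoothed `state` would then be used in place of the raw noisy gradient in the optimizer update.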
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
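The summary gives only the ingredients (a preference loss plus DRO), so the sketch below shows one standard way such an adaptive loss can be instantiated: reweight per-pair Bradley-Terry losses with the closed-form solution of a KL-regularized DRO inner maximization. Whether this matches the paper's exact construction is an assumption.

```python
import numpy as np

def adaptive_preference_loss(margins, tau=1.0):
    """Per-pair preference losses reweighted by a KL-regularized DRO inner
    solution: hard pairs get exponentially larger weight (a generic DRO
    sketch, not necessarily the paper's loss).

    margins: reward(chosen) - reward(rejected) for each preference pair.
    tau: temperature; smaller values focus harder on the worst pairs."""
    margins = np.asarray(margins, dtype=float)
    losses = np.log1p(np.exp(-margins))   # -log sigmoid(margin), per pair
    weights = np.exp(losses / tau)
    weights /= weights.sum()              # adversarial reweighting
    return float(np.dot(weights, losses))
```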
- Online Sensitivity Optimization in Differentially Private Learning [8.12606646175019]
We present a novel approach to dynamically optimize the clipping threshold.
We treat this threshold as an additional learnable parameter, establishing a clean relationship between the threshold and the cost function.
Our method is thoroughly assessed against alternative fixed and adaptive strategies across diverse datasets, tasks, model dimensions, and privacy levels.
arXiv Detail & Related papers (2023-10-02T00:30:49Z)
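As a concrete point of comparison, here is a sketch of a dynamic clipping threshold in the spirit the summary describes; the geometric, quantile-tracking update below follows Andrew et al.'s adaptive clipping and stands in for this paper's learnable-parameter formulation.

```python
import numpy as np

def update_clip_threshold(C, grad_norms, target_quantile=0.5,
                          lr_c=0.2, count_noise_std=1.0):
    """Dynamically adjust the clipping threshold C from the (privatized)
    fraction of examples whose gradient norm falls below C. This geometric
    quantile-tracking update follows Andrew et al.'s adaptive clipping,
    used here as a stand-in for the paper's learnable-threshold method."""
    grad_norms = np.asarray(grad_norms)
    below = np.mean(grad_norms <= C)
    # Privatize the count before using it to adapt C.
    noisy_below = below + np.random.normal(0.0, count_noise_std) / len(grad_norms)
    # Shrink C when more than the target fraction is unclipped, grow otherwise.
    return C * np.exp(-lr_c * (noisy_below - target_quantile))
```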
- DP-HyPO: An Adaptive Private Hyperparameter Optimization Framework [31.628466186344582]
We introduce DP-HyPO, a pioneering framework for "adaptive" private hyperparameter optimization.
We provide a comprehensive differential privacy analysis of our framework.
We empirically demonstrate the effectiveness of DP-HyPO on a diverse set of real-world datasets.
arXiv Detail & Related papers (2023-06-09T07:55:46Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Differentially Private Adaptive Optimization with Delayed Preconditioners [44.190582378775694]
We explore techniques to estimate and adapt to gradient geometry in private training without auxiliary data.
Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP2).
Empirically, we explore DP2, demonstrating that it can improve convergence speed by as much as 4x relative to non-adaptive baselines.
arXiv Detail & Related papers (2022-12-01T06:59:30Z)
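A toy version of DP2's delayed-preconditioner idea: refresh an RMSProp-style preconditioner only every `refresh_every` steps, from noisy gradients accumulated in the meantime, and reuse the stale copy in between. The specific accumulation and refresh rule here are assumptions, not DP2's exact update.

```python
import numpy as np

def dp2_style_training(grad_fn, params, steps, refresh_every=100,
                       lr=0.1, clip_norm=1.0, noise_mult=1.0, eps=1e-3):
    """DP training with a delayed preconditioner: second-moment estimates
    are rebuilt only every `refresh_every` steps from accumulated noisy
    gradients and reused (stale) in between (a toy sketch of DP2's idea)."""
    precond = np.ones_like(params)
    acc = np.zeros_like(params)
    for t in range(1, steps + 1):
        g = grad_fn(params)
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        g_noisy = g + np.random.normal(0.0, noise_mult * clip_norm,
                                       size=params.shape)
        acc += g_noisy ** 2
        params = params - lr * g_noisy / precond  # stale preconditioner
        if t % refresh_every == 0:
            precond = np.sqrt(acc / refresh_every) + eps
            acc = np.zeros_like(params)
    return params
```

Staleness is the point: because adaptive methods tolerate an out-of-date preconditioner, the geometry estimate can be decoupled from the per-step private update.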
- Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z)