On Dynamic Noise Influence in Differentially Private Learning
- URL: http://arxiv.org/abs/2101.07413v1
- Date: Tue, 19 Jan 2021 02:04:00 GMT
- Title: On Dynamic Noise Influence in Differentially Private Learning
- Authors: Junyuan Hong and Zhangyang Wang and Jiayu Zhou
- Abstract summary: Private Gradient Descent (PGD) is a commonly used private learning framework, which noises gradients based on the Differential Privacy protocol.
Recent studies show that dynamic privacy schedules can improve loss at the final iteration, yet theoretical understandings of the effectiveness of such schedules remain limited.
This paper provides a comprehensive analysis of noise influence in dynamic privacy schedules to answer these critical questions.
- Score: 102.6791870228147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Protecting privacy in learning while maintaining the model performance has
become increasingly critical in many applications that involve sensitive data.
Private Gradient Descent (PGD) is a commonly used private learning framework,
which noises gradients based on the Differential Privacy protocol. Recent
studies show that dynamic privacy schedules of decreasing noise
magnitudes can improve loss at the final iteration, and yet theoretical
understandings of the effectiveness of such schedules and their connections to
optimization algorithms remain limited. In this paper, we provide a comprehensive
analysis of noise influence in dynamic privacy schedules to answer these
critical questions. We first present a dynamic noise schedule minimizing the
utility upper bound of PGD, and show how the noise influence from each
optimization step collectively impacts the utility of the final model. Our study
also reveals how the impact of dynamic noise influence changes when momentum is
used. We empirically show that the connection exists for general non-convex
losses, and that the influence is greatly affected by the loss curvature.
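To make the PGD setup concrete, here is a minimal sketch of private gradient descent with a geometrically decreasing noise schedule. The clipping threshold, decay law, and toy loss are illustrative assumptions; the paper derives its schedule by minimizing a utility upper bound, and the privacy accounting across the T steps is omitted.

```python
import numpy as np

def clip(g, c):
    """Clip a gradient to L2 norm at most c to bound per-step sensitivity."""
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)

def pgd(grad_fn, w0, T=100, lr=0.1, clip_c=1.0, sigma0=1.0, decay=0.98):
    """Private Gradient Descent with a decreasing noise schedule.

    sigma_t = sigma0 * decay**t is one illustrative dynamic schedule; the
    total privacy cost of the T noised steps still has to be accounted for
    (e.g., with a moments accountant), which this sketch omits.
    """
    w = np.array(w0, dtype=float)
    for t in range(T):
        sigma_t = sigma0 * decay ** t           # noise magnitude shrinks over time
        g = clip(grad_fn(w), clip_c)
        g_noisy = g + np.random.normal(0.0, sigma_t * clip_c, size=g.shape)
        w -= lr * g_noisy                       # ordinary descent on the noised gradient
    return w

# Toy usage: privately minimize the quadratic loss ||w - 3||^2 / 2.
w_hat = pgd(lambda w: w - 3.0, w0=np.zeros(2))
```

Decreasing schedules spend more of the noise budget early, when iterates are far from the optimum and tolerate noisier directions, and less near convergence, which is the intuition behind their better final-iteration loss.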
Related papers
- Towards Robust Transcription: Exploring Noise Injection Strategies for Training Data Augmentation [55.752737615873464]
This study investigates the impact of white noise at various Signal-to-Noise Ratio (SNR) levels on state-of-the-art APT models.
We hope this research provides valuable insights as preliminary work toward developing transcription models that maintain consistent performance across a range of acoustic conditions.
arXiv Detail & Related papers (2024-10-18T02:31:36Z)
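For the noise-injection study above, the following is a minimal sketch of SNR-controlled white-noise augmentation; the helper and the toy waveform are illustrative, not taken from the paper.

```python
import numpy as np

def add_white_noise(signal, snr_db):
    """Add white Gaussian noise so the mixture has the requested SNR in dB."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))   # SNR = P_signal / P_noise
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Augment a toy waveform at progressively harsher SNR levels.
x = np.sin(np.linspace(0, 2 * np.pi, 16000))
augmented = {snr: add_white_noise(x, snr) for snr in (20, 10, 0)}
```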
- Fine-Tuning Language Models with Differential Privacy through Adaptive Noise Allocation [33.795122935686706]
We propose ANADP, a novel algorithm that adaptively allocates additive noise based on the importance of model parameters.
We demonstrate that ANADP narrows the performance gap between regular fine-tuning and traditional DP fine-tuning on a series of datasets.
arXiv Detail & Related papers (2024-10-03T19:02:50Z)
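The ANADP summary above does not specify how parameter importance is measured, so this sketch uses mean gradient magnitude purely as a stand-in; `adaptive_noise` and its normalization are hypothetical, and the privacy accounting ANADP would perform is omitted.

```python
import numpy as np

def adaptive_noise(grads, sigma=1.0, eps=1e-8):
    """Allocate per-parameter noise inversely to an importance proxy.

    Hypothetical stand-in for adaptive allocation: parameters that look
    important (large mean gradient magnitude) receive less noise, with the
    weights normalized so the average noise scale stays at sigma.
    """
    importance = np.abs(grads).mean(axis=0) + eps      # proxy, not ANADP's measure
    weights = (1.0 / importance) / np.mean(1.0 / importance)
    return grads + np.random.normal(0.0, 1.0, grads.shape) * (sigma * weights)

noised = adaptive_noise(np.random.randn(32, 10))   # a batch of 32 gradient vectors
```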
- Differentially Private Online Federated Learning with Correlated Noise [8.349938538355772]
We introduce a novel differentially private algorithm for online federated learning that employs temporally correlated noise to enhance utility.
We demonstrate how the drift errors from local updates can be effectively managed under a quasi-strong convexity condition.
arXiv Detail & Related papers (2024-03-25T08:35:19Z)
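A minimal sketch of temporally correlated perturbations for the entry above; the AR(1) recursion is an illustrative correlation structure, not the paper's construction.

```python
import numpy as np

def correlated_noise_stream(T, dim, sigma=1.0, rho=0.9, seed=0):
    """Yield T correlated Gaussian noise vectors via an AR(1) recursion:
    n_t = rho * n_{t-1} + sqrt(1 - rho**2) * sigma * e_t, so the marginal
    standard deviation stays ~sigma while successive draws are correlated."""
    rng = np.random.default_rng(seed)
    n = np.zeros(dim)
    out = []
    for _ in range(T):
        n = rho * n + np.sqrt(1.0 - rho ** 2) * sigma * rng.standard_normal(dim)
        out.append(n.copy())
    return np.stack(out)

noise = correlated_noise_stream(T=100, dim=5)   # one row of noise per round
```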
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
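The summary above only says NMTune "affines the feature space", so the sketch below shows the generic pattern: a learnable affine map on frozen pre-trained features, with NMTune's actual objectives omitted. The class name and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AffineHead(nn.Module):
    """Learnable affine map on frozen backbone features; a hypothetical
    reading of 'affining the feature space', not NMTune's exact recipe."""

    def __init__(self, dim, num_classes):
        super().__init__()
        self.affine = nn.Linear(dim, dim)          # W x + b on the features
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats):                      # feats: frozen backbone output
        return self.classifier(self.affine(feats))

head = AffineHead(dim=768, num_classes=10)
logits = head(torch.randn(4, 768))                 # only the head is trained
```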
- The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning [0.0]
This study investigates the privacy, generalization, and stability of deep learning models in the presence of additive noise.
We use Signal-to-Noise Ratio (SNR) as a measure of the trade-off between privacy and training accuracy of noise-infused models.
By leveraging noise as a tool for regularization and privacy enhancement, we aim to contribute to the development of robust, privacy-aware algorithms.
arXiv Detail & Related papers (2023-11-09T23:36:18Z)
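One plausible reading of the SNR measure mentioned above, computed for noise-infused model updates: a larger noise scale lowers the SNR, trading training accuracy for privacy. The update and noise scales are toy values.

```python
import numpy as np

def snr_db(signal, noise):
    """Empirical signal-to-noise ratio in dB: 10 * log10(P_signal / P_noise)."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

update = np.random.randn(1000)                 # a toy model update
for sigma in (0.1, 0.5, 1.0):                  # stronger noise -> lower SNR
    noise = np.random.normal(0.0, sigma, update.shape)
    print(f"sigma={sigma}: SNR = {snr_db(update, noise):.1f} dB")
```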
- Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks [91.15120211190519]
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
We propose a light-weight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise.
arXiv Detail & Related papers (2023-09-29T06:18:15Z)
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the series to prevent FL from premature convergence resulting from excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated in comparison with the state-of-the-art Gaussian noise mechanism, which uses a persistent noise amplitude.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
- Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy [17.81999485513265]
We study gradient descent under linearly correlated noise.
We use our results to develop new, effective matrix factorizations for differentially private optimization.
arXiv Detail & Related papers (2023-02-02T23:32:24Z)
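To illustrate the linearly correlated noise idea from the entry above: the perturbation seen at step t is a fixed linear mixture B[t] @ z of one iid Gaussian draw, so noise is correlated across iterations. The lower-triangular averaging matrix below is a toy stand-in for the optimized matrix factorizations the paper develops.

```python
import numpy as np

def linearly_correlated_noise(T, dim, sigma=1.0, seed=0):
    """Noise at step t is B[t] @ z for iid z, i.e. linearly correlated across
    the T iterations; DP matrix-factorization mechanisms choose B (and a
    matching factor) carefully, whereas this B is a simple running average."""
    rng = np.random.default_rng(seed)
    z = sigma * rng.standard_normal((T, dim))                      # iid base noise
    B = np.tril(np.ones((T, T))) / np.arange(1, T + 1)[:, None]    # toy mixing matrix
    return B @ z                                                   # (T, dim) noise

noise = linearly_correlated_noise(T=50, dim=3)
```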
- Inference and Denoise: Causal Inference-based Neural Speech Enhancement [83.4641575757706]
This study addresses the speech enhancement (SE) task within the causal inference paradigm by modeling the noise presence as an intervention.
The proposed causal inference-based speech enhancement (CISE) separates clean and noisy frames in an intervened noisy speech using a noise detector and assigns both sets of frames to two mask-based enhancement modules (EMs) to perform noise-conditional SE.
arXiv Detail & Related papers (2022-11-02T15:03:50Z)
- Action Noise in Off-Policy Deep Reinforcement Learning: Impact on Exploration and Performance [5.573543601558405]
We analyze how the learned policy is impacted by the noise type, the noise scale, and the schedule for reducing the scaling factor.
We consider the two most prominent types of action noise, Gaussian and Ornstein-Uhlenbeck noise, and perform a vast experimental campaign.
We conclude that the best noise type and scale are environment dependent, and based on our observations we derive rules for guiding the choice of the action noise.
arXiv Detail & Related papers (2022-06-08T10:06:24Z)
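For the action-noise study above, a minimal sketch of Ornstein-Uhlenbeck action noise with a scale argument that a reduction schedule can anneal; the hyperparameters are common illustrative defaults, not the paper's settings.

```python
import numpy as np

class OrnsteinUhlenbeckNoise:
    """Temporally correlated action noise:
    x <- x + theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1).
    Gaussian noise is the uncorrelated alternative the study compares against."""

    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.x = np.full(dim, mu, dtype=float)

    def sample(self, scale=1.0):
        """`scale` can follow a reduction schedule to anneal exploration."""
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.x.shape))
        self.x += dx
        return scale * self.x

noise = OrnsteinUhlenbeckNoise(dim=2)
action = np.zeros(2) + noise.sample(scale=0.5)     # perturb a deterministic action
```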