From Noisy Fixed-Point Iterations to Private ADMM for Centralized and
Federated Learning
- URL: http://arxiv.org/abs/2302.12559v3
- Date: Wed, 12 Jul 2023 16:09:56 GMT
- Title: From Noisy Fixed-Point Iterations to Private ADMM for Centralized and
Federated Learning
- Authors: Edwige Cyffers, Aurélien Bellet, Debabrota Basu
- Abstract summary: We study differentially private (DP) machine learning algorithms as instances of noisy fixed-point iterations.
We establish strong privacy guarantees leveraging privacy amplification by iteration and by subsampling.
We provide utility guarantees using a unified analysis that exploits a recent linear convergence result for noisy fixed-point iterations.
- Score: 4.202534541804858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study differentially private (DP) machine learning algorithms as instances
of noisy fixed-point iterations, in order to derive privacy and utility results
from this well-studied framework. We show that this new perspective recovers
popular private gradient-based methods like DP-SGD and provides a principled
way to design and analyze new private optimization algorithms in a flexible
manner. Focusing on the widely-used Alternating Direction Method of
Multipliers (ADMM), we use our general framework to derive novel private
ADMM algorithms for centralized, federated and fully decentralized learning.
For these three algorithms, we establish strong privacy guarantees leveraging
privacy amplification by iteration and by subsampling. Finally, we provide
utility guarantees using a unified analysis that exploits a recent linear
convergence result for noisy fixed-point iterations.
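The abstract's central observation is that DP-SGD is a noisy fixed-point iteration: the iterate is updated by a contractive operator (a clipped gradient step) plus calibrated Gaussian noise. The snippet below is a minimal sketch of that view on a toy quadratic objective; the step size, clipping threshold, and noise scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(x, grad_fn, step_size, clip_norm, noise_std):
    """One noisy fixed-point update x <- T(x) + noise, where T is the
    clipped gradient-descent operator."""
    g = grad_fn(x)
    # Clip the gradient to bound the sensitivity of each update.
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    # Gaussian noise calibrated to clip_norm yields the DP guarantee.
    noise = rng.normal(0.0, noise_std, size=x.shape)
    return x - step_size * (g + noise)

# Toy strongly convex objective f(x) = 0.5 * ||x - target||^2.
target = np.array([1.0, -2.0])
grad = lambda x: x - target

x = np.zeros(2)
for _ in range(500):
    x = dp_sgd_step(x, grad, step_size=0.1, clip_norm=1.0, noise_std=0.05)
```

Because the operator is contractive, the iterates settle into a noise-dominated neighborhood of the optimum, which is exactly the regime the paper's linear convergence result for noisy fixed-point iterations describes.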
Related papers
- Federated Cubic Regularized Newton Learning with Sparsification-amplified Differential Privacy [10.396575601912673]
We introduce a federated learning algorithm called Differentially Private Federated Cubic Regularized Newton (DP-FCRN).
By leveraging second-order techniques, our algorithm achieves lower iteration complexity compared to first-order methods.
We also incorporate noise perturbation during local computations to ensure privacy.
arXiv Detail & Related papers (2024-08-08T08:48:54Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Dynamic Privacy Allocation for Locally Differentially Private Federated Learning with Composite Objectives [10.528569272279999]
This paper proposes a differentially private federated learning algorithm for strongly convex but possibly nonsmooth problems.
The proposed algorithm adds artificial noise to the shared information to ensure privacy and dynamically allocates the time-varying noise variance to minimize an upper bound of the optimization error.
Numerical results show the superiority of the proposed algorithm over state-of-the-art methods.
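The entry above perturbs shared information with a time-varying noise variance. The sketch below illustrates the general shape of that idea; the decreasing schedule is a hypothetical stand-in, not the paper's bound-minimizing allocation.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_update(x, grad, step_size, noise_std):
    # Perturb the shared gradient before it leaves the client.
    return x - step_size * (grad + rng.normal(0.0, noise_std, size=x.shape))

def noise_schedule(t, base_std=0.5, decay=0.05):
    # Hypothetical decreasing schedule: later rounds receive less noise,
    # standing in for the paper's optimized time-varying variance.
    return base_std / (1.0 + decay * t)

x = np.zeros(2)
target = np.array([1.0, -1.0])
for t in range(100):
    grad = x - target  # toy quadratic objective
    x = noisy_update(x, grad, step_size=0.1, noise_std=noise_schedule(t))
```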
arXiv Detail & Related papers (2023-08-02T13:30:33Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It achieves a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Multi-Message Shuffled Privacy in Federated Learning [2.6778110563115542]
We study differentially private distributed optimization under communication constraints.
A server running SGD aggregates the client-side local gradients for model updates using distributed mean estimation (DME).
We develop a communication-efficient private DME, using the recently developed multi-message shuffled (MMS) privacy framework.
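The core DME step can be sketched as follows: each client clips and noises its vector locally, and the server only averages. This is a simplified illustration under local Gaussian noise; the paper's multi-message shuffled (MMS) framework additionally shuffles messages for privacy amplification and addresses communication efficiency, neither of which is modeled here.

```python
import numpy as np

rng = np.random.default_rng(3)

def private_mean(client_vectors, clip_norm, noise_std):
    """Simplified private distributed mean estimation: clients clip and
    noise locally; the server sees only the perturbed vectors."""
    noisy = []
    for v in client_vectors:
        v = v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12))
        noisy.append(v + rng.normal(0.0, noise_std, size=v.shape))
    return np.mean(noisy, axis=0)

# With many clients, the per-client noise averages out.
est = private_mean([np.array([1.0, 0.0])] * 100, clip_norm=2.0, noise_std=0.1)
```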
arXiv Detail & Related papers (2023-02-22T05:23:52Z)
- Differentially Private Stochastic Gradient Descent with Low-Noise [49.981789906200035]
Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection.
This paper addresses the practical and theoretical importance of developing privacy-preserving machine learning algorithms that ensure good performance while preserving privacy.
arXiv Detail & Related papers (2022-09-09T08:54:13Z)
- Differentially Private Learning with Margin Guarantees [48.83724068578305]
We present a series of new differentially private (DP) algorithms with dimension-independent margin guarantees.
For the family of linear hypotheses, we give a pure DP learning algorithm that benefits from relative deviation margin guarantees.
We also present a new efficient DP learning algorithm with margin guarantees for kernel-based hypotheses.
arXiv Detail & Related papers (2022-04-21T19:12:06Z)
- Adaptive Differentially Private Empirical Risk Minimization [95.04948014513226]
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization.
We prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added.
arXiv Detail & Related papers (2021-10-14T15:02:20Z)
- Local Differential Privacy for Bayesian Optimization [12.05395706770007]
We consider a black-box optimization in the nonparametric Gaussian process setting with local differential privacy (LDP) guarantee.
Specifically, the rewards from each user are further corrupted to protect privacy and the learner only has access to the corrupted rewards to minimize the regret.
We present three almost optimal algorithms based on the GP-UCB framework and Laplace DP mechanism.
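The reward-corruption step above can be illustrated with the standard Laplace mechanism; the `reward_range` sensitivity parameter below is an assumption for the sketch, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def ldp_laplace(reward, epsilon, reward_range=1.0):
    # Laplace noise with scale sensitivity/epsilon gives epsilon-LDP;
    # the learner only ever observes the corrupted reward.
    return reward + rng.laplace(0.0, reward_range / epsilon)

# The mechanism is unbiased, so averaging many corrupted rewards
# still estimates the true mean reward.
samples = [ldp_laplace(0.5, epsilon=1.0) for _ in range(10000)]
```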
arXiv Detail & Related papers (2020-10-13T21:50:09Z)
- Tighter Generalization Bounds for Iterative Differentially Private Learning Algorithms [95.73230376153872]
This paper studies the relationship between generalization and privacy preservation in iterative learning algorithms via two sequential steps.
We prove that $(\varepsilon, \delta)$-differential privacy implies an on-average generalization bound for multi-database learning algorithms.
We then investigate how the iterative nature shared by most learning algorithms influences privacy preservation and, in turn, generalization.
arXiv Detail & Related papers (2020-07-18T09:12:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.