Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems
- URL: http://arxiv.org/abs/2407.12396v1
- Date: Wed, 17 Jul 2024 08:19:58 GMT
- Title: Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems
- Authors: Roie Reshef, Kfir Y. Levy
- Abstract summary: This paper addresses the challenge of preserving privacy in Federated Learning (FL) within centralized systems.
We devise methods that ensure Differential Privacy (DP) while maintaining optimal convergence rates for homogeneous and heterogeneous data distributions.
- Score: 8.419845742978985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the challenge of preserving privacy in Federated Learning (FL) within centralized systems, focusing on both trusted and untrusted server scenarios. We analyze this setting within the Stochastic Convex Optimization (SCO) framework, and devise methods that ensure Differential Privacy (DP) while maintaining optimal convergence rates for homogeneous and heterogeneous data distributions. Our approach, based on a recent stochastic optimization technique, offers linear computational complexity, comparable to non-private FL methods, and reduced gradient obfuscation. This work enhances the practicality of DP in FL, balancing privacy, efficiency, and robustness across a variety of server trust environments.
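In the trusted-server setting the abstract describes, DP is typically obtained by having the server clip each client's update, average, and perturb the result with Gaussian noise. The sketch below shows only that standard pattern, not the authors' algorithm; the names `dp_federated_round`, `clip_norm`, and `noise_mult` are illustrative.

```python
import numpy as np

def dp_federated_round(client_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One trusted-server aggregation round via the Gaussian mechanism.

    Illustrative sketch only: clip each client's gradient to bound the
    per-client sensitivity, average, then add noise scaled to that bound.
    """
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in client_grads]
    avg = np.mean(clipped, axis=0)
    # One client changes the average by O(clip_norm / n), so the noise
    # standard deviation is calibrated to that sensitivity.
    sigma = noise_mult * clip_norm / len(client_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

Linear computational complexity, as claimed in the abstract, is plausible under this pattern since each round touches every gradient only once for clipping and averaging.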
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The federated recommender system addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
Experiments demonstrate RFRecF's superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
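For intuition, here is a minimal sketch of the kind of unbiased binary quantizer such mechanisms build on. It deliberately omits the correlated randomness across clients that gives CorBin-FL its privacy guarantees; `binary_quantize` and `bound` are illustrative names.

```python
import numpy as np

def binary_quantize(x, bound=1.0, rng=None):
    """Unbiased per-coordinate binary quantization (simplified, uncorrelated).

    Each coordinate of x (assumed in [-bound, bound]) is mapped to
    +/-bound with probabilities chosen so the output is unbiased:
    E[output] = x.
    """
    rng = rng or np.random.default_rng()
    x = np.clip(x, -bound, bound)
    p_plus = (x + bound) / (2 * bound)   # P(output = +bound)
    signs = np.where(rng.random(x.shape) < p_plus, 1.0, -1.0)
    return bound * signs
```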
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - Balancing Privacy and Performance for Private Federated Learning Algorithms [4.681076651230371]
Federated learning (FL) is a distributed machine learning framework where multiple clients collaborate to train a model without exposing their private data.
FL algorithms frequently employ a differential privacy mechanism that introduces noise into each client's model updates before sharing.
We show that an optimal balance exists between the number of local steps and communication rounds, one that maximizes the convergence performance within a given privacy budget.
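The tension behind this balance comes from privacy composition: every communication round spends part of the budget, so more rounds mean more noise per update. A toy accounting sketch under naive even splitting of the budget (real analyses use tighter RDP/moments accounting; `per_round_sigma` and its parameters are illustrative):

```python
import math

def per_round_sigma(eps_total, delta, rounds, clip_norm=1.0):
    """Gaussian-mechanism noise per round under naive composition.

    Splitting eps_total evenly across rounds shows the trend: more
    rounds leave a smaller per-round budget and force larger noise.
    """
    eps_round = eps_total / rounds
    return clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / eps_round

# More rounds => smaller per-round budget => noisier updates.
for r in (10, 100, 1000):
    print(r, per_round_sigma(eps_total=8.0, delta=1e-5, rounds=r))
```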
arXiv Detail & Related papers (2023-04-11T10:42:11Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Quantization enabled Privacy Protection in Decentralized Stochastic Optimization [34.24521534464185]
Decentralized optimization can be used in areas as diverse as machine learning, control, and sensor networks.
Privacy protection has emerged as a crucial need in the implementation of decentralized optimization.
We propose an algorithm that is able to guarantee provable convergence accuracy even in the presence of aggressive quantization errors.
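A common route to provable convergence under quantization is to make the quantizer unbiased with bounded variance, so its error behaves like extra stochastic gradient noise. A generic sketch of unbiased dithered quantization, not the paper's exact scheme (`stochastic_quantize` and `step` are illustrative names):

```python
import numpy as np

def stochastic_quantize(x, step=0.5, rng=None):
    """Unbiased dithered quantization to a grid of spacing `step`.

    Rounding up or down with probabilities proportional to the distance
    to each grid point makes the quantizer unbiased (E[output] = x), so
    SGD-style analyses tolerate the added bounded variance.
    """
    rng = rng or np.random.default_rng()
    scaled = x / step
    low = np.floor(scaled)
    p_up = scaled - low                 # probability of rounding up
    q = low + (rng.random(x.shape) < p_up)
    return step * q
```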
arXiv Detail & Related papers (2022-08-07T15:17:23Z) - Accelerated Federated Learning with Decoupled Adaptive Optimization [53.230515878096426]
The federated learning (FL) framework enables clients to collaboratively learn a shared model while keeping the privacy of training data on clients.
Recently, many efforts have been made to generalize centralized adaptive optimization methods, such as SGDM, Adam, and AdaGrad, to federated settings.
This work aims to develop novel adaptive optimization methods for FL from the perspective of the dynamics of ordinary differential equations (ODEs).
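For orientation, the generic adaptive-server baseline such works generalize applies an Adam-style update to the averaged client delta (a FedAdam-style pattern). The sketch below is illustrative only, not the paper's ODE-derived method; `ServerAdam` is a hypothetical name.

```python
import numpy as np

class ServerAdam:
    """Server-side Adam on averaged client updates (FedAdam-style sketch)."""

    def __init__(self, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = self.v = None
        self.t = 0

    def step(self, weights, client_deltas):
        # Averaged delta acts as a "pseudo-gradient" pointing toward
        # improvement, hence the + sign in the final update.
        delta = np.mean(client_deltas, axis=0)
        if self.m is None:
            self.m = np.zeros_like(delta)
            self.v = np.zeros_like(delta)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * delta
        self.v = self.b2 * self.v + (1 - self.b2) * delta ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return weights + self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```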
arXiv Detail & Related papers (2022-07-14T22:46:43Z) - Bring Your Own Algorithm for Optimal Differentially Private Stochastic Minimax Optimization [44.52870407321633]
The holy grail of these settings is to guarantee the optimal trade-off between privacy and the excess population loss.
We provide a general framework for solving differentially private minimax optimization (DP-SMO) problems.
Our framework is inspired by the recently proposed Phased-ERM method [20] for nonsmooth differentially private stochastic convex optimization (DP-SCO).
arXiv Detail & Related papers (2022-06-01T10:03:20Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into federated Thompson sampling (FTS) through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)