Private Non-Convex Federated Learning Without a Trusted Server
- URL: http://arxiv.org/abs/2203.06735v3
- Date: Sun, 25 Jun 2023 17:49:27 GMT
- Title: Private Non-Convex Federated Learning Without a Trusted Server
- Authors: Andrew Lowy, Ali Ghafelebashi, Meisam Razaviyayn
- Abstract summary: We propose novel algorithms for cross-silo learning (FL) with non-trusted loss functions and data from people who do not trust other silos.
Our algorithms nearly attain the optimal strongly convex, homogeneous (i.i.d.) rate for ISRL-DP FL without assuming convexity or i.i.d. data.
Numerical experiments show that our algorithm has better accuracy than baselines for most privacy levels.
- Score: 7.971065005161566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study federated learning (FL) -- especially cross-silo FL -- with
non-convex loss functions and data from people who do not trust the server or
other silos. In this setting, each silo (e.g. hospital) must protect the
privacy of each person's data (e.g. patient's medical record), even if the
server or other silos act as adversarial eavesdroppers. To that end, we
consider inter-silo record-level (ISRL) differential privacy (DP), which
requires silo $i$'s communications to satisfy record/item-level DP. We propose
novel ISRL-DP algorithms for FL with heterogeneous (non-i.i.d.) silo data and
two classes of Lipschitz continuous loss functions: First, we consider losses
satisfying the Proximal Polyak-Lojasiewicz (PL) inequality, which is an
extension of the classical PL condition to the constrained setting. In contrast
to our result, prior works only considered unconstrained private optimization
with Lipschitz PL loss, which rules out most interesting PL losses such as
strongly convex problems and linear/logistic regression. Our algorithms nearly
attain the optimal strongly convex, homogeneous (i.i.d.) rate for ISRL-DP FL
without assuming convexity or i.i.d. data. Second, we give the first private
algorithms for non-convex non-smooth loss functions. Our utility bounds even
improve on the state-of-the-art bounds for smooth losses. We complement our
upper bounds with lower bounds. Additionally, we provide shuffle DP (SDP)
algorithms that improve over the state-of-the-art central DP algorithms under
more practical trust assumptions. Numerical experiments show that our algorithm
has better accuracy than baselines for most privacy levels. All code is
publicly available at:
https://github.com/ghafeleb/Private-NonConvex-Federated-Learning-Without-a-Trusted-Server.
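The authors' implementation is in the repository above. As a purely illustrative sketch of the ISRL-DP requirement (not their algorithm), the snippet below runs one round of federated training on a squared loss in which each silo clips per-record gradients, averages them, and adds Gaussian noise locally, so the message leaving the silo already satisfies record-level DP even if the server or other silos eavesdrop. The loss, clipping threshold, and noise scale are assumptions for illustration; a real implementation would calibrate the noise to a target $(\epsilon, \delta)$ with a DP accountant.

```python
import numpy as np

def isrl_dp_round(silos, w, lr=0.1, clip=1.0, noise_multiplier=1.0, rng=None):
    """One illustrative round of ISRL-DP federated SGD (squared loss).

    Each silo clips per-record gradients, averages them, and adds Gaussian
    noise *before* communicating, so the update it sends out satisfies
    record-level DP without trusting the server or other silos.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    updates = []
    for X, y in silos:                              # each silo holds its own (X, y)
        grads = []
        for xi, yi in zip(X, y):                    # per-record gradient of (x.w - y)^2
            g = 2.0 * (xi @ w - yi) * xi
            g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to norm <= clip
            grads.append(g)
        g_avg = np.mean(grads, axis=0)
        sigma = noise_multiplier * clip / len(X)    # illustrative scale; use a DP accountant in practice
        noisy = g_avg + rng.normal(0.0, sigma, size=g_avg.shape)
        updates.append(noisy)                       # only the noisy update leaves the silo
    return w - lr * np.mean(updates, axis=0)        # untrusted server just averages
```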
Related papers
- Faster Algorithms for User-Level Private Stochastic Convex Optimization [16.59551503680919]
We study private stochastic convex optimization (SCO) under user-level differential privacy constraints.
Existing algorithms for user-level DP SCO are impractical in many large-scale machine learning scenarios.
We provide novel user-level DP algorithms with state-of-the-art excess risk and runtime guarantees.
arXiv Detail & Related papers (2024-10-24T03:02:33Z) - Private Heterogeneous Federated Learning Without a Trusted Server Revisited: Error-Optimal and Communication-Efficient Algorithms for Convex Losses [12.620782629498812]
Inter-Silo Record-Level Differential Privacy (ISRL-DP) prevents each silo's data from being leaked.
We provide novel ISRL-DP FL algorithms that achieve the optimal excess risk bounds in the presence of heterogeneous silo data.
arXiv Detail & Related papers (2024-07-12T21:20:44Z) - Efficient Sparse Least Absolute Deviation Regression with Differential
Privacy [10.414082115202003]
We develop a fast privacy-preserving learning solution for a sparse robust regression problem.
Our algorithm achieves fast estimation by reformulating the sparse LAD problem as a penalized least squares problem.
We show that our algorithm can achieve better privacy and statistical accuracy trade-off compared with the state-of-the-art privacy-preserving regression algorithms.
arXiv Detail & Related papers (2024-01-02T17:13:34Z) - Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage [100.8180383245813]
We propose value-based algorithms for offline reinforcement learning (RL).
We show an analogous result for vanilla Q-functions under a soft margin condition.
Our algorithms' loss functions arise from casting the estimation problems as nonlinear convex optimization problems and Lagrangifying.
arXiv Detail & Related papers (2023-02-05T14:22:41Z) - Selective Network Linearization for Efficient Private Inference [49.937470642033155]
We propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
The results demonstrate up to $4.25\%$ more accuracy (iso-ReLU count at 50K) or $2.2\times$ less latency (iso-accuracy at 70%) than the current state of the art. (One possible parameterization of the selective-linearization idea is sketched below.)
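As a rough illustration only (the parameterization below is an assumption, not the paper's exact method), one can attach a learnable gate to each channel's ReLU and penalize the gates so that many collapse to the identity map:

```python
import torch
import torch.nn as nn

class GatedReLU(nn.Module):
    """ReLU with a learnable per-channel gate (illustrative, assumed parameterization).

    Output is g * relu(x) + (1 - g) * x with g in [0, 1]. An L1-style penalty
    on the gates pushes many toward 0, i.e. toward the identity map, so those
    ReLUs can be dropped (linearized) for cheaper private inference.
    Assumes 4D activations of shape (N, C, H, W).
    """
    def __init__(self, num_channels):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):
        g = torch.sigmoid(self.logits).view(1, -1, 1, 1)   # per-channel gate in [0, 1]
        return g * torch.relu(x) + (1.0 - g) * x

def gate_penalty(model, lam=1e-3):
    """Penalty added to the task loss to encourage few remaining ReLUs."""
    return lam * sum(torch.sigmoid(m.logits).sum()
                     for m in model.modules() if isinstance(m, GatedReLU))
```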
arXiv Detail & Related papers (2022-02-04T19:00:24Z) - Locally Differentially Private Reinforcement Learning for Linear Mixture
Markov Decision Processes [78.27542864367821]
Reinforcement learning (RL) algorithms can be used to provide personalized services, which rely on users' private and sensitive data.
To protect the users' privacy, privacy-preserving RL algorithms are in demand.
We propose a novel $(\varepsilon, \delta)$-LDP algorithm for learning a class of Markov decision processes (MDPs) dubbed linear mixture MDPs.
arXiv Detail & Related papers (2021-10-19T17:44:09Z) - Private Federated Learning Without a Trusted Server: Optimal Algorithms
for Convex Losses [9.416757363901295]
This work studies federated learning under Inter-Silo Record-Level Differential Privacy (ISRL-DP) and provides optimal algorithms for convex loss functions.
arXiv Detail & Related papers (2021-06-17T19:41:23Z) - Output Perturbation for Differentially Private Convex Optimization with
Improved Population Loss Bounds, Runtimes and Applications to Private
Adversarial Training [12.386462516398469]
Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning.
We provide the tightest known $(\epsilon, 0)$-DP population loss bounds and fastest runtimes in the presence of smoothness and strong convexity.
We apply our theory to two learning frameworks: tilted ERM and adversarial learning.
arXiv Detail & Related papers (2021-02-09T08:47:06Z) - Is Pessimism Provably Efficient for Offline RL? [104.00628430454479]
We study offline reinforcement learning (RL), which aims to learn an optimal policy based on a dataset collected a priori.
We propose a pessimistic variant of the value iteration algorithm (PEVI), which incorporates an uncertainty quantifier as the penalty function.
arXiv Detail & Related papers (2020-12-30T09:06:57Z) - SGD with shuffling: optimal rates without component convexity and large
epoch requirements [60.65928290219793]
We consider RandomShuffle (shuffle at the beginning of each epoch) and SingleShuffle (shuffle only once); both schemes are sketched below.
We establish minimax optimal convergence rates of these algorithms up to poly-log factor gaps.
We further sharpen the convergence results for RandomShuffle by removing drawbacks common to all prior work.
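For concreteness, a generic finite-sum SGD loop showing the two shuffling schemes (a sketch, not the paper's analysis; `grad_fn` is a placeholder for the i-th component gradient):

```python
import numpy as np

def shuffled_sgd(grad_fn, w, n, epochs, lr, mode="random", rng=None):
    """SGD over a finite sum of n components, processed without replacement.

    mode="random": reshuffle the visiting order at the start of every epoch (RandomShuffle).
    mode="single": shuffle once before training and reuse that order (SingleShuffle).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    order = rng.permutation(n)               # the one-time shuffle used by SingleShuffle
    for _ in range(epochs):
        if mode == "random":
            order = rng.permutation(n)       # fresh permutation each epoch
        for i in order:
            w = w - lr * grad_fn(w, i)       # one full pass over all components
    return w
```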
arXiv Detail & Related papers (2020-06-12T05:00:44Z) - Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDPs with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.