Linear Speedup in Personalized Collaborative Learning
- URL: http://arxiv.org/abs/2111.05968v1
- Date: Wed, 10 Nov 2021 22:12:52 GMT
- Title: Linear Speedup in Personalized Collaborative Learning
- Authors: El Mahdi Chayti, Sai Praneeth Karimireddy, Sebastian U. Stich, Nicolas
Flammarion, and Martin Jaggi
- Abstract summary: Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias against its variance.
We formalize the personalized collaborative learning problem as stochastic optimization of a user's objective, given access to the related objectives of other users.
We explore conditions under which we can optimally trade off their bias for a reduction in variance.
- Score: 69.45124829480106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalization in federated learning can improve the accuracy of a model for
a user by trading off the model's bias (introduced by using data from other
users who are potentially different) against its variance (due to the limited
amount of data on any single user). In order to develop training algorithms
that optimally balance this trade-off, it is necessary to extend our
theoretical foundations. In this work, we formalize the personalized
collaborative learning problem as stochastic optimization of a user's objective
$f_0(x)$ while given access to $N$ related but different objectives of other
users $\{f_1(x), \dots, f_N(x)\}$. We give convergence guarantees for two
algorithms in this setting -- a popular personalization method known as
\emph{weighted gradient averaging}, and a novel \emph{bias correction} method
-- and explore conditions under which we can optimally trade off their bias for
a reduction in variance and achieve linear speedup w.r.t.\ the number of users
$N$. Further, we empirically study their performance, confirming our
theoretical insights.
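Since the abstract specifies the update only at a high level, the following is a minimal sketch of one natural reading of weighted gradient averaging on toy quadratic objectives: the personal gradient of $f_0$ is mixed with the average gradient of the $N$ other users via a weight $\alpha$. The objectives, the weight $\alpha$, and the step size are illustrative assumptions, not the paper's exact algorithm, and the bias-correction method is not reproduced here.

```python
# Toy sketch of weighted gradient averaging for personalized SGD.
# Assumption: quadratic objectives f_i(x) = 0.5 * ||x - b_i||^2 with
# noisy gradients; this is NOT the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
d, N = 5, 20                   # dimension, number of other users
noise = 1.0                    # stochastic-gradient noise scale

b0 = np.zeros(d)               # optimum of the target user's objective f_0
bs = b0 + 0.3 * rng.standard_normal((N, d))   # related users' optima

def stoch_grad(x, b):
    """Noisy gradient of f(x) = 0.5 * ||x - b||^2."""
    return (x - b) + noise * rng.standard_normal(x.shape)

def train(alpha, steps=2000, lr=0.05):
    """SGD on f_0 using a weighted average of own and others' gradients."""
    x = rng.standard_normal(d)
    for _ in range(steps):
        g0 = stoch_grad(x, b0)
        g_others = np.mean([stoch_grad(x, b) for b in bs], axis=0)
        # alpha = 1: plain SGD on f_0 (unbiased, high variance);
        # alpha < 1: mixes in others' gradients (lower variance, some bias).
        x -= lr * (alpha * g0 + (1.0 - alpha) * g_others)
    return 0.5 * float(np.sum((x - b0) ** 2))   # suboptimality on f_0

for alpha in (1.0, 0.5, 0.2):
    print(f"alpha={alpha:.1f}  final f_0 gap ~ {train(alpha):.4f}")
```

With $\alpha = 1$ this reduces to plain SGD on $f_0$; decreasing $\alpha$ averages more gradients and cuts variance, at the cost of a bias that grows with how far the other users' optima sit from $b_0$, which mirrors the trade-off the abstract describes.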
Related papers
- Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning [12.742158403867002]
Reinforcement Learning from Human Feedback is a powerful paradigm for aligning foundation models to human values and preferences.
Current RLHF techniques cannot account for the naturally occurring differences in individual human preferences across a diverse population.
We develop a class of multimodal RLHF methods to address the need for pluralistic alignment.
arXiv Detail & Related papers (2024-08-19T15:18:30Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
$\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning-rate schedule converges and achieves linear speedup with respect to the number of clients (a toy sketch of per-client adaptive steps appears after this entry).
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
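As a companion to the FedLALR entry, here is a minimal toy sketch of the general idea of client-specific adaptive learning rates: each client runs local AMSGrad steps with its own second-moment state, so its effective per-coordinate step size adapts to its own data. The quadratic client objectives, the constants, and the plain FedAvg aggregation are assumptions; this is not the FedLALR algorithm or its schedule.

```python
# Toy sketch: per-client AMSGrad state gives each client its own
# effective learning rate. Illustrative only, not FedLALR itself.
import numpy as np

rng = np.random.default_rng(1)
d, n_clients, rounds, local_steps = 5, 4, 50, 10
beta1, beta2, lr, eps = 0.9, 0.99, 0.1, 1e-8

targets = rng.standard_normal((n_clients, d))   # non-IID client optima
x_global = np.zeros(d)

for _ in range(rounds):
    updates = []
    for b in targets:                            # each client, in parallel
        x = x_global.copy()
        m, v, v_hat = np.zeros(d), np.zeros(d), np.zeros(d)
        for _ in range(local_steps):
            g = (x - b) + 0.1 * rng.standard_normal(d)   # noisy local grad
            m = beta1 * m + (1 - beta1) * g
            v = beta2 * v + (1 - beta2) * g * g
            v_hat = np.maximum(v_hat, v)         # AMSGrad max-tracking
            # The per-coordinate step lr / sqrt(v_hat) is what makes the
            # learning rate client-specific in this sketch.
            x -= lr * m / (np.sqrt(v_hat) + eps)
        updates.append(x)
    x_global = np.mean(updates, axis=0)          # simple FedAvg aggregation

print("distance to mean optimum:", np.linalg.norm(x_global - targets.mean(0)))
```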
- Individual Fairness Guarantees for Neural Networks [0.0]
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).
We work with the $\epsilon$-$\delta$-IF formulation, which requires that the output difference between any pair of $\epsilon$-similar individuals is bounded by a maximum decision tolerance $\delta$.
We show how this formulation can be used to encourage model fairness at training time by modifying the NN loss, and empirically confirm that our approach yields NNs that are orders of magnitude fairer than state-of-the-art methods (a toy empirical check of the $\epsilon$-$\delta$ condition appears after this entry).
arXiv Detail & Related papers (2022-05-11T20:21:07Z)
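For intuition about the $\epsilon$-$\delta$-IF condition above, the following sketch samples points at L2 distance $\epsilon$ from an input and counts how often a toy model's output moves by more than $\delta$. Sampling gives only a heuristic check, not the certified guarantee the paper studies; the linear model and the L2 similarity metric are assumptions.

```python
# Heuristic empirical check of eps-delta individual fairness on a toy
# linear model. Certification (as in the paper) is NOT provided here.
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(10)                 # toy linear "network"

def model(x):
    return float(w @ x)

def empirical_if_violations(x, eps=0.1, delta=0.05, n_samples=1000):
    """Count sampled eps-neighbours whose output differs by more than delta."""
    violations = 0
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        x_near = x + eps * u / np.linalg.norm(u)   # point at L2 distance eps
        if abs(model(x_near) - model(x)) > delta:
            violations += 1
    return violations

x = rng.standard_normal(10)
print("violations out of 1000 samples:", empirical_if_violations(x))
```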
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that addresses both challenges (distributional robustness across clients and inter-client noise) simultaneously.
We provide a comprehensive theoretical analysis covering robustness, convergence, and generalization.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Tight and Robust Private Mean Estimation with Few Users [16.22135057266913]
We study high-dimensional mean estimation under user-level differential privacy.
We design an $(\varepsilon,\delta)$-differentially private mechanism using as few users as possible.
arXiv Detail & Related papers (2021-10-22T16:02:21Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) keeps the private data of mobile terminals (MTs) on-device while still training useful models from that data.
From an information-theoretic viewpoint, however, a curious server can still infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm that adds artificial noise to the shared models before they are uploaded to the server (a generic clip-and-noise sketch appears after this entry).
arXiv Detail & Related papers (2020-02-29T10:13:39Z)
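Here is a minimal sketch of the generic "noise before upload" pattern behind user-level DP in federated learning: clip each client's model update to a bounded L2 norm, then add Gaussian noise before it leaves the device. The clipping bound and noise scale are illustrative assumptions; calibrating the noise to a target $(\varepsilon,\delta)$ requires the analysis in the paper, which is not reproduced here.

```python
# Generic clip-and-noise sketch for user-level DP uploads.
# Assumption: illustrative clip bound and sigma, NOT a calibrated
# (epsilon, delta) guarantee.
import numpy as np

rng = np.random.default_rng(3)

def privatize_update(update, clip=1.0, sigma=0.5):
    """Clip the update to L2 norm `clip`, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / norm)
    return clipped + sigma * clip * rng.standard_normal(update.shape)

local_update = rng.standard_normal(5)   # model delta computed on-device
print("uploaded:", privatize_update(local_update))
```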