Exploiting Shared Representations for Personalized Federated Learning
- URL: http://arxiv.org/abs/2102.07078v3
- Date: Fri, 24 Mar 2023 22:14:19 GMT
- Title: Exploiting Shared Representations for Personalized Federated Learning
- Authors: Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
- Abstract summary: We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
- Score: 54.65133770989836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have shown the ability to extract universal feature
representations from data such as images and text that have been useful for a
variety of learning tasks. However, the fruits of representation learning have
yet to be fully realized in federated settings. Although data in federated
settings is often non-i.i.d. across clients, the success of centralized deep
learning suggests that data often shares a global feature representation, while
the statistical heterogeneity across clients or tasks is concentrated in the
labels. Based on this intuition, we propose a novel federated learning
framework and algorithm for learning a shared data representation across
clients and unique local heads for each client. Our algorithm harnesses the
distributed computational power across clients to perform many local updates
with respect to the low-dimensional local parameters for every update of the
representation. We prove that this method obtains linear convergence to the
ground-truth representation with near-optimal sample complexity in a linear
setting, demonstrating that it can efficiently reduce the problem dimension for
each client. This result is of interest beyond federated learning to a broad
class of problems in which we aim to learn a shared low-dimensional
representation among data distributions, for example in meta-learning and
multi-task learning. Further, extensive experimental results show the empirical
improvement of our method over alternative personalized federated learning
approaches in federated environments with heterogeneous data.
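As a concrete illustration of the alternating scheme described above, here is a minimal sketch of the linear setting: a shared representation B updated once per round and averaged at the server, and a low-dimensional head w_i per client refined by many cheap local steps. All names, dimensions, and the synthetic data are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hedged sketch: shared linear representation B (d x k) plus a small
# local head w_i (k,) per client, trained by alternating minimization.
d, k, n_clients, n, rounds = 50, 5, 10, 100, 50
lr_head, lr_rep, head_steps = 0.1, 0.05, 10
rng = np.random.default_rng(0)

B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]  # ground-truth representation
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(n, d))
    y = X @ B_star @ rng.normal(size=k)            # client-specific head
    clients.append((X, y))

B = rng.normal(size=(d, k)) / np.sqrt(d)           # shared representation
heads = [rng.normal(size=k) for _ in range(n_clients)]

for _ in range(rounds):
    B_next = []
    for i, (X, y) in enumerate(clients):
        w = heads[i]
        for _ in range(head_steps):                # many cheap head updates
            w -= lr_head * (B.T @ X.T @ (X @ B @ w - y)) / n
        grad_B = X.T @ np.outer(X @ B @ w - y, w) / n
        B_next.append(B - lr_rep * grad_B)         # one representation update
    B = np.mean(B_next, axis=0)                    # server averages B
```

Because each head w_i lives in only k << d dimensions, the inner loop is cheap for clients, which is the source of the per-client dimension reduction claimed above.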
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate schedule converges and achieves linear speedup with respect to the number of clients.
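As a rough illustration of the per-client adaptive step that FedLALR builds on, the following is a hedged sketch of one AMSGrad update with a client-specific base learning rate; the function name and surrounding loop are assumptions, and the paper's actual scheduling rule may differ.

```python
import numpy as np

def local_amsgrad_step(w, grad, m, v, v_hat, base_lr,
                       beta1=0.9, beta2=0.99, eps=1e-8):
    """One AMSGrad update with a per-client base_lr; the per-coordinate
    step size adapts to this client's own gradient history."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    v_hat = np.maximum(v_hat, v)              # AMSGrad max correction
    w = w - base_lr * m / (np.sqrt(v_hat) + eps)
    return w, m, v, v_hat
```

In a FedAvg-style loop, each client would run several such steps on its own data before the server averages the models; the linear-speedup claim in the summary concerns that averaged iterate.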
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel uniform data sampling strategy for federated learning.
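One way to read the "uniform over data" idea, in contrast to uniform-over-clients sampling, is to make each individual training sample roughly equally likely to participate in a round. The sketch below samples clients with probability proportional to local dataset size; this is a simplifying assumption and omits the privacy-preserving size estimation a deployed system would need.

```python
import numpy as np

def sample_clients_by_data(client_sizes, n_pick, rng):
    """Pick clients with probability proportional to local dataset size,
    so every sample is approximately equally likely to be represented
    in the round (contrast: uniform-over-clients sampling)."""
    sizes = np.asarray(client_sizes, dtype=float)
    return rng.choice(len(sizes), size=n_pick, replace=False,
                      p=sizes / sizes.sum())

rng = np.random.default_rng(0)
picked = sample_clients_by_data([1000, 50, 300, 10], n_pick=2, rng=rng)
```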
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Personalized Federated Learning with Feature Alignment and Classifier Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
One such approach in deep neural network-based tasks is to employ a shared feature representation and learn a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
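A plausible minimal form of such local-global alignment is a penalty pulling each client's features toward server-provided global class prototypes; the sketch below is an assumption about the mechanism, not the paper's exact loss.

```python
import numpy as np

def alignment_loss(local_feats, labels, global_protos):
    """Illustrative local-global alignment penalty: pull each local
    feature toward the global prototype of its class.
    local_feats: (n, d); labels: int (n,); global_protos: (n_classes, d)."""
    targets = global_protos[labels]           # prototype for each sample
    return np.mean(np.sum((local_feats - targets) ** 2, axis=1))
```

A client would then minimize its task loss plus a weighted copy of this penalty during local training.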
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in federated learning: statistical heterogeneity across clients and the presence of straggling devices.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- FedAvg with Fine Tuning: Local Updates Lead to Representation Learning [54.65133770989836]
The Federated Averaging (FedAvg) algorithm alternates between a few local gradient updates at client nodes and a model averaging update at the server.
We show that the reason behind the generalizability of FedAvg's output is its power in learning the common data representation among the clients' tasks.
We also provide empirical evidence demonstrating FedAvg's representation learning ability in federated image classification with heterogeneous data.
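For reference, the FedAvg structure analyzed here looks roughly like the sketch below; the toy quadratic losses in the usage lines are assumptions for illustration only.

```python
import numpy as np

def fedavg_round(global_w, client_grad_fns, client_weights, local_steps, lr):
    """One FedAvg round: each client takes a few local gradient steps from
    the current global model, then the server averages the results."""
    updated = []
    for grad_fn in client_grad_fns:
        w = global_w.copy()
        for _ in range(local_steps):          # a few local updates
            w -= lr * grad_fn(w)
        updated.append(w)
    weights = np.asarray(client_weights, dtype=float)
    weights /= weights.sum()                  # weighted model averaging
    return sum(wi * u for wi, u in zip(weights, updated))

# toy usage: two clients with quadratic losses ||w - c_i||^2
c = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
grad_fns = [lambda w, ci=ci: 2 * (w - ci) for ci in c]
w = fedavg_round(np.zeros(2), grad_fns, client_weights=[1, 1],
                 local_steps=5, lr=0.1)
```

The paper's point is that this local-update structure implicitly learns the common representation; personalization then comes from fine-tuning on each client.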
arXiv Detail & Related papers (2022-05-27T00:55:24Z)
- Distributed Unsupervised Visual Representation Learning with Fused Features [13.935997509072669]
Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
We propose a federated contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching.
It outperforms other methods by 11% on IID data and matches the performance of centralized learning.
arXiv Detail & Related papers (2021-11-21T08:36:31Z)
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
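One plausible reading of "local memorization" is interpolating a shared global model with a nearest-neighbor memory of the client's own labeled examples, as sketched below; the function and its parameters are illustrative assumptions rather than the paper's exact rule.

```python
import numpy as np

def knn_personalized_predict(feat, memory_feats, memory_labels,
                             global_probs, n_classes, k=5, lam=0.5):
    """Blend a shared model's class probabilities with a local k-nearest-
    neighbor 'memory' of this client's own examples.
    memory_feats: (m, d); memory_labels: int (m,); global_probs: (n_classes,)."""
    dists = np.linalg.norm(memory_feats - feat, axis=1)
    nearest = np.argsort(dists)[:k]           # this client's closest memories
    knn_probs = np.bincount(memory_labels[nearest], minlength=n_classes) / k
    return lam * knn_probs + (1.0 - lam) * global_probs
```

The interpolation weight lam could be tuned per client, one plausible route to the accuracy and fairness gains reported above.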
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
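The matching-plus-aggregation step described above can be pictured as a k-means-style round over client models, as in this hedged sketch (the nearest-center assignment and all names are assumptions):

```python
import numpy as np

def multicenter_aggregate(client_models, centers):
    """Illustrative multi-center round: assign each client model to its
    nearest center (global model), then recompute each center as the mean
    of its assigned clients."""
    M = np.stack(client_models)               # (n_clients, dim)
    C = np.stack(centers)                     # (n_centers, dim)
    assign = np.argmin(
        np.linalg.norm(M[:, None, :] - C[None, :, :], axis=2), axis=1)
    new_centers = [
        M[assign == j].mean(axis=0) if np.any(assign == j) else C[j]
        for j in range(len(centers))
    ]
    return assign, new_centers
```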
arXiv Detail & Related papers (2020-05-03T09:14:31Z)