Federated Self-supervised Learning for Heterogeneous Clients
- URL: http://arxiv.org/abs/2205.12493v2
- Date: Thu, 26 May 2022 05:03:24 GMT
- Title: Federated Self-supervised Learning for Heterogeneous Clients
- Authors: Disha Makhija, Nhat Ho, Joydeep Ghosh
- Abstract summary: We propose a unified and systematic framework, Heterogeneous Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised learning with federation on heterogeneous clients.
The proposed framework allows representation learning across all clients without imposing architectural constraints or requiring the presence of labeled data.
We empirically demonstrate that our proposed approach outperforms state-of-the-art methods by a significant margin.
- Score: 20.33482170846688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning has become an important learning paradigm due to
its privacy and computational benefits. As the field advances, two key
challenges that remain to be addressed are: (1) system heterogeneity, i.e.,
variability in the compute and/or data resources present on each client, and
(2) the lack of labeled data in certain federated settings. Several recent
developments have tried to overcome these challenges independently. In this
work, we propose a unified and systematic framework, Heterogeneous
Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised
learning with federation on heterogeneous clients. The proposed framework
allows collaborative representation learning across all clients without
imposing architectural constraints or requiring the presence of labeled data.
The key idea in Hetero-SSFL is to let each client train its own
self-supervised model and to enable joint learning across clients by aligning
the lower-dimensional representations on a common dataset. The entire training
procedure can be viewed as self- and peer-supervised, since neither the local
training nor the alignment procedure requires any labeled data. As in
conventional self-supervised learning, the obtained client models are
task-independent and can be used for varied end tasks. We provide a
convergence guarantee for the proposed framework with non-convex objectives in
heterogeneous settings, and we empirically demonstrate that our approach
outperforms state-of-the-art methods by a significant margin.
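To make the training procedure concrete, the following PyTorch-style sketch shows one client's local round under the idea described in the abstract: self-supervised training on private data plus an alignment penalty on a common public dataset. The function name, the MSE alignment term, and the align_weight coefficient are illustrative assumptions, not the paper's exact algorithm.

```python
import torch.nn.functional as F

def hetero_ssfl_local_round(model, optimizer, local_loader, public_x,
                            peer_reps, ssl_loss_fn, align_weight=1.0):
    """One client's round in a Hetero-SSFL-style setup (sketch).

    model        -- this client's own architecture; no constraint that it
                    match other clients' models, only that its output has
                    the agreed common representation dimension
    local_loader -- yields two augmented views of each private batch
    public_x     -- a batch from the shared unlabeled public dataset
    peer_reps    -- peers' representations of public_x, e.g. as collected
                    and averaged by the server (treated as constant here)
    ssl_loss_fn  -- any self-supervised objective (SimCLR, BYOL, ...)
    """
    model.train()
    for x_view1, x_view2 in local_loader:
        # (1) Self-supervision on the client's private, unlabeled data.
        loss = ssl_loss_fn(model(x_view1), model(x_view2))

        # (2) Peer supervision: pull this client's representations of the
        #     common public data toward the peers' representations.
        my_reps = model(public_x)
        loss = loss + align_weight * F.mse_loss(my_reps, peer_reps.detach())

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because clients only ever exchange representations of the public samples, each client can use a completely different architecture, which is what removes the usual architectural constraints.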
Related papers
- Task-Agnostic Federated Learning [4.041327615026293]
This study addresses the task-agnostic and generalization problem on unseen tasks by adapting a self-supervised FL framework.
Utilizing a Vision Transformer (ViT) as a consensus feature encoder for self-supervised pre-training, with no initial labels required, the framework enables effective representation learning across diverse datasets and tasks.
arXiv Detail & Related papers (2024-06-25T02:53:37Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two such hurdles: data heterogeneity across clients and straggling clients.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and to learn a user-specific set of parameters, leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- A Closer Look at Personalization in Federated Image Classification [33.27317065917578]
Federated Learning (FL) was developed to learn a single global model across decentralized data.
This paper shows that it is possible to achieve flexible personalization after the convergence of the global model.
We propose RepPer, an independent two-stage personalized FL framework.
arXiv Detail & Related papers (2022-04-22T06:32:18Z)
- PerFED-GAN: Personalized Federated Learning via Generative Adversarial Networks [46.17495529441229]
Federated learning is a distributed machine learning method that can be used to deploy AI-dependent IoT applications.
This paper proposes a federated learning method based on co-training and generative adversarial networks (GANs).
In our experiments, the proposed method outperforms existing methods by 42% in mean test accuracy when clients' model architectures and data distributions vary significantly.
arXiv Detail & Related papers (2022-02-18T12:08:46Z)
- Factorized-FL: Agnostic Personalized Federated Learning with Kernel Factorization & Similarity Matching [70.95184015412798]
In real-world federated learning scenarios, participants may have their own personalized labels that are incompatible with those of other clients.
We introduce Factorized-FL, which effectively tackles label- and task-heterogeneous federated learning settings.
We extensively validate our method on both label- and domain-heterogeneous settings, on which it outperforms the state-of-the-art personalized federated learning methods.
arXiv Detail & Related papers (2022-02-01T08:00:59Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most notable learning strategies able to address it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and can easily be adapted to the federated learning setting.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Federated Self-Supervised Contrastive Learning via Ensemble Similarity Distillation [42.05438626702343]
This paper investigates the feasibility of learning a good representation space with unlabeled client data in a federated scenario.
We propose a novel self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation.
arXiv Detail & Related papers (2021-09-29T02:13:22Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates the representations of samples from each client and shares them with all clients to keep the representation spaces consistent, and an alignment module, which aligns each client's representations with those of a base model trained on public data (see the sketch after this list).
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
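For comparison with Hetero-SSFL's peer alignment, the FedCA design described above can be sketched in the same style. The module names, the concatenation-based pooling, and the MSE penalty are assumptions made for illustration; the paper's exact aggregation and alignment rules may differ.

```python
import torch
import torch.nn.functional as F

def dictionary_module(client_reps):
    """Dictionary module (sketch): pool sample representations contributed
    by every client into one shared dictionary that the server broadcasts
    back, keeping the clients' representation spaces consistent."""
    return torch.cat(client_reps, dim=0)

def alignment_loss(client_model, base_model, public_x):
    """Alignment module (sketch): penalize the distance between a client's
    representations of public data and those of a base model trained on
    that public data."""
    with torch.no_grad():
        target = base_model(public_x)  # fixed reference representations
    return F.mse_loss(client_model(public_x), target)
```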
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.