Federated Unsupervised Representation Learning
- URL: http://arxiv.org/abs/2010.08982v1
- Date: Sun, 18 Oct 2020 13:28:30 GMT
- Title: Federated Unsupervised Representation Learning
- Authors: Fengda Zhang, Kun Kuang, Zhaoyang You, Tao Shen, Jun Xiao, Yin Zhang,
Chao Wu, Yueting Zhuang, Xiaolin Li
- Abstract summary: We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module that aggregates the representations of samples from each client and shares them with all clients for consistency of the representation space, and an alignment module that aligns each client's representations with those of a base model trained on public data.
- Score: 56.715917111878106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To leverage enormous unlabeled data on distributed edge devices, we formulate
a new problem in federated learning called Federated Unsupervised
Representation Learning (FURL) to learn a common representation model without
supervision while preserving data privacy. FURL poses two new challenges: (1)
data distribution shift (Non-IID distribution) among clients would make local
models focus on different categories, leading to the inconsistency of
representation spaces. (2) without the unified information among clients in
FURL, the representations across clients would be misaligned. To address these
challenges, we propose the Federated Contrastive Averaging with dictionary and
alignment (FedCA) algorithm. FedCA is composed of two key modules: (1)
dictionary module to aggregate the representations of samples from each client
and share with all clients for consistency of representation space and (2)
alignment module to align the representation of each client on a base model
trained on public data. We adopt a contrastive loss for local model
training. Through extensive experiments with three evaluation protocols in IID
and Non-IID settings, we demonstrate that FedCA outperforms all baselines with
significant margins.
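To make the two modules concrete, here is a minimal PyTorch-style sketch of a FedCA-style local objective as the abstract describes it: a contrastive term whose negatives come from the cross-client dictionary, plus an alignment term against a base model trained on public data. All names and hyperparameters (`fedca_local_loss`, `shared_dictionary`, `align_weight`, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fedca_local_loss(encoder, base_encoder, x_view1, x_view2, x_public,
                     shared_dictionary, temperature=0.5, align_weight=1.0):
    """Sketch of a FedCA-style local objective (names are illustrative).

    encoder           -- the client's representation model being trained
    base_encoder      -- frozen model trained on public data (alignment target)
    x_view1, x_view2  -- two augmented views of a local batch (positive pairs)
    x_public          -- a batch from the shared public dataset
    shared_dictionary -- (K, d) representations aggregated across clients,
                         used here as negatives to keep spaces consistent
    """
    z1 = F.normalize(encoder(x_view1), dim=1)  # (B, d)
    z2 = F.normalize(encoder(x_view2), dim=1)  # (B, d)

    # Contrastive term: each sample's positive is its other view; negatives
    # are drawn from the dictionary shared by all clients.
    pos = (z1 * z2).sum(dim=1, keepdim=True) / temperature  # (B, 1)
    neg = z1 @ shared_dictionary.t() / temperature          # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(len(z1), dtype=torch.long, device=z1.device)
    contrastive = F.cross_entropy(logits, labels)           # positive at index 0

    # Alignment term: match the client's representations of public data to a
    # base model trained on that data, aligning all clients to one space.
    with torch.no_grad():
        target = F.normalize(base_encoder(x_public), dim=1)
    align = F.mse_loss(F.normalize(encoder(x_public), dim=1), target)

    return contrastive + align_weight * align
```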
Related papers
- Rethinking the Representation in Federated Unsupervised Learning with Non-IID Data [20.432294152991954]
Federated learning achieves effective performance in modeling decentralized data.
In practice, client data are often poorly labeled, which motivates federated unsupervised learning (FUSL) with non-IID data.
We propose FedU2, which enhances generating uniform and unified representation in FUSL with non-IID data.
arXiv Detail & Related papers (2024-03-25T03:26:01Z)
- FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features [15.555538379806135]
Federated learning (FL) is a distributed learning paradigm that maximizes the potential of data-driven models for edge devices without sharing their raw data.
We propose FedCiR, a client-invariant representation learning framework that enables clients to extract informative and client-invariant features.
arXiv Detail & Related papers (2023-08-30T06:36:32Z)
- HyperFed: Hyperbolic Prototypes Exploration with Consistent Aggregation for Non-IID Data in Federated Learning [14.503047600805436]
Federated learning (FL) collaboratively models user data in a decentralized way.
In the real world, non-identically and independently distributed (non-IID) data among clients hinders the performance of FL due to three issues: (1) class statistics shifting, (2) insufficient hierarchical information utilization, and (3) inconsistency in aggregating clients.
arXiv Detail & Related papers (2023-07-26T02:43:38Z)
- FACT: Federated Adversarial Cross Training [0.0]
Federated Adversarial Cross Training (FACT) uses implicit domain differences between source clients to identify domain shifts in the target domain.
We empirically show that FACT outperforms state-of-the-art federated, non-federated and source-free domain adaptation models.
arXiv Detail & Related papers (2023-06-01T12:25:43Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
This paper proposes a prototype-based federated learning framework that achieves better inference performance with only a few changes to the last global iteration of the typical federated learning process.
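As a rough illustration of the prototype idea (not this paper's exact procedure), class prototypes can be computed locally as per-class mean embeddings and aggregated by the server without exchanging raw data. All function names below are hypothetical.

```python
import torch

def local_prototypes(features, labels, num_classes):
    """Per-class mean embeddings from a client's features (N, d) and labels (N,)."""
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = (features[mask].mean(dim=0), int(mask.sum()))
    return protos

def aggregate_prototypes(all_client_protos):
    """Server side: count-weighted average of client prototypes, per class."""
    sums = {}
    for protos in all_client_protos:
        for c, (p, n) in protos.items():
            s, total = sums.get(c, (torch.zeros_like(p), 0))
            sums[c] = (s + p * n, total + n)
    return {c: s / n for c, (s, n) in sums.items()}
```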
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- Federated Semi-Supervised Learning with Annotation Heterogeneity [57.12560313403097]
We propose a novel framework called Heterogeneously Annotated Semi-Supervised LEarning (HASSLE).
It is a dual-model framework with two models trained separately on labeled and unlabeled data.
The dual models can implicitly learn from both types of data across different clients, although each dual model is only trained locally on a single type of data.
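A heavily simplified sketch of the dual-model setup described above: one model sees only labeled data, the other only unlabeled data, and cross-client knowledge flows through the usual server-side averaging of each model. The specific losses below are placeholder choices, not HASSLE's actual objectives.

```python
import torch
import torch.nn.functional as F

def dual_model_local_step(sup_model, unsup_model, labeled_batch,
                          unlabeled_views, sup_opt, unsup_opt):
    """One local step: each model trains only on its own data type."""
    # Supervised model: cross-entropy on the client's labeled data.
    x, y = labeled_batch
    sup_opt.zero_grad()
    F.cross_entropy(sup_model(x), y).backward()
    sup_opt.step()

    # Unsupervised model: a simple consistency loss between two augmented
    # views stands in for the paper's unsupervised objective.
    v1, v2 = unlabeled_views
    unsup_opt.zero_grad()
    F.mse_loss(unsup_model(v1), unsup_model(v2).detach()).backward()
    unsup_opt.step()
```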
arXiv Detail & Related papers (2023-03-04T16:04:49Z)
- Domain Discrepancy Aware Distillation for Model Aggregation in Federated Learning [47.87639746826555]
We describe two challenges that domain discrepancies bring to the aggregated model: server-to-client discrepancy and client-to-client discrepancy.
We propose an adaptive knowledge aggregation algorithm FedD3A based on domain discrepancy aware distillation to lower the bound.
arXiv Detail & Related papers (2022-10-04T04:08:16Z)
- FedAvg with Fine Tuning: Local Updates Lead to Representation Learning [54.65133770989836]
The Federated Averaging (FedAvg) algorithm alternates between a few local gradient updates at client nodes and a model averaging update at the server.
We show that the generalizability of FedAvg's output stems from its power to learn the common data representation among the clients' tasks.
We also provide empirical evidence demonstrating FedAvg's representation learning ability in federated image classification with heterogeneous data.
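Since the summary spells out the FedAvg loop, here is a minimal sketch of one communication round under the usual simplifying assumptions (synchronous clients, an unweighted parameter average); helper names are ours.

```python
import copy
from itertools import cycle, islice
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, local_steps=5, lr=0.01):
    """One FedAvg round: a few local SGD steps per client, then averaging."""
    states = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)   # start from the global weights
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for x, y in islice(cycle(loader), local_steps):
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        states.append(model.state_dict())

    # Server: average the clients' parameters (unweighted, for simplicity;
    # the standard variant weights by local dataset size).
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```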
arXiv Detail & Related papers (2022-05-27T00:55:24Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
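Illustratively, the alternation described above might look like the following on a client: many cheap updates to the small personalized head, then an update to the shared representation that the server later averages. This is a sketch in the spirit of the summary; names and step counts are assumptions.

```python
import torch
import torch.nn.functional as F

def local_update(backbone, head, loader, head_steps=10, rep_steps=1, lr=0.01):
    """Client update: fit the low-dimensional local head first, then the
    shared representation (only the backbone is sent back for averaging)."""
    head_opt = torch.optim.SGD(head.parameters(), lr=lr)
    rep_opt = torch.optim.SGD(backbone.parameters(), lr=lr)

    # Phase 1: many local updates of the personalized head, backbone frozen.
    for _ in range(head_steps):
        for x, y in loader:
            head_opt.zero_grad()
            with torch.no_grad():
                z = backbone(x)               # frozen features
            F.cross_entropy(head(z), y).backward()
            head_opt.step()

    # Phase 2: update the shared representation; only the backbone is stepped.
    for _ in range(rep_steps):
        for x, y in loader:
            rep_opt.zero_grad()
            F.cross_entropy(head(backbone(x)), y).backward()
            rep_opt.step()
```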
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method, Federated Matching (FedMatch), to tackle these problems.
arXiv Detail & Related papers (2020-06-22T09:43:41Z)