Navigating Alignment for Non-identical Client Class Sets: A Label
Name-Anchored Federated Learning Framework
- URL: http://arxiv.org/abs/2301.00489v2
- Date: Tue, 6 Jun 2023 05:30:05 GMT
- Title: Navigating Alignment for Non-identical Client Class Sets: A Label
Name-Anchored Federated Learning Framework
- Authors: Jiayun Zhang, Xiyuan Zhang, Xinyang Zhang, Dezhi Hong, Rajesh K.
Gupta, Jingbo Shang
- Abstract summary: FedAlign is a novel framework to align latent spaces across clients from both label and data perspectives.
From a label perspective, we leverage the expressive natural language class names as a common ground for label encoders to anchor class representations.
From a data perspective, we regard the global class representations as anchors and leverage the data points that are close/far enough to the anchors of locally-unaware classes to align the data encoders across clients.
- Score: 26.902679793955972
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional federated classification methods, even those designed for non-IID
clients, assume that each client annotates its local data with respect to the
same universal class set. In this paper, we focus on a more general yet
practical setting, non-identical client class sets, where clients focus on
their own (different or even non-overlapping) class sets and seek a global
model that works for the union of these classes. If one views classification as
finding the best match between the representations produced by the data and
label encoders, such heterogeneity in client class sets poses a significant new
challenge --
local encoders at different clients may operate in different and even
independent latent spaces, making it hard to aggregate at the server. We
propose a novel framework, FedAlign, to align the latent spaces across clients
from both label and data perspectives. From a label perspective, we leverage
the expressive natural language class names as a common ground for label
encoders to anchor class representations and guide the data encoder learning
across clients. From a data perspective, during local training, we regard the
global class representations as anchors and leverage the data points that are
close/far enough to the anchors of locally-unaware classes to align the data
encoders across clients. Our theoretical analysis of the generalization
performance and extensive experiments on four real-world datasets of different
tasks confirm that FedAlign outperforms various state-of-the-art (non-IID)
federated classification methods.
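The matching view of classification described in the abstract can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the two encoders are stand-in random projections, and the dimensions, `class_names`, and `predict` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-ins for the label and data encoders. In the paper these are learned
# networks; here they are fixed random projections, purely for illustration.
D_TEXT, D_DATA, D_LATENT = 16, 32, 8
W_label = rng.normal(size=(D_TEXT, D_LATENT))
W_data = rng.normal(size=(D_DATA, D_LATENT))

def encode_label(name_feats):
    return normalize(name_feats @ W_label)

def encode_data(x):
    return normalize(x @ W_data)

# Each client may only know a subset of the universal class set; encoding the
# shared natural-language class names places all classes in one latent space.
class_names = ["cat", "dog", "car"]
name_feats = rng.normal(size=(len(class_names), D_TEXT))  # e.g. text features
anchors = encode_label(name_feats)            # (num_classes, D_LATENT)

def predict(x):
    z = encode_data(x)                        # (D_LATENT,)
    sims = anchors @ z                        # similarity to each class anchor
    return class_names[int(np.argmax(sims))]

x = rng.normal(size=(D_DATA,))
print(predict(x))  # prints one of the class names
```

Classification is then "find the class whose anchor is most similar to the data embedding," which is the matching formulation the abstract builds on.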
Related papers
- FUNAvg: Federated Uncertainty Weighted Averaging for Datasets with Diverse Labels [37.20677220716839]
We propose to learn a joint backbone in a federated manner.
We observe that the different segmentation heads, although trained only on the individual client's labels, also learn information about the other labels not present at the respective site.
With our method, which we refer to as FUNAvg, we are on average on par with models trained and tested on the same dataset.
arXiv Detail & Related papers (2024-07-10T09:23:55Z) - Federated Learning with Only Positive Labels by Exploring Label Correlations [78.59613150221597]
Federated learning aims to collaboratively learn a model by using the data from multiple users under privacy constraints.
In this paper, we study the multi-label classification problem under the federated learning setting.
We propose a novel and generic method termed Federated Averaging by exploring Label Correlations (FedALC).
arXiv Detail & Related papers (2024-04-24T02:22:50Z) - Federated Deep Multi-View Clustering with Global Self-Supervision [51.639891178519136]
Federated multi-view clustering has the potential to learn a global clustering model from data distributed across multiple devices.
In this setting, label information is unknown and data privacy must be preserved.
We propose a novel federated deep multi-view clustering method that can mine complementary cluster structures from multiple clients.
arXiv Detail & Related papers (2023-09-24T17:07:01Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class-prototype similarity distillation in a federated framework to align the local and global models.
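The logit-alignment idea behind this entry can be illustrated with a generic distillation loss. This is a hedged sketch: the temperature, the KL form, and the omission of class prototypes are assumptions here, not the paper's exact objective.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(local_logits, global_logits, T=2.0):
    """KL(global || local): penalizes the local model's logits drifting
    away from the global model's logits during local updates."""
    p = softmax(global_logits, T)
    q = softmax(local_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```

Adding such a term to the local training objective discourages exactly the local-global logit divergence the summary describes.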
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z) - Personalized Federated Learning with Feature Alignment and Classifier
Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
A common approach in deep neural network-based tasks is to employ a shared feature representation and learn a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
arXiv Detail & Related papers (2023-06-20T19:58:58Z) - Dual Class-Aware Contrastive Federated Semi-Supervised Learning [9.742389743497045]
We present a novel Federated Semi-Supervised Learning (FSSL) method called Dual Class-aware Contrastive Federated Semi-Supervised Learning (DCCFSSL).
By implementing a dual class-aware contrastive module, DCCFSSL establishes a unified training objective for different clients to tackle large deviations.
Our experiments show that DCCFSSL outperforms current state-of-the-art methods on three benchmark datasets.
arXiv Detail & Related papers (2022-11-16T13:54:31Z) - Cross-domain Federated Object Detection [43.66352018668227]
Federated learning can enable multi-party collaborative learning without leaking client data.
We propose a cross-domain federated object detection framework, named FedOD.
arXiv Detail & Related papers (2022-06-30T03:09:59Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates sample representations from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with a base model trained on public data.
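The dictionary idea in this entry can be sketched roughly as follows. The shapes, the truncation-based selection, and the `consistency` score are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def build_dictionary(client_reps, max_size=1024):
    """Server side: pool sample representations from all clients into a
    shared dictionary that every client can see."""
    pooled = np.concatenate(client_reps, axis=0)
    return pooled[:max_size]  # simple truncation stands in for real selection

def consistency(local_reps, dictionary):
    """Client side: mean cosine similarity of each local representation to
    its nearest dictionary entry -- a crude consistency signal."""
    ln = local_reps / np.linalg.norm(local_reps, axis=1, keepdims=True)
    dn = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sims = ln @ dn.T
    return float(sims.max(axis=1).mean())
```

A client whose representations already appear in the shared dictionary scores near 1.0; lower scores indicate a drifting local representation space.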
arXiv Detail & Related papers (2020-10-18T13:28:30Z) - Federated Semi-Supervised Learning with Inter-Client Consistency &
Disjoint Learning [78.88007892742438]
We study two essential scenarios of Federated Semi-Supervised Learning (FSSL) based on the location of the labeled data.
We propose a novel method, Federated Matching (FedMatch), to tackle these problems.
arXiv Detail & Related papers (2020-06-22T09:43:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.