Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning
- URL: http://arxiv.org/abs/2309.16286v1
- Date: Thu, 28 Sep 2023 09:32:27 GMT
- Title: Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning
- Authors: Wenke Huang, Mang Ye, Zekun Shi, Bo Du
- Abstract summary: This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
- Score: 60.058083574671834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is an important privacy-preserving multi-party learning
paradigm, involving collaborative learning with others and local updating on
private data. Model heterogeneity and catastrophic forgetting are two crucial
challenges, which greatly limit applicability and generalizability. This
paper presents FCCL+, a novel federated correlation and similarity learning
method with non-target distillation, facilitating both intra-domain
discriminability and inter-domain generalization. For the heterogeneity issue,
we leverage irrelevant unlabeled public data for communication between the
heterogeneous participants. We construct a cross-correlation matrix and align
instance similarity distributions at both the logit and feature levels, which
effectively overcomes the communication barrier and improves generalization
ability. For catastrophic forgetting in the local updating stage, FCCL+
introduces Federated Non-Target Distillation, which retains inter-domain
knowledge while avoiding the optimization conflict issue, fully distilling
privileged inter-domain information by depicting posterior class relations.
Considering that there is no standard benchmark for evaluating existing
heterogeneous federated learning under the same setting, we present a
comprehensive benchmark with extensive representative methods under four domain
shift scenarios, supporting both heterogeneous and homogeneous federated
settings. Empirical results demonstrate the superiority of our method and the
effectiveness of its modules across various scenarios.
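The two mechanisms described above can be sketched in code. The first function is a Barlow-Twins-style cross-correlation alignment between two participants' outputs on the same public batch (diagonal pushed toward 1, off-diagonal toward 0); the second masks out the ground-truth class before softmax so the distillation term only matches non-target class probabilities and cannot conflict with the local cross-entropy objective on the target class. This is a minimal illustrative sketch, not the paper's implementation; function names, the `lam` weight, and the temperature `T` are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_correlation_loss(z_a, z_b, lam=0.005):
    """Align two participants' representations of the same public batch.

    Builds a d x d cross-correlation matrix between batch-normalized
    outputs; diagonal terms are pushed toward 1 (per-dimension agreement),
    off-diagonal terms toward 0 (decorrelation), Barlow-Twins style.
    """
    n, _ = z_a.shape
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = z_a.T @ z_b / n                      # cross-correlation matrix
    on_diag = ((np.diag(c) - 1) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

def non_target_distillation(student_logits, teacher_logits, labels, T=2.0):
    """KL distillation restricted to non-target classes.

    The ground-truth class logit is masked out in both student and
    teacher, so only the posterior relations among the remaining classes
    are distilled.
    """
    mask = np.zeros_like(student_logits, dtype=bool)
    mask[np.arange(len(labels)), labels] = True
    s = np.where(mask, -1e9, student_logits)  # suppress target class
    t = np.where(mask, -1e9, teacher_logits)
    p_s = softmax(s / T)
    p_t = softmax(t / T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(1).mean()
    return kl * T * T                         # standard T^2 scaling
```

In a full pipeline, each client would combine a local supervised loss with `cross_correlation_loss` on the shared public batch (for inter-client communication) and `non_target_distillation` against its previous-round model (against forgetting).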
Related papers
- A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning [14.466679488063217]
One-shot federated learning (FL) limits the communication between the server and clients to a single round.
We propose a unified, data-free, one-shot FL framework (FedHydra) that can effectively address both model and data heterogeneity.
arXiv Detail & Related papers (2024-10-28T15:20:52Z)
- Learning Fair Invariant Representations under Covariate and Correlation Shifts Simultaneously [10.450977234741524]
We introduce a novel approach that focuses on learning a fairness-aware domain-invariant predictor.
Our approach surpasses state-of-the-art methods with respect to model accuracy as well as both group and individual fairness.
arXiv Detail & Related papers (2024-08-18T00:01:04Z)
- Addressing Skewed Heterogeneity via Federated Prototype Rectification with Personalization [35.48757125452761]
Federated learning is an efficient framework designed to facilitate collaborative model training across multiple distributed devices.
A significant challenge of federated learning is data-level heterogeneity, i.e., skewed or long-tailed distribution of private data.
We propose a novel Federated Prototype Rectification with Personalization which consists of two parts: Federated Personalization and Federated Prototype Rectification.
arXiv Detail & Related papers (2024-08-15T06:26:46Z)
- FedSIS: Federated Split Learning with Intermediate Representation Sampling for Privacy-preserving Generalized Face Presentation Attack Detection [4.1897081000881045]
Lack of generalization to unseen domains/attacks is the Achilles heel of most face presentation attack detection (FacePAD) algorithms.
In this work, a novel framework called Federated Split learning with Intermediate representation Sampling (FedSIS) is introduced for privacy-preserving domain generalization.
arXiv Detail & Related papers (2023-08-20T11:49:12Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combination of adversarial training and federated learning can lead to the undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT)
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA)
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework and can be easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT)
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- Heterogeneous Target Speech Separation [52.05046029743995]
We introduce a new paradigm for single-channel target source separation where the sources of interest can be distinguished using non-mutually exclusive concepts.
Our proposed heterogeneous separation framework can seamlessly leverage datasets with large distribution shifts.
arXiv Detail & Related papers (2022-04-07T17:14:20Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to both the model architecture and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.