Feature Correlation-guided Knowledge Transfer for Federated
Self-supervised Learning
- URL: http://arxiv.org/abs/2211.07364v1
- Date: Mon, 14 Nov 2022 13:59:50 GMT
- Title: Feature Correlation-guided Knowledge Transfer for Federated
Self-supervised Learning
- Authors: Yi Liu, Song Guo, Jie Zhang, Qihua Zhou, Yingchun Wang and Xiaohan
Zhao
- Abstract summary: We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework that is readily compatible with state-of-the-art unsupervised FL methods.
- Score: 19.505644178449046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To eliminate the requirement of fully-labeled data for supervised model
training in traditional Federated Learning (FL), extensive attention has been
paid to applying Self-supervised Learning (SSL) approaches in FL to
tackle the label-scarcity problem. Previous works on Federated SSL generally
fall into two categories: parameter-based model aggregation (i.e., FedAvg,
applicable to homogeneous cases) or data-based feature sharing (i.e., knowledge
distillation, applicable to heterogeneous cases) to achieve knowledge transfer
among multiple unlabeled clients. Despite this progress, all of these methods
inevitably rely on assumptions, such as homogeneous models or the existence of
an additional public dataset, that limit the applicability of the training
frameworks to more general scenarios. Therefore, in this paper, we propose a
novel and general method named Federated Self-supervised Learning with
Feature-correlation based Aggregation (FedFoA) to tackle the above limitations
in a communication-efficient and privacy-preserving manner. Our insight is to
utilize feature correlation to align the feature mappings and calibrate the
local model updates across clients during their local training process. More
specifically, we design a factorization-based method to extract the
cross-feature relation matrix from the local representations. The relation
matrix then serves as a carrier of semantic information during the aggregation
phase. We prove that FedFoA is a model-agnostic training framework that is
readily compatible with state-of-the-art unsupervised FL methods. Extensive
experiments demonstrate that our proposed approach
outperforms the state-of-the-art methods by a significant margin.
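
The abstract leaves the exact factorization unspecified, so the following is only a minimal sketch, assuming a QR factorization of a standardized batch of representations and a Frobenius-norm alignment penalty; relation_matrix, alignment_loss, and the averaging step are hypothetical names for illustration, not the authors' implementation.

```python
import torch

def relation_matrix(z: torch.Tensor) -> torch.Tensor:
    """Extract a (d, d) cross-feature relation matrix from local
    representations z of shape (batch, d); assumes batch >= d."""
    # Standardize each feature so the factor reflects cross-feature
    # structure rather than raw scale.
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + 1e-6)
    # Factorize z = Q @ R; the triangular factor R does not grow with
    # the batch, so it can be exchanged instead of raw features or
    # model weights.
    _, r = torch.linalg.qr(z, mode="reduced")
    return r

def alignment_loss(z_local: torch.Tensor, r_global: torch.Tensor) -> torch.Tensor:
    """Calibrate local SSL training by pulling the local relation
    matrix toward the aggregated global one."""
    r_local = relation_matrix(z_local)
    return torch.linalg.matrix_norm(r_local - r_global).pow(2) / r_local.numel()
```

Under this reading, the server-side aggregation could be as simple as r_global = torch.stack(client_matrices).mean(dim=0); only d x d matrices are exchanged, independent of each client's architecture, which is consistent with the abstract's model-agnostic and communication-efficient claims.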
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fed-CO2: Cooperation of Online and Offline Models for Severe Data Heterogeneity in Federated Learning [14.914477928398133]
Federated Learning (FL) has emerged as a promising distributed learning paradigm.
The effectiveness of FL is highly dependent on the quality of the data used for training.
We propose Fed-CO$_2$, a universal FL framework that handles both label distribution skew and feature skew.
arXiv Detail & Related papers (2023-12-21T15:12:12Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
For the model heterogeneity issue, it leverages irrelevant unlabeled public data for communication (see the cross-correlation sketch after this list).
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- FSAR: Federated Skeleton-based Action Recognition with Adaptive Topology Structure and Knowledge Distillation [23.0771949978506]
Existing skeleton-based action recognition methods typically follow a centralized learning paradigm, which can pose privacy concerns when exposing human-related videos.
We introduce a novel Federated Skeleton-based Action Recognition (FSAR) paradigm, which enables the construction of a globally generalized model without accessing local sensitive data.
arXiv Detail & Related papers (2023-06-19T16:18:14Z)
- FedGen: Generalizable Federated Learning for Sequential Data [8.784435748969806]
In many real-world distributed settings, spurious correlations exist due to biases and data sampling issues.
We present a generalizable federated learning framework called FedGen, which allows clients to identify and distinguish between spurious and invariant features.
We show that FedGen results in models that achieve significantly better generalization and can exceed the accuracy of current federated learning approaches by over 24%.
arXiv Detail & Related papers (2022-11-03T15:48:14Z)
- Exploring Semantic Attributes from A Foundation Model for Federated Learning of Disjoint Label Spaces [46.59992662412557]
In this work, we consider transferring mid-level semantic knowledge (such as attributes) which is not sensitive to specific objects of interest.
We formulate a new Federated Zero-Shot Learning (FZSL) paradigm to learn mid-level semantic knowledge at multiple local clients.
To improve model discriminative ability, we propose to explore semantic knowledge augmentation from external knowledge.
arXiv Detail & Related papers (2022-08-29T10:05:49Z)
- CDKT-FL: Cross-Device Knowledge Transfer using Proxy Dataset in Federated Learning [27.84845136697669]
We develop a novel knowledge distillation-based approach to study the extent of knowledge transfer between the global model and local models (a generic distillation sketch follows this list).
We show that the proposed method achieves significant speedups and strong personalized performance for local models.
arXiv Detail & Related papers (2022-04-04T14:49:19Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We present the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model-training mechanism for building a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
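
For the FCCL+ entry above: one common way to realize federated correlation learning over unlabeled public data is a Barlow Twins-style cross-correlation objective between two models' projections of the same public batch. The sketch below is an illustration under that assumption, not FCCL+'s exact loss, and it presumes both (possibly heterogeneous) models project to a common dimension d.

```python
import torch

def cross_correlation(za: torch.Tensor, zb: torch.Tensor) -> torch.Tensor:
    """(d, d) cross-correlation between representations za, zb of
    shape (batch, d) computed by two models on the same public batch."""
    za = (za - za.mean(dim=0)) / (za.std(dim=0) + 1e-6)
    zb = (zb - zb.mean(dim=0)) / (zb.std(dim=0) + 1e-6)
    return (za.T @ zb) / za.shape[0]

def correlation_alignment_loss(za: torch.Tensor, zb: torch.Tensor,
                               off_diag_weight: float = 5e-3) -> torch.Tensor:
    """Push the cross-correlation toward the identity: matched feature
    dimensions should agree (diagonal -> 1) while distinct dimensions
    decorrelate (off-diagonal -> 0)."""
    c = cross_correlation(za, zb)
    diag = torch.diagonal(c)
    on_diag = (diag - 1.0).pow(2).sum()
    off_diag = (c - torch.diag_embed(diag)).pow(2).sum()
    return on_diag + off_diag_weight * off_diag
```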
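
The CDKT-FL and FedFTG entries both rest on knowledge distillation rather than weight averaging. Below is a generic sketch of distillation on a proxy/public batch; the temperature, the distillation direction, and the commented usage are assumptions for illustration (FedFTG additionally synthesizes the proxy data with a generator, which is omitted here).

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Match softened class distributions with KL divergence, scaled
    by T^2 as in standard knowledge distillation."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

# Typical use on an unlabeled proxy batch x_proxy (hypothetical names):
#   with torch.no_grad():
#       teacher_logits = global_model(x_proxy)  # aggregated model as teacher
#   loss = distillation_loss(local_model(x_proxy), teacher_logits)
```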