Federated Self-Supervised Contrastive Learning via Ensemble Similarity
Distillation
- URL: http://arxiv.org/abs/2109.14611v1
- Date: Wed, 29 Sep 2021 02:13:22 GMT
- Title: Federated Self-Supervised Contrastive Learning via Ensemble Similarity
Distillation
- Authors: Haizhou Shi, Youcai Zhang, Zijin Shen, Siliang Tang, Yaqian Li,
Yandong Guo, Yueting Zhuang
- Abstract summary: This paper investigates the feasibility of learning good representation space with unlabeled client data in a federated scenario.
We propose a novel self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation.
- Score: 42.05438626702343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the feasibility of learning good representation space
with unlabeled client data in the federated scenario. Existing works trivially
inherit supervised federated learning methods, which cannot handle model
heterogeneity and carry a potential risk of privacy exposure. To tackle
the problems above, we first identify that self-supervised contrastive local
training is more robust against the non-i.i.d.-ness than the traditional
supervised learning paradigm. Then we propose a novel federated self-supervised
contrastive learning framework FLESD that supports architecture-agnostic local
training and communication-efficient global aggregation. At each round of
communication, the server first gathers a fraction of the clients' inferred
similarity matrices on a public dataset. Then FLESD ensembles the similarity
matrices and trains the global model via similarity distillation. We verify the
effectiveness of our proposed framework by a series of empirical experiments
and show that FLESD has three main advantages over existing methods: it
handles model heterogeneity, is less prone to privacy leakage, and is more
communication-efficient. We will release the code of this paper in the future.
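The communication round described in the abstract (gather clients' similarity matrices on a public dataset, ensemble them, distill into the global model) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the embeddings are random placeholders, the ensemble is plain averaging, and all function names (`similarity_matrix`, `row_softmax`, `distillation_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_matrix(embeddings):
    """Cosine-similarity matrix of L2-normalized embeddings."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return z @ z.T

# Hypothetical setup: 3 clients, each with its own (possibly different)
# encoder, infer embeddings for the same public dataset of 8 samples.
n_public, dim = 8, 16
client_embeddings = [rng.normal(size=(n_public, dim)) for _ in range(3)]

# Step 1: the server gathers each client's similarity matrix on public data.
client_sims = [similarity_matrix(e) for e in client_embeddings]

# Step 2: ensemble the similarity matrices (plain averaging here).
ensemble_sim = np.mean(client_sims, axis=0)

# Step 3: similarity distillation -- the global model would be trained so
# its own similarity rows match the ensemble; here we only evaluate the
# row-wise KL divergence that such training would minimize.
def row_softmax(s, temperature=0.1):
    e = np.exp((s - s.max(axis=1, keepdims=True)) / temperature)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_sim, teacher_sim):
    p, q = row_softmax(teacher_sim), row_softmax(student_sim)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

global_sim = similarity_matrix(rng.normal(size=(n_public, dim)))
loss = distillation_loss(global_sim, ensemble_sim)
```

Because only similarity matrices on public data leave the clients, raw client data and client model weights stay local, which is the source of the privacy and architecture-agnosticism claims.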
Related papers
- Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning [60.058083574671834]
This paper presents a novel FCCL+, federated correlation and similarity learning with non-target distillation.
For the heterogeneity issue, we leverage irrelevant unlabeled public data for communication.
For catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
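The over-the-air idea above can be illustrated with a toy simulation: all clients transmit their analog update signals simultaneously, the channel physically superimposes them into a sum, and the server rescales to get the average. The client count, noise level, and variable names below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 5 clients each hold a local model-update vector.
n_clients, dim = 5, 10
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Over-the-air computation: simultaneous analog transmissions are
# superimposed by the channel, so the server receives the *sum* of all
# updates in a single shot, plus additive receiver noise.
noise_std = 0.01
received = np.sum(updates, axis=0) + rng.normal(scale=noise_std, size=dim)

# The server rescales to recover an (approximate) average for aggregation.
aggregated = received / n_clients

exact_avg = np.mean(updates, axis=0)
```

The appeal is that communication cost per round no longer scales with the number of clients, at the price of a small noise floor in the aggregate.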
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA)
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework and can be easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- Federated Self-supervised Learning for Heterogeneous Clients [20.33482170846688]
We propose a unified and systematic framework, Heterogeneous Self-supervised Federated Learning (Hetero-SSFL), for enabling self-supervised learning with federation on heterogeneous clients.
The proposed framework allows representation learning across all clients without imposing architectural constraints or requiring the presence of labeled data.
We empirically demonstrate that our proposed approach outperforms state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2022-05-25T05:07:44Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- One-shot Federated Learning without Server-side Training [42.59845771101823]
One-shot federated learning is gaining popularity as a way to reduce communication cost between clients and the server.
Most existing one-shot FL methods are based on knowledge distillation; however, distillation-based approaches require an extra training phase and depend on publicly available datasets or generated pseudo samples.
In this work, we consider a novel and challenging cross-silo setting: performing a single round of parameter aggregation on the local models without server-side training.
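A single round of parameter aggregation without server-side training can be sketched as a weighted average of the locally trained models. The weighting by local sample counts and all names below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 4 silos, each with locally trained weights for the
# same architecture (reduced to a single weight matrix for simplicity).
local_models = [rng.normal(size=(6, 4)) for _ in range(4)]
sample_counts = np.array([100, 300, 200, 400])

# One communication round: the server aggregates parameters directly,
# weighting each silo by its local data size -- no distillation phase,
# no public or pseudo data, and no server-side training.
weights = sample_counts / sample_counts.sum()
global_model = sum(w * m for w, m in zip(weights, local_models))
```

The entire cost is one upload per silo, which is what makes the setting attractive for cross-silo deployments with expensive communication.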
arXiv Detail & Related papers (2022-04-26T01:45:37Z)
- Practical One-Shot Federated Learning for Cross-Silo Setting [114.76232507580067]
One-shot federated learning is a promising approach to making federated learning applicable in the cross-silo setting.
We propose a practical one-shot federated learning algorithm named FedKT.
By utilizing the knowledge transfer technique, FedKT can be applied to any classification models and can flexibly achieve differential privacy guarantees.
arXiv Detail & Related papers (2020-10-02T14:09:10Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.