FedX: Unsupervised Federated Learning with Cross Knowledge Distillation
- URL: http://arxiv.org/abs/2207.09158v1
- Date: Tue, 19 Jul 2022 09:56:44 GMT
- Title: FedX: Unsupervised Federated Learning with Cross Knowledge Distillation
- Authors: Sungwon Han, Sungwon Park, Fangzhao Wu, Sundong Kim, Chuhan Wu, Xing Xie and Meeyoung Cha
- Abstract summary: FedX is an unsupervised federated learning framework.
It learns unbiased representation from decentralized and heterogeneous local data.
It can be used as an add-on module for existing unsupervised algorithms in federated settings.
- Score: 70.73383643450548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents FedX, an unsupervised federated learning framework. Our
model learns unbiased representation from decentralized and heterogeneous local
data. It employs a two-sided knowledge distillation with contrastive learning
as a core component, allowing the federated system to function without
requiring clients to share any data features. Furthermore, its adaptable
architecture can be used as an add-on module for existing unsupervised
algorithms in federated settings. Experiments show that our model improves
performance significantly (1.58--5.52pp) on five unsupervised algorithms.
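To make the core idea more concrete, here is a minimal sketch of contrastive training combined with relational knowledge distillation from a frozen global model, in the spirit of the abstract above. All specifics (function names, the NT-Xent loss, the anchor set, the temperatures, and equal loss weights) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive (NT-Xent-style) loss between two augmented views (B x D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (B, B) cross-view similarities
    labels = torch.arange(z1.size(0))             # matching pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

def relation_distill_loss(z_student, z_teacher, anchors, temperature=0.1):
    """Match the student's similarity distribution over anchors to the teacher's."""
    a = F.normalize(anchors, dim=1)
    sim_s = F.normalize(z_student, dim=1) @ a.t()
    sim_t = F.normalize(z_teacher, dim=1) @ a.t()
    return F.kl_div(F.log_softmax(sim_s / temperature, dim=1),
                    F.softmax(sim_t / temperature, dim=1),
                    reduction="batchmean")

def fedx_style_loss(local_z1, local_z2, global_z1, global_z2, anchors):
    # Contrastive objective between two views of the local model.
    loss = nt_xent_loss(local_z1, local_z2)
    # Distill relational knowledge from the frozen global model on both views
    # ("two-sided" is interpreted here as distilling on each view; an assumption).
    loss = loss + relation_distill_loss(local_z1, global_z1.detach(), anchors)
    loss = loss + relation_distill_loss(local_z2, global_z2.detach(), anchors)
    return loss
```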
Related papers
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates weights as an agreement on decision boundaries on feature spaces.
We demonstrate that it outperforms current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
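A hedged sketch of the classifier-weight averaging that the FedClassAvg entry describes: heterogeneous feature extractors stay local, and only the identically shaped classifier heads are aggregated. The names and the plain mean are assumptions for illustration.

```python
import torch

def average_classifier_heads(client_heads):
    """client_heads: list of state_dicts of identically shaped classifier heads."""
    avg = {}
    for key in client_heads[0]:
        avg[key] = torch.stack([h[key].float() for h in client_heads]).mean(dim=0)
    return avg

# Each client keeps its own feature extractor and loads only the averaged head,
# so decision boundaries agree while representations stay personalized, e.g.:
# client_model.classifier.load_state_dict(average_classifier_heads(heads))
```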
- Fed2: Feature-Aligned Federated Learning [32.54574459692627]
Federated learning learns from scattered data by fusing collaborative models from local nodes, but naive weight fusion can misalign the features that different local models learn.
We propose Fed2, a feature-aligned federated learning framework that resolves this issue by establishing a firm structure-feature alignment.
Fed2 can effectively enhance federated learning convergence performance under extensive homogeneous and heterogeneous settings.
arXiv Detail & Related papers (2021-11-28T22:21:48Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
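The CCVR entry above describes calibrating a classifier with virtual representations sampled from an approximated Gaussian mixture. Below is a minimal sketch of that procedure, assuming per-class feature means and covariances are available (e.g., aggregated from clients); the optimizer, sample counts, and training loop are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def calibrate_classifier(classifier, class_means, class_covs,
                         n_per_class=200, epochs=10, lr=0.01):
    """Re-fit a classifier on virtual features drawn from per-class Gaussians."""
    feats, labels = [], []
    for c, (mu, cov) in enumerate(zip(class_means, class_covs)):
        dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)
        feats.append(dist.sample((n_per_class,)))              # virtual features
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    x, y = torch.cat(feats), torch.cat(labels)
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):                                    # only the classifier trains
        opt.zero_grad()
        F.cross_entropy(classifier(x), y).backward()
        opt.step()
    return classifier
```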
- BCFNet: A Balanced Collaborative Filtering Network with Attention Mechanism [106.43103176833371]
Collaborative Filtering (CF) based recommendation methods have been widely studied.
We propose a novel recommendation model named Balanced Collaborative Filtering Network (BCFNet).
In addition, an attention mechanism is designed to better capture the hidden information within implicit feedback and strengthen the learning ability of the neural network.
arXiv Detail & Related papers (2021-03-10T14:59:23Z)
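As a rough illustration of the attention idea in the BCFNet entry, the module below scores a user's interacted items and pools them into a weighted profile. The architecture and names are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ImplicitFeedbackAttention(nn.Module):
    """Pools a user's interacted-item embeddings into one weighted profile."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # learns how informative each item is

    def forward(self, item_embs):
        # item_embs: (batch, n_items, dim) embeddings from implicit feedback
        weights = torch.softmax(self.score(item_embs), dim=1)   # attention weights
        return (weights * item_embs).sum(dim=1)                 # (batch, dim) profile
```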
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach that allows knowledge to be shared between trained models.
Students are trained on the output of their teachers via synthetically generated input data.
The results show that an untrained student model, trained on the teacher's outputs, reaches F1-scores comparable to the teacher's.
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
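The entry above trains students on teacher outputs over synthetically generated inputs. Here is a minimal sketch of that transfer, assuming Gaussian noise as the synthetic input and a KL objective; both choices are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distill_from_teacher(student, teacher, input_shape,
                         steps=1000, batch=64, lr=1e-3):
    """Train a student to match a teacher's outputs on synthetic inputs only."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        x = torch.randn(batch, *input_shape)            # synthetic input data
        with torch.no_grad():
            target = F.softmax(teacher(x), dim=1)       # teacher's soft labels
        loss = F.kl_div(F.log_softmax(student(x), dim=1),
                        target, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```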
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- FedSemi: An Adaptive Federated Semi-Supervised Learning Framework [23.90642104477983]
Federated learning (FL) has emerged as an effective technique to collaboratively train machine learning models without sharing data or leaking privacy.
Most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data.
We propose FedSemi, a novel, adaptive, and general framework, which is the first to introduce consistency regularization into FL using a teacher-student model.
arXiv Detail & Related papers (2020-12-06T15:46:04Z)
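To illustrate the teacher-student consistency regularization the FedSemi entry mentions, here is a common-convention sketch: the teacher produces pseudo-targets on weakly augmented unlabeled data, the student matches them on strong augmentations, and the teacher tracks an EMA of the student. None of these specifics are claimed to match the paper.

```python
import torch
import torch.nn.functional as F

def consistency_loss(student, teacher, x_unlabeled, weak_aug, strong_aug):
    """Student on strong augmentations matches teacher on weak ones."""
    with torch.no_grad():
        target = F.softmax(teacher(weak_aug(x_unlabeled)), dim=1)
    pred = F.log_softmax(student(strong_aug(x_unlabeled)), dim=1)
    return F.kl_div(pred, target, reduction="batchmean")

def ema_update(teacher, student, decay=0.99):
    """Teacher tracks an exponential moving average of the student's weights."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1 - decay)
```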
- Federated Learning System without Model Sharing through Integration of Dimensional Reduced Data Representations [6.9485501711137525]
We explore an alternative federated learning system that enables the integration of dimensionality-reduced representations of distributed data prior to a supervised learning task.
We compare the performance of this approach on image classification tasks to three alternative frameworks: centralized machine learning, individual machine learning, and Federated Averaging.
Our results show that our approach can achieve similar accuracy as Federated Averaging and performs better than Federated Averaging in a small-user setting.
arXiv Detail & Related papers (2020-11-13T08:12:00Z)
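A rough sketch of the pipeline the entry above describes: clients share only dimensionality-reduced representations, and the server trains a supervised model on their integration. PCA and logistic regression are stand-in assumptions; the paper's exact reduction method may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def client_reduce(X, n_components=32):
    """Each client shares only a low-dimensional representation of its data."""
    return PCA(n_components=n_components).fit_transform(X)

def server_train(client_reps, client_labels):
    """Server trains a supervised model on the integrated representations.
    Note: independently fitted PCAs live in different subspaces; the paper's
    setting would require aligning them (e.g., via shared anchor data)."""
    X = np.vstack(client_reps)
    y = np.concatenate(client_labels)
    return LogisticRegression(max_iter=1000).fit(X, y)
```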
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
The proposed method, FedCA, is composed of two key modules: a dictionary module that aggregates sample representations from each client and shares them with all clients to keep the representation space consistent, and an alignment module that aligns each client's representations with those of a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
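The FURL entry above names two FedCA modules. Below is a hedged sketch of plausible objectives for them: a contrastive loss that uses a shared dictionary of representations, and an alignment loss toward a base model trained on public data. The exact losses are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dictionary_contrastive_loss(z, z_pos, dictionary, temperature=0.1):
    """z, z_pos: (B, D) two views; dictionary: (K, D) shared representations
    gathered from clients, used here as a common set of negatives."""
    z, z_pos = F.normalize(z, dim=1), F.normalize(z_pos, dim=1)
    d = F.normalize(dictionary, dim=1)
    pos = (z * z_pos).sum(dim=1, keepdim=True)          # (B, 1) positive similarity
    neg = z @ d.t()                                     # (B, K) dictionary similarities
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(z.size(0), dtype=torch.long)   # the positive is at index 0
    return F.cross_entropy(logits, labels)

def alignment_loss(z_client, z_base):
    """Pull client representations toward those of the public-data base model."""
    return F.mse_loss(z_client, z_base.detach())
```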
- Robustness and Personalization in Federated Learning: A Unified Approach via Regularization [4.7234844467506605]
We present a class of methods for robust, personalized federated learning, called Fed+.
The principal advantage of Fed+ is that it better accommodates the real-world characteristics found in federated training.
We demonstrate the benefits of Fed+ through extensive experiments on benchmark datasets.
arXiv Detail & Related papers (2020-09-14T10:04:30Z)
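As a final illustration, here is a minimal sketch of a regularized local update in the spirit of the Fed+ entry: each client minimizes its own loss plus a penalty that keeps its weights near a central point, rather than being forced onto a single consensus model. The quadratic penalty form and mu are illustrative assumptions.

```python
import torch

def fedplus_local_step(model, central_params, loss_fn, batch, opt, mu=0.1):
    """One local update: task loss plus a pull toward the central point."""
    x, y = batch
    task_loss = loss_fn(model(x), y)
    prox = sum(((p - c.detach()) ** 2).sum()
               for p, c in zip(model.parameters(), central_params))
    (task_loss + 0.5 * mu * prox).backward()
    opt.step()
    opt.zero_grad()
```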
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.