RSCFed: Random Sampling Consensus Federated Semi-supervised Learning
- URL: http://arxiv.org/abs/2203.13993v1
- Date: Sat, 26 Mar 2022 05:10:44 GMT
- Title: RSCFed: Random Sampling Consensus Federated Semi-supervised Learning
- Authors: Xiaoxiao Liang, Yiqun Lin, Huazhu Fu, Lei Zhu, Xiaomeng Li
- Abstract summary: Federated semi-supervised learning (FSSL) aims to derive a global model by training fully-labeled and fully-unlabeled clients or training partially labeled clients.
We present Random Sampling Consensus Federated learning (RSCFed), which accounts for the uneven reliability of models from fully-labeled, fully-unlabeled, or partially labeled clients.
- Score: 40.998176838813045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated semi-supervised learning (FSSL) aims to derive a global model by
training fully-labeled and fully-unlabeled clients or training partially
labeled clients. The existing approaches work well when local clients have
independent and identically distributed (IID) data but fail to generalize to a
more practical FSSL setting, i.e., the non-IID setting. In this paper, we present
Random Sampling Consensus Federated learning (RSCFed), which accounts for the
uneven reliability of models from fully-labeled, fully-unlabeled, or partially
labeled clients. Our key motivation is that, given models with large deviations
from either labeled or unlabeled clients, a consensus can be reached by
performing random sub-sampling over clients. To achieve this, instead of
directly aggregating local models, we first distill several sub-consensus
models by randomly sub-sampling clients and then aggregate the sub-consensus
models into the global model. To enhance the robustness of the sub-consensus
models, we also develop a novel distance-reweighted model aggregation method.
Experimental results show that our method outperforms state-of-the-art methods
on three benchmark datasets, including both natural and medical images. The
code is available at
https://github.com/XMed-Lab/RSCFed.
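The two mechanisms described above, random sub-sampling into sub-consensus models and distance-reweighted aggregation, can be sketched briefly. The following is a minimal NumPy illustration, not the paper's implementation: the inverse-distance weighting, `num_subsets`, and `sub_sample_size` are assumptions for illustration, and the paper defines the exact reweighting scheme.

```python
import numpy as np

def distance_reweighted_average(models):
    """Average client models, down-weighting those far from the plain mean.

    `models` is a list of 1-D parameter vectors. Weights are proportional
    to the inverse distance to the unweighted mean (an assumption; the
    paper specifies the exact reweighting).
    """
    mean = np.mean(models, axis=0)
    dists = np.array([np.linalg.norm(m - mean) for m in models])
    weights = 1.0 / (dists + 1e-8)          # closer models count more
    weights /= weights.sum()
    return np.average(models, axis=0, weights=weights)

def rscfed_round(client_models, num_subsets=3, sub_sample_size=5, seed=0):
    """One global round: distill sub-consensus models from random client
    subsets, then average them into the new global model."""
    rng = np.random.default_rng(seed)
    sub_consensus = []
    for _ in range(num_subsets):
        idx = rng.choice(len(client_models), size=sub_sample_size, replace=False)
        subset = [client_models[i] for i in idx]
        sub_consensus.append(distance_reweighted_average(subset))
    return np.mean(sub_consensus, axis=0)

# Toy usage: 10 clients with 4-parameter models, one strongly deviating client.
clients = [np.ones(4) + 0.01 * np.random.randn(4) for _ in range(9)]
clients.append(np.full(4, 10.0))            # deviating (e.g., unlabeled) client
print(rscfed_round(clients))
```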
Related papers
- Learning Unlabeled Clients Divergence for Federated Semi-Supervised Learning via Anchor Model Aggregation [10.282711631100845]
SemiAnAgg learns unlabeled client contributions via an anchor model.
SemiAnAgg achieves new state-of-the-art results on four widely used FedSemi benchmarks.
arXiv Detail & Related papers (2024-07-14T20:50:40Z)
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge-sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned at most one model per level, and its personalized prediction sums the outputs of the models assigned to it across all levels (see the sketch after this entry).
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
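A minimal illustration of FeMAM's additive prediction rule from the summary above: each client holds at most one model per level and sums their outputs. The linear models, level layout, and assignment table here are hypothetical; FeMAM's assignment and training procedures are not shown.

```python
import numpy as np

# Hypothetical setup: 3 levels; level 0 is a shared root model, deeper
# levels hold group- or client-specific models. Each model is a linear map.
rng = np.random.default_rng(0)
levels = [
    {"root": rng.standard_normal((4, 2))},                 # level 0: all clients
    {"groupA": rng.standard_normal((4, 2)),
     "groupB": rng.standard_normal((4, 2))},               # level 1: clusters
    {"client3": rng.standard_normal((4, 2))},              # level 2: one client
]

# Each client is assigned at most one model per level.
assignment = {"client3": ["root", "groupA", "client3"],
              "client7": ["root", "groupB"]}               # no level-2 model

def femam_predict(client, x):
    """Personalized prediction: sum the outputs of every model assigned
    to this client across all levels (the MAM additive rule)."""
    y = np.zeros(2)
    for level, name in zip(levels, assignment[client]):
        y += x @ level[name]
    return y

x = rng.standard_normal(4)
print(femam_predict("client3", x))  # sums three models
print(femam_predict("client7", x))  # sums two models
```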
- LEFL: Low Entropy Client Sampling in Federated Learning [6.436397118145477]
Federated learning (FL) is a machine learning paradigm where multiple clients collaborate to optimize a single global model using their private data.
We propose LEFL, an alternative sampling strategy that performs a one-time clustering of clients based on their models' learned high-level features.
We show that clients sampled with this approach yield a low relative entropy with respect to the global data distribution (see the sketch after this entry).
arXiv Detail & Related papers (2023-12-29T01:44:20Z)
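A hedged sketch of the LEFL idea above: cluster clients once, sample across clusters, and check the relative entropy of the sampled distribution against the global one. The per-client label histograms stand in for the learned high-level features LEFL actually clusters on, and the one-client-per-cluster sampler is an assumption.

```python
import numpy as np
from scipy.cluster.vq import kmeans2     # one-time clustering of clients
from scipy.stats import entropy          # relative entropy (KL divergence)

rng = np.random.default_rng(1)

# Hypothetical per-client label histograms over 5 classes (LEFL clusters
# on learned model features; label histograms are a stand-in here).
client_hists = rng.integers(1, 50, size=(20, 5)).astype(float)
client_dists = client_hists / client_hists.sum(axis=1, keepdims=True)
global_dist = client_hists.sum(axis=0) / client_hists.sum()

# One-time clustering, then sample one client from each non-empty cluster.
_, cluster_ids = kmeans2(client_dists, k=5, minit="++", seed=2)
sampled = [rng.choice(np.where(cluster_ids == c)[0])
           for c in np.unique(cluster_ids)]

# Lower relative entropy means the sample better matches the global data.
pooled = client_hists[sampled].sum(axis=0)
pooled /= pooled.sum()
print("relative entropy vs. global:", entropy(pooled, global_dist))
```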
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally-trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., achieving a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedDRL: A Trustworthy Federated Learning Model Fusion Method Based on Staged Reinforcement Learning [7.846139591790014]
We propose FedDRL, a two-stage model fusion approach based on staged reinforcement learning.
In the first stage, our method filters out malicious models and selects trusted client models to participate in model fusion.
In the second stage, FedDRL adaptively adjusts the weights of the trusted client models and aggregates them into an optimal global model (see the sketch after this entry).
arXiv Detail & Related papers (2023-07-25T17:24:32Z)
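A purely illustrative stand-in for the two-stage fusion above, with no reinforcement learning: stage one filters models far from the coordinate-wise median (a common robust-aggregation heuristic, not necessarily FedDRL's criterion), and stage two weights the survivors by inverse distance. FedDRL itself learns both decisions with a staged RL policy.

```python
import numpy as np

def two_stage_fusion(models, filter_factor=2.0):
    """Illustrative two-stage fusion (not FedDRL's learned policy).

    Stage 1: drop models whose distance to the coordinate-wise median
             exceeds filter_factor * the median distance (assumed filter).
    Stage 2: weight the remaining models by inverse distance and average.
    """
    models = np.asarray(models)
    center = np.median(models, axis=0)
    dists = np.linalg.norm(models - center, axis=1)
    keep = dists <= filter_factor * np.median(dists)
    trusted, tdists = models[keep], dists[keep]
    weights = 1.0 / (tdists + 1e-8)
    weights /= weights.sum()
    return weights @ trusted

# Toy usage: 8 honest clients plus 2 malicious ones with shifted weights.
rng = np.random.default_rng(3)
honest = rng.normal(0.0, 0.1, size=(8, 6))
malicious = rng.normal(5.0, 0.1, size=(2, 6))
print(two_stage_fusion(np.vstack([honest, malicious])))
```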
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel data-uniform sampling strategy for federated learning (see the sketch after this entry).
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
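A hedged sketch of the client-uniform vs. data-uniform contrast above: selecting clients with probability proportional to local data size makes each individual sample roughly equally likely to participate. The sizes are toy values, and FedSampling's privacy-preserving estimation of them is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical local dataset sizes for 10 clients.
sizes = np.array([500, 40, 80, 1200, 60, 300, 90, 700, 20, 10], dtype=float)

def client_uniform(k):
    """Baseline: every client equally likely, regardless of data size."""
    return rng.choice(len(sizes), size=k, replace=False)

def data_uniform(k):
    """Data-uniform: selection probability proportional to data size, so
    each individual sample has roughly equal chance of participating."""
    p = sizes / sizes.sum()
    return rng.choice(len(sizes), size=k, replace=False, p=p)

print("client-uniform:", client_uniform(3))
print("data-uniform:  ", data_uniform(3))
```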
- Personalized Federated Learning through Local Memorization [10.925242558525683]
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
Recent personalized federated learning methods train a separate model for each client while still leveraging the knowledge available at other clients.
We show on a suite of federated datasets that this approach achieves significantly higher accuracy and fairness than state-of-the-art methods.
arXiv Detail & Related papers (2021-11-17T19:40:07Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we find a small subnetwork for each client (see the sketch after this entry).
arXiv Detail & Related papers (2021-05-02T22:10:46Z)
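An illustrative take on per-client subnetworks from the entry above, using magnitude pruning as an assumed stand-in for the paper's structured and unstructured criteria; each client would derive its own mask from its local data or objective.

```python
import numpy as np

def personalize_by_pruning(global_weights, keep_ratio=0.2):
    """Return a binary mask keeping the top-|w| fraction of weights.

    Magnitude pruning is an assumed stand-in for the paper's structured/
    unstructured criteria; each client would compute its own mask.
    """
    flat = np.abs(global_weights).ravel()
    threshold = np.quantile(flat, 1.0 - keep_ratio)
    return (np.abs(global_weights) >= threshold).astype(float)

rng = np.random.default_rng(5)
w = rng.standard_normal((8, 8))
mask = personalize_by_pruning(w, keep_ratio=0.2)
personalized = w * mask                     # client trains only these weights
print("kept fraction:", mask.mean())
```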
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates sample representations from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)