RC-SSFL: Towards Robust and Communication-efficient Semi-supervised
Federated Learning System
- URL: http://arxiv.org/abs/2012.04432v1
- Date: Tue, 8 Dec 2020 14:02:56 GMT
- Title: RC-SSFL: Towards Robust and Communication-efficient Semi-supervised
Federated Learning System
- Authors: Yi Liu, Xingliang Yuan, Ruihui Zhao, Yifeng Zheng, Yefeng Zheng
- Abstract summary: Federated Learning (FL) is an emerging decentralized artificial intelligence paradigm.
Current systems rely heavily on a strong assumption: all clients have a wealth of ground truth labeled data.
We present a practical Robust and Communication-efficient Semi-supervised FL (RC-SSFL) system design.
- Score: 25.84191221776459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is an emerging decentralized artificial
intelligence paradigm that promises to train a high-quality shared global
model while protecting user data privacy. However, current systems rely
heavily on a strong assumption: all clients have a wealth of ground-truth
labeled data, which may not always be feasible in real life. In this paper,
we present a practical Robust and Communication-efficient Semi-supervised FL
(RC-SSFL) system design that enables clients to jointly learn a high-quality
model whose performance is comparable to that of typical FL. In this setting,
we assume that each client has only unlabeled data and the server has a
limited amount of labeled data. Besides, we consider that malicious clients
can launch poisoning attacks to harm the performance of the global model. To
address this, RC-SSFL employs a minimax optimization-based client selection
strategy to select clients holding high-quality updates and uses geometric
median aggregation to robustly aggregate model updates. Furthermore, RC-SSFL
implements a novel symmetric quantization method to greatly improve
communication efficiency. Extensive case studies on two real-world datasets
demonstrate that RC-SSFL maintains performance comparable to typical FL in
the presence of poisoning attacks and reduces communication overhead by
$2\times$ to $4\times$.
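The abstract names geometric median aggregation as RC-SSFL's robust aggregation rule but gives no procedure. As a generic illustration (not necessarily the paper's exact algorithm), the Python sketch below computes the geometric median of flattened client updates via Weiszfeld's iteration; the function name, iteration count, and tolerance are our own assumptions:

    import numpy as np

    def geometric_median(updates, n_iters=100, eps=1e-8):
        # Weiszfeld's algorithm: the geometric median minimizes the sum of
        # Euclidean distances to all points, so a few poisoned outlier
        # updates pull it far less than they pull the coordinate-wise mean.
        points = np.stack(updates)          # (n_clients, n_params)
        median = points.mean(axis=0)        # start from the plain average
        for _ in range(n_iters):
            dists = np.linalg.norm(points - median, axis=1)
            dists = np.maximum(dists, eps)  # avoid division by zero
            weights = 1.0 / dists
            new_median = (weights[:, None] * points).sum(axis=0) / weights.sum()
            if np.linalg.norm(new_median - median) < eps:
                break
            median = new_median
        return median

    # Three honest updates near [1, 1] and one poisoned outlier:
    honest = [np.array([1.0, 1.1]), np.array([0.9, 1.0]), np.array([1.1, 0.9])]
    poisoned = [np.array([50.0, -50.0])]
    print(geometric_median(honest + poisoned))  # stays close to [1, 1]
    print(np.mean(honest + poisoned, axis=0))   # dragged toward the outlier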
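The abstract likewise does not detail the symmetric quantization scheme. Below is a minimal sketch of generic uniform symmetric quantization, which conveys where savings of this kind come from: each client ships an n-bit integer tensor plus a single float scale instead of 32-bit floats. The function names and the zero-update guard are assumptions, not the paper's API:

    import numpy as np

    def symmetric_quantize(x, n_bits=8):
        # Map floats in [-max|x|, +max|x|] onto signed integers in
        # [-(2^(n_bits-1) - 1), +(2^(n_bits-1) - 1)].
        scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)
        scale = max(scale, 1e-12)  # guard against an all-zero update
        q = np.round(x / scale).astype(np.int8 if n_bits <= 8 else np.int16)
        return q, scale

    def symmetric_dequantize(q, scale):
        # Server-side reconstruction of the approximate update.
        return q.astype(np.float32) * scale

    update = np.random.randn(10).astype(np.float32)
    q, s = symmetric_quantize(update, n_bits=8)
    print(np.max(np.abs(update - symmetric_dequantize(q, s))))  # <= s / 2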
Related papers
- BlindFL: Segmented Federated Learning with Fully Homomorphic Encryption [0.0]
Federated learning (FL) is a privacy-preserving edge-to-cloud technique used for training and deploying AI models on edge devices.
BlindFL is a framework for global model aggregation in which clients encrypt and send a subset of their local model update.
BlindFL significantly impedes client-side model poisoning attacks, a first for single-key, FHE-based FL schemes.
arXiv Detail & Related papers (2025-01-20T18:42:21Z)
- Communication-Efficient Federated Learning Based on Explanation-Guided Pruning for Remote Sensing Image Classification [2.725507329935916]
We introduce an explanation-guided pruning strategy for communication-efficient Federated Learning (FL).
Our strategy effectively reduces the volume of shared model updates while increasing the generalization ability of the global model.
The code of this work will be publicly available at https://git.tu-berlin.de/rsim/FL-LRP.
arXiv Detail & Related papers (2025-01-20T13:59:41Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters [3.9660142560142067]
Federated Learning (FL) enables model training across decentralized devices by communicating solely local model updates to an aggregation server.
FL remains vulnerable to inference attacks during model update transmissions.
We present EncCluster, a novel method that integrates model compression through weight clustering with recent decentralized Functional Encryption (FE) and privacy-enhancing data encoding.
arXiv Detail & Related papers (2024-06-13T14:16:50Z)
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework (sketched below), where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
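For context, and as a hypothetical sketch rather than FedAF's own method, one aggregate-then-adapt round in the style of FedAvg looks roughly like this; `local_train` and the sample-count weighting are assumed helpers:

    import numpy as np

    def fedavg_round(global_model, client_datasets, local_train):
        # Adapt: each client trains from the latest global model.
        # Aggregate: the server averages the results by sample count.
        updates, sizes = [], []
        for data in client_datasets:
            updates.append(local_train(np.copy(global_model), data))
            sizes.append(len(data))
        weights = np.array(sizes, dtype=np.float64) / sum(sizes)
        return sum(w * u for w, u in zip(weights, updates))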
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
However, FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique to improve current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- User-Centric Federated Learning: Trading off Wireless Resources for Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm's convergence time and reduces its generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
arXiv Detail & Related papers (2023-04-25T15:45:37Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm $\texttt{AdaFL}$ to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL (sketched below), each client broadcasts its trained model to the other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training in the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL and characterize the relationship among the optimal $K$, the learning parameters, and the proportion of lazy clients.
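Under a simplified reading of the round described above, a hypothetical one-round sketch follows; `mine_block` (standing in for the block-generation competition) and `local_train` are placeholder helpers, not the paper's interface:

    import numpy as np

    def blade_fl_round(broadcast_models, mine_block, local_train, datasets):
        # One client wins the competition and packs the received models
        # into a block; every client then aggregates that block's models
        # before its local training for the next round.
        block = mine_block(broadcast_models)
        aggregated = np.mean(np.stack(block), axis=0)
        return [local_train(np.copy(aggregated), d) for d in datasets]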
arXiv Detail & Related papers (2021-01-18T07:19:08Z)