FL-GUARD: A Holistic Framework for Run-Time Detection and Recovery of
Negative Federated Learning
- URL: http://arxiv.org/abs/2403.04146v1
- Date: Thu, 7 Mar 2024 01:52:05 GMT
- Title: FL-GUARD: A Holistic Framework for Run-Time Detection and Recovery of
Negative Federated Learning
- Authors: Hong Lin, Lidan Shou, Ke Chen, Gang Chen, Sai Wu
- Abstract summary: Federated learning (FL) is a promising approach for learning a model from data distributed on massive clients without exposing data privacy.
FL may fail to function appropriately when the federation is not ideal, amid an unhealthy state called Negative Federated Learning (NFL).
This paper introduces FL-GUARD, a holistic framework that can be employed on any FL system for tackling NFL in a run-time paradigm.
- Score: 20.681802937080523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a promising approach for learning a model from
data distributed on massive clients without exposing data privacy. It works
effectively in the ideal federation where clients share homogeneous data
distribution and learning behavior. However, FL may fail to function
appropriately when the federation is not ideal, amid an unhealthy state called
Negative Federated Learning (NFL), in which most clients gain no benefit from
participating in FL. Many studies have tried to address NFL. However, their
solutions either (1) apply preventive measures throughout the entire learning
life-cycle or (2) tackle NFL only in the aftermath of numerous learning rounds.
Thus, they either (1) indiscriminately incur extra costs even when FL could
perform well without them or (2) waste numerous learning rounds. Additionally,
none of the previous work takes into account the clients who may be
unwilling/unable to follow the proposed NFL solutions when using those
solutions to upgrade an FL system in use. This paper introduces FL-GUARD, a
holistic framework that can be employed on any FL system for tackling NFL in a
run-time paradigm. That is, to dynamically detect NFL at the early stage (tens
of rounds) of learning and then to activate recovery measures when necessary.
Specifically, we devise a cost-effective NFL detection mechanism, which relies
on an estimation of performance gain on clients. Only when NFL is detected do
we activate the NFL recovery process, in which each client learns in parallel
an adapted model while training the global model. Extensive experiment results
confirm the effectiveness of FL-GUARD in detecting NFL and recovering from NFL
to a healthy learning state. We also show that FL-GUARD is compatible with
previous NFL solutions and robust against clients unwilling/unable to take any
recovery measures.
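The run-time detection idea described in the abstract (estimate the performance gain clients obtain from federation, and trigger recovery only when that gain turns negative in the early rounds) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name, sliding-window size, and zero threshold are all hypothetical choices, and the per-client gain estimates (federated accuracy minus a standalone baseline) are assumed to be supplied by the caller.

```python
# Hypothetical sketch of run-time NFL detection: flag NFL when the
# estimated per-client performance gain from federation stays negative
# on average over a sliding window of early rounds. Window size and
# threshold are illustrative assumptions, not values from the paper.
from collections import deque

class NFLDetector:
    def __init__(self, window=10, threshold=0.0):
        self.window = window
        self.threshold = threshold
        # Keep only the most recent `window` rounds of averaged gains.
        self.gains = deque(maxlen=window)

    def record_round(self, client_gains):
        """client_gains: per-client estimates for this round of
        (accuracy with federation - accuracy of a standalone baseline)."""
        self.gains.append(sum(client_gains) / len(client_gains))

    def nfl_detected(self):
        # Only decide once a full window of rounds has been observed.
        if len(self.gains) < self.window:
            return False
        avg = sum(self.gains) / len(self.gains)
        return avg < self.threshold
```

In this sketch, recovery (e.g., training an adapted local model alongside the global one) would be activated by the server or clients only after `nfl_detected()` returns true, so a healthy federation pays no extra cost.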
Related papers
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion [48.90879664138855]
One-shot Federated Learning (OFL) significantly reduces communication costs in FL by aggregating trained models only once.
However, the performance of advanced OFL methods is far behind the normal FL.
We propose a novel learning approach to endow OFL with superb performance and low communication and storage costs, termed as FuseFL.
arXiv Detail & Related papers (2024-10-27T09:07:10Z)
- Hire When You Need to: Gradual Participant Recruitment for Auction-based Federated Learning [16.83897148104]
We propose a Gradual Participant Selection scheme for Auction-based Federated Learning (GPS-AFL).
GPS-AFL gradually selects the required DOs over multiple rounds of training as more information is revealed through repeated interactions.
It is designed to strike a balance between cost saving and performance enhancement, while mitigating the drawbacks of selection bias in reputation-based FL.
arXiv Detail & Related papers (2023-10-04T08:19:04Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
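The FedBasis idea of forming a personalized model as a linear combination of shared basis models can be illustrated with a toy sketch. The function name, shapes, and numbers below are assumptions for illustration, not the paper's code.

```python
# Toy sketch: a client's personalized model is a linear combination of
# shared "basis" weight vectors; only the K combination coefficients
# are learned per client, not the full model weights.
import numpy as np

def combine_bases(bases, coeffs):
    """bases: (K, D) array of K shared basis models (flattened weights);
    coeffs: (K,) per-client combination coefficients."""
    return coeffs @ bases  # personalized (D,) weight vector

bases = np.array([[1.0, 0.0], [0.0, 1.0]])  # K=2 bases, D=2 weights
coeffs = np.array([0.3, 0.7])               # learned per client
personalized = combine_bases(bases, coeffs)  # -> [0.3, 0.7]
```

The communication saving follows from the shapes: a new client transmits and learns only K coefficients rather than D model weights, with K much smaller than D in practice.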
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed as a practical way to apply VFL in industry.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
- FAIR-BFL: Flexible and Incentive Redesign for Blockchain-based Federated Learning [19.463891024499773]
Vanilla Federated Learning (FL) relies on a centralized global aggregation mechanism and assumes that all clients are honest.
This makes it challenging for FL to mitigate the single point of failure and cope with dishonest clients.
We design and evaluate FAIR-BFL, a novel BFL framework that resolves the identified challenges in vanilla BFL with greater flexibility and an incentive mechanism.
arXiv Detail & Related papers (2022-06-26T15:20:45Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose BLADE-FL, a novel framework that integrates blockchain into Federated Learning (FL).
BLADE-FL achieves good performance in terms of privacy preservation, tamper resistance, and effective cooperation in learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
- A Systematic Literature Review on Federated Learning: From A Model Quality Perspective [10.725466627592732]
Federated Learning (FL) can jointly train a global model with the data remaining locally.
This paper systematically reviews and objectively analyzes the approaches to improving the quality of FL models.
arXiv Detail & Related papers (2020-12-01T05:48:36Z)
- LINDT: Tackling Negative Federated Learning with Local Adaptation [18.33409148798824]
We propose a novel framework called LINDT for tackling NFL at run-time.
We introduce a metric for detecting NFL from the server.
Experiment results show that the proposed approach can significantly improve the performance of FL on local data.
arXiv Detail & Related papers (2020-11-23T01:31:18Z)
- BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture [11.908715869667445]
Federated Learning (FL) is a distributed and decentralized machine learning protocol.
It has been shown that an attacker can inject backdoors into the trained model during FL.
We develop a hybrid blockchain-based FL framework that uses smart contracts to automatically detect and punish attackers.
arXiv Detail & Related papers (2020-10-14T22:43:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.