Toward Smart Security Enhancement of Federated Learning Networks
- URL: http://arxiv.org/abs/2008.08330v1
- Date: Wed, 19 Aug 2020 08:46:39 GMT
- Title: Toward Smart Security Enhancement of Federated Learning Networks
- Authors: Junjie Tan, Ying-Chang Liang, Nguyen Cong Luong, Dusit Niyato
- Abstract summary: In this paper, we review the vulnerabilities of federated learning networks (FLNs) and give an overview of poisoning attacks.
We present a smart security enhancement framework for FLNs.
Deep reinforcement learning is applied to learn the behavioral patterns of the edge devices (EDs) that can provide benign training results.
- Score: 109.20054130698797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As traditional centralized learning networks (CLNs) are facing increasing
challenges in terms of privacy preservation, communication overheads, and
scalability, federated learning networks (FLNs) have been proposed as a
promising alternative paradigm to support the training of machine learning (ML)
models. In contrast to the centralized data storage and processing in CLNs,
FLNs exploit a number of edge devices (EDs) to store data and perform training
distributively. In this way, the EDs in FLNs can keep training data locally,
which preserves privacy and reduces communication overheads. However, since the
model training within FLNs relies on the contribution of all EDs, the training
process can be disrupted if some of the EDs upload incorrect or falsified
training results, i.e., poisoning attacks. In this paper, we review the
vulnerabilities of FLNs, and particularly give an overview of poisoning attacks
and mainstream countermeasures. Nevertheless, the existing countermeasures can
only provide passive protection and fail to consider the training fees paid for
the contributions of the EDs, resulting in an unnecessarily high training cost.
Hence, we present a smart security enhancement framework for FLNs. In
particular, a verify-before-aggregate (VBA) procedure is developed to identify
and remove the non-benign training results from the EDs. Afterward, deep
reinforcement learning (DRL) is applied to learn the behavioral patterns of the
EDs and to actively select the EDs that can provide benign training results and
charge low training fees. Simulation results reveal that the proposed framework
can protect FLNs effectively and efficiently.
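The abstract stops short of implementation details, so the sketch below is only a rough illustration of the two ideas it names: a verify-before-aggregate (VBA) check that drops updates failing a held-out validation test, and a learned selector that favors EDs with a history of benign, low-fee contributions. The linear model, the loss-tolerance rule, and the epsilon-greedy selector (a bandit stand-in for the paper's DRL agent) are all assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def verify_before_aggregate(updates, global_w, val_x, val_y, tol=0.1):
    """Keep only updates that do not degrade held-out loss by more than `tol`.

    `updates` is a dict {ed_id: weight_vector}; the least-squares loss on a
    linear model is a stand-in for the real task."""
    def loss(w):
        return float(np.mean((val_x @ w - val_y) ** 2))
    base = loss(global_w)
    benign = {ed: w for ed, w in updates.items() if loss(w) <= base + tol}
    # FedAvg-style aggregation over the verified (benign) updates only.
    if benign:
        global_w = np.mean(list(benign.values()), axis=0)
    return global_w, set(benign)

class EpsilonGreedySelector:
    """Toy stand-in for the DRL agent: learns which EDs tend to pass
    verification cheaply, and prefers them when choosing participants."""
    def __init__(self, n_eds, fees, eps=0.1):
        self.q = np.zeros(n_eds)      # estimated value of selecting each ED
        self.n = np.zeros(n_eds)      # selection counts
        self.fees = np.asarray(fees)  # per-round training fee of each ED
        self.eps = eps

    def select(self, k):
        if rng.random() < self.eps:   # explore
            return rng.choice(len(self.q), size=k, replace=False)
        return np.argsort(self.q)[-k:]  # exploit: top-k estimated EDs

    def update(self, chosen, passed):
        for ed in chosen:
            # Reward benign results, penalize by the fee paid.
            r = (1.0 if ed in passed else -1.0) - 0.5 * self.fees[ed]
            self.n[ed] += 1
            self.q[ed] += (r - self.q[ed]) / self.n[ed]
```

In the paper the selector is a DRL agent over a richer state; the bandit above only conveys the select-observe-update loop that couples VBA outcomes to future ED selection.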
Related papers
- Distributed Intrusion Detection in Dynamic Networks of UAVs using Few-Shot Federated Learning [1.0923877073891446]
Intrusion detection in Flying Ad Hoc Networks (FANETs) is challenging due to communication costs and privacy concerns.
While Federated Learning (FL) holds promise for intrusion detection in FANETs, it also faces drawbacks such as large data requirements, power consumption, and time constraints.
We propose Few-shot Federated Learning-based IDS (FSFL-IDS) to tackle intrusion detection challenges such as privacy, power constraints, communication costs, and lossy links.
arXiv Detail & Related papers (2025-01-22T20:55:46Z) - Federated Learning with Workload Reduction through Partial Training of Client Models and Entropy-Based Data Selection [3.9981390090442694]
We propose FedFT-EDS, a novel approach that combines Fine-Tuning of partial client models with Entropy-based Data Selection to reduce training workloads on edge devices.
Our experiments show that FedFT-EDS uses only 50% of user data while improving global model performance compared to the baseline methods FedAvg and FedProx.
FedFT-EDS improves client learning efficiency by up to 3 times, using one third of the training time on clients to achieve performance equivalent to the baselines.
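The summary does not state the selection criterion precisely; a common reading of entropy-based data selection is to keep the samples on which the current model is most uncertain. A minimal sketch under that assumption (the helper name and the 50% fraction are illustrative):

```python
import numpy as np

def entropy_select(probs, fraction=0.5):
    """Keep the `fraction` of samples with the highest predictive entropy.

    `probs` holds the current model's softmax outputs, shape
    (n_samples, n_classes); returns the indices of the selected samples."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    k = max(1, int(fraction * len(ent)))
    return np.argsort(ent)[-k:]          # most-uncertain samples

# Illustrative use: pick the uncertain half of a client's local data.
probs = np.array([[0.98, 0.01, 0.01],    # confident -> likely dropped
                  [0.40, 0.35, 0.25]])   # uncertain -> likely kept
print(entropy_select(probs, fraction=0.5))   # -> [1]
```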
arXiv Detail & Related papers (2024-12-30T22:47:32Z) - EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models [4.514681046629978]
We propose EDiT, an innovative Efficient Distributed Training method that combines a tailored Local SGD approach with model sharding techniques to enhance large-scale training efficiency.
We also introduce A-EDiT, a fully asynchronous variant of EDiT that accommodates heterogeneous clusters.
Experimental results demonstrate the superior performance of EDiT/A-EDiT, establishing them as robust solutions for distributed LLM training.
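The summary names Local SGD as the core ingredient; as a baseline illustration, here is plain synchronous Local SGD, in which each worker takes several local steps before the models are averaged. EDiT's tailored variant, its model sharding, and the asynchronous A-EDiT are omitted:

```python
import numpy as np

def local_sgd(workers_data, w0, local_steps=8, rounds=10, lr=0.1):
    """Plain synchronous Local SGD on a shared least-squares objective:
    each worker runs `local_steps` SGD steps on its own shard, then all
    models are averaged."""
    w = w0.copy()
    for _ in range(rounds):
        locals_ = []
        for x, y in workers_data:           # one (x, y) shard per worker
            wk = w.copy()
            for _ in range(local_steps):
                grad = 2 * x.T @ (x @ wk - y) / len(y)
                wk -= lr * grad
            locals_.append(wk)
        w = np.mean(locals_, axis=0)        # periodic model averaging
    return w
```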
arXiv Detail & Related papers (2024-12-10T06:08:24Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving, decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered through a gradient inversion technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
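To make the Deep Leakage threat concrete, the snippet below reproduces a classic analytic special case: for a single fully connected layer with bias, one sample's input can be read directly off the shared gradients. Attacks on deeper networks instead optimize dummy data to match the observed gradients; this closed form is only the simplest instance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Single fully connected layer with bias: z = W @ x + b.
# For any loss L, dL/dW = (dL/dz) x^T and dL/db = dL/dz, so the input of a
# single-sample update can be read off as x = dL/dW[i] / dL/db[i] for any
# row i with nonzero dL/db[i].
x = rng.normal(size=4)                 # the client's private input
dL_dz = rng.normal(size=3)             # upstream gradient (any loss)
grad_W = np.outer(dL_dz, x)            # what the client would upload
grad_b = dL_dz

i = int(np.argmax(np.abs(grad_b)))     # pick a row with nonzero bias grad
x_recovered = grad_W[i] / grad_b[i]
assert np.allclose(x_recovered, x)     # exact recovery of the private input
```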
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
The proposed FLEKD (Federated Learning via Ensemble Knowledge Distillation) enables a more flexible aggregation method than conventional model-fusion techniques.
Experimental results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
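The summary does not spell out FLEKD's aggregation rule; a common ensemble-distillation pattern, shown here as an assumption rather than the authors' code, is to average the clients' predictions on unlabeled proxy data and train the global model toward those soft targets (the linear softmax model and proxy set are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distill_aggregate(client_ws, W_global, proxy_x, lr=0.5, steps=200):
    """Ensemble-distillation aggregation sketch: average the clients'
    predictive distributions on proxy data, then fit the global
    (linear softmax) model to those soft targets by gradient descent."""
    targets = np.mean([softmax(proxy_x @ W) for W in client_ws], axis=0)
    W = W_global.copy()
    for _ in range(steps):
        p = softmax(proxy_x @ W)
        # Gradient of cross-entropy to soft targets w.r.t. W is x^T (p - t).
        W -= lr * proxy_x.T @ (p - targets) / len(proxy_x)
    return W
```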
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Edge-assisted U-Shaped Split Federated Learning with Privacy-preserving for Internet of Things [4.68267059122563]
We present an innovative Edge-assisted U-Shaped Split Federated Learning (EUSFL) framework, which harnesses the high-performance capabilities of edge servers.
In this framework, we leverage Federated Learning (FL) to enable data holders to collaboratively train models without sharing their data.
We also propose a novel noise mechanism called LabelDP to ensure that data features and labels can securely resist reconstruction attacks.
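The paper's LabelDP mechanism is not described in the summary; the standard baseline for label-level differential privacy is randomized response over the label set, sketched below as a generic stand-in rather than the paper's exact mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(labels, n_classes, epsilon):
    """Classic randomized response: keep the true label with probability
    e^eps / (e^eps + K - 1), otherwise draw one of the K-1 other labels
    uniformly. Satisfies epsilon-label-DP."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + n_classes - 1)
    out = labels.copy()
    flip = rng.random(len(labels)) >= p_keep
    # Draw a uniformly random *different* label where we flip.
    shift = rng.integers(1, n_classes, size=flip.sum())
    out[flip] = (labels[flip] + shift) % n_classes
    return out
```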
arXiv Detail & Related papers (2023-11-08T05:14:41Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
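The summary only says that robustness is transferred through carefully designed batch-normalization statistics; one simple reading, sketched below purely as an assumption, is to overwrite the non-AT clients' BN running statistics with those averaged from adversarially trained clients:

```python
import numpy as np

def propagate_bn_stats(at_clients, std_clients):
    """Sketch of the idea only: overwrite each non-AT client's BatchNorm
    running statistics with the average statistics of the adversarially
    trained (AT) clients. Models are dicts of layer name -> array here;
    the paper's actual propagation rule may differ."""
    bn_keys = [k for k in at_clients[0]
               if k.endswith(("running_mean", "running_var"))]
    avg = {k: np.mean([c[k] for c in at_clients], axis=0) for k in bn_keys}
    for client in std_clients:
        client.update(avg)   # non-BN parameters are left untouched
    return std_clients
```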
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - FAT: Federated Adversarial Training [5.287156503763459]
Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).
We take the first known steps towards federated adversarial training (FAT), combining both methods to reduce the threat of evasion at inference time while preserving data privacy during training.
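The summary gives no training recipe; the snippet below sketches the generic combination, with an FGSM-style local adversarial step inside a FedAvg-style round, using a logistic-regression client as a stand-in (all names and hyperparameters are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def client_adv_step(w, x, y, eps=0.1, lr=0.1):
    """One FGSM-style adversarial training step for a logistic-regression
    client (a stand-in for the local update inside a FedAvg round)."""
    p = sigmoid(x @ w)
    gx = np.outer(p - y, w)            # dL/dx per sample
    x_adv = x + eps * np.sign(gx)      # FGSM perturbation of the inputs
    # Train on the adversarial examples.
    p_adv = sigmoid(x_adv @ w)
    grad_w = x_adv.T @ (p_adv - y) / len(y)
    return w - lr * grad_w

def fat_round(w, clients, **kw):
    """FedAvg-style round: each client takes an adversarial step,
    then the server averages the resulting models."""
    return np.mean([client_adv_step(w, x, y, **kw) for x, y in clients], axis=0)
```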
arXiv Detail & Related papers (2020-12-03T09:47:47Z)