Toward Smart Security Enhancement of Federated Learning Networks
- URL: http://arxiv.org/abs/2008.08330v1
- Date: Wed, 19 Aug 2020 08:46:39 GMT
- Title: Toward Smart Security Enhancement of Federated Learning Networks
- Authors: Junjie Tan, Ying-Chang Liang, Nguyen Cong Luong, Dusit Niyato
- Abstract summary: In this paper, we review the vulnerabilities of federated learning networks (FLNs) and give an overview of poisoning attacks.
We present a smart security enhancement framework for FLNs.
Deep reinforcement learning is applied to learn the behavior patterns of the edge devices (EDs) that can provide benign training results.
- Score: 109.20054130698797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As traditional centralized learning networks (CLNs) are facing increasing
challenges in terms of privacy preservation, communication overheads, and
scalability, federated learning networks (FLNs) have been proposed as a
promising alternative paradigm to support the training of machine learning (ML)
models. In contrast to the centralized data storage and processing in CLNs,
FLNs exploit a number of edge devices (EDs) to store data and perform training
distributively. In this way, the EDs in FLNs can keep training data locally,
which preserves privacy and reduces communication overheads. However, since the
model training within FLNs relies on the contribution of all EDs, the training
process can be disrupted if some of the EDs upload incorrect or falsified
training results, i.e., poisoning attacks. In this paper, we review the
vulnerabilities of FLNs, and particularly give an overview of poisoning attacks
and mainstream countermeasures. Nevertheless, the existing countermeasures can
only provide passive protection and fail to consider the training fees paid for
the contributions of the EDs, resulting in an unnecessarily high training cost.
Hence, we present a smart security enhancement framework for FLNs. In
particular, a verify-before-aggregate (VBA) procedure is developed to identify
and remove the non-benign training results from the EDs. Afterward, deep
reinforcement learning (DRL) is applied to learn the behavior patterns of the
EDs and to actively select the EDs that can provide benign training results and
charge low training fees. Simulation results reveal that the proposed framework
can protect FLNs effectively and efficiently.
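To make the two-stage idea concrete, below is a minimal Python sketch, not the paper's implementation: a verify-before-aggregate (VBA) style filter that drops updates failing a validation check, and an epsilon-greedy selector standing in for the DRL agent that learns which EDs tend to return benign results at low fees. All names, the validator, the threshold, and the reward shape are illustrative assumptions.

```python
import random


class EdgeDeviceSelector:
    """Epsilon-greedy stand-in for the DRL-based ED selection (illustrative only)."""

    def __init__(self, device_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {d: 0.0 for d in device_ids}   # running reward estimate per ED
        self.count = {d: 0 for d in device_ids}

    def select(self, k):
        # Explore with probability epsilon, otherwise pick the k best-valued EDs.
        ids = list(self.value)
        if random.random() < self.epsilon:
            return random.sample(ids, k)
        return sorted(ids, key=self.value.get, reverse=True)[:k]

    def update(self, device_id, benign, fee):
        # Reward benign, low-fee contributions; penalize updates rejected by the filter.
        reward = (1.0 - fee) if benign else -1.0
        self.count[device_id] += 1
        self.value[device_id] += (reward - self.value[device_id]) / self.count[device_id]


def verify_before_aggregate(updates, validate):
    """Keep only updates that pass the (hypothetical) verification test."""
    return {d: u for d, u in updates.items() if validate(u)}


if __name__ == "__main__":
    devices = [f"ed{i}" for i in range(10)]
    selector = EdgeDeviceSelector(devices)
    for _ in range(200):
        chosen = selector.select(k=3)
        # Toy "updates": ed0 behaves like a poisoning device (large deviations).
        updates = {d: random.gauss(0, 10 if d == "ed0" else 1) for d in chosen}
        fees = {d: random.uniform(0.1, 0.5) for d in chosen}
        benign = verify_before_aggregate(updates, validate=lambda u: abs(u) < 3)
        for d in chosen:
            selector.update(d, benign=(d in benign), fee=fees[d])
    print(sorted(selector.value.items(), key=lambda kv: -kv[1])[:3])
```

In the framework described above, the selector would be a DRL agent learning from the EDs' verification history and fees, and the validator would be the VBA procedure; the epsilon-greedy bandit here only illustrates the select-verify-reward loop.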
Related papers
- Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning [1.6818869309123574]
Federated learning (FL) enables drones to train machine learning models in a decentralized manner while preserving data privacy.
Federated unlearning (FU) mitigates these risks by eliminating adversarial data contributions.
This paper proposes sky of unlearning (SoUL), a federated unlearning framework that efficiently removes the influence of unlearned data while maintaining model performance.
arXiv Detail & Related papers (2025-04-02T13:07:30Z) - Zero-Knowledge Proof-Based Consensus for Blockchain-Secured Federated Learning [22.85593588340569]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models.
Most blockchain-secured FL systems rely on conventional consensus mechanisms.
We propose a novel Zero-Knowledge Proof of Training (ZKPoT) consensus mechanism.
arXiv Detail & Related papers (2025-03-17T15:13:10Z) - Stealing Training Data from Large Language Models in Decentralized Training through Activation Inversion Attack [53.823990570014494]
Decentralized training has become a resource-efficient framework to democratize the training of large language models (LLMs).
This paper identifies a novel and realistic attack surface: the privacy leakage from training data in decentralized training.
arXiv Detail & Related papers (2025-02-22T05:19:20Z) - EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models [4.514681046629978]
We propose EDiT, an innovative Efficient Distributed Training method that combines a tailored Local SGD approach with model sharding techniques to enhance large-scale training efficiency.
We also introduce A-EDiT, a fully asynchronous variant of EDiT that accommodates heterogeneous clusters.
Experimental results demonstrate the superior performance of EDiT/A-EDiT, establishing them as robust solutions for distributed LLM training.
arXiv Detail & Related papers (2024-12-10T06:08:24Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving, decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Towards Robust and Cost-Efficient Knowledge Unlearning for Large Language Models [25.91643745340183]
Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora.
This poses risk of privacy and copyright violations, highlighting the need for efficient machine unlearning methods.
We propose two novel techniques for robust and efficient unlearning for LLMs.
arXiv Detail & Related papers (2024-08-13T04:18:32Z) - Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient BRFL framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
arXiv Detail & Related papers (2024-08-12T14:48:25Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components.
A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Edge-assisted U-Shaped Split Federated Learning with Privacy-preserving for Internet of Things [4.68267059122563]
We present an innovative Edge-assisted U-Shaped Split Federated Learning (EUSFL) framework, which harnesses the high-performance capabilities of edge servers.
In this framework, we leverage Federated Learning (FL) to enable data holders to collaboratively train models without sharing their data.
We also propose a novel noise mechanism called LabelDP to ensure that data features and labels can securely resist reconstruction attacks.
arXiv Detail & Related papers (2023-11-08T05:14:41Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) has emerged as a popular distributed learning scheme that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - FAT: Federated Adversarial Training [5.287156503763459]
Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).
We take the first known steps towards federated adversarial training (FAT) combining both methods to reduce the threat of evasion during inference while preserving the data privacy during training.
arXiv Detail & Related papers (2020-12-03T09:47:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.