Toward Smart Security Enhancement of Federated Learning Networks
- URL: http://arxiv.org/abs/2008.08330v1
- Date: Wed, 19 Aug 2020 08:46:39 GMT
- Title: Toward Smart Security Enhancement of Federated Learning Networks
- Authors: Junjie Tan, Ying-Chang Liang, Nguyen Cong Luong, Dusit Niyato
- Abstract summary: In this paper, we review the vulnerabilities of federated learning networks (FLNs) and give an overview of poisoning attacks.
We present a smart security enhancement framework for FLNs.
Deep reinforcement learning is applied to learn the behaving patterns of the edge devices (EDs) that can provide benign training results.
- Score: 109.20054130698797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As traditional centralized learning networks (CLNs) are facing increasing
challenges in terms of privacy preservation, communication overheads, and
scalability, federated learning networks (FLNs) have been proposed as a
promising alternative paradigm to support the training of machine learning (ML)
models. In contrast to the centralized data storage and processing in CLNs,
FLNs exploit a number of edge devices (EDs) to store data and perform training
distributively. In this way, the EDs in FLNs can keep training data locally,
which preserves privacy and reduces communication overheads. However, since the
model training within FLNs relies on the contribution of all EDs, the training
process can be disrupted if some of the EDs upload incorrect or falsified
training results, i.e., poisoning attacks. In this paper, we review the
vulnerabilities of FLNs, and particularly give an overview of poisoning attacks
and mainstream countermeasures. Nevertheless, the existing countermeasures can
only provide passive protection and fail to consider the training fees paid for
the contributions of the EDs, resulting in an unnecessarily high training cost.
Hence, we present a smart security enhancement framework for FLNs. In
particular, a verify-before-aggregate (VBA) procedure is developed to identify
and remove the non-benign training results from the EDs. Afterward, deep
reinforcement learning (DRL) is applied to learn the behaving patterns of the
EDs and to actively select the EDs that can provide benign training results and
charge low training fees. Simulation results reveal that the proposed framework
can protect FLNs effectively and efficiently.
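To make the verify-before-aggregate (VBA) and DRL-based selection ideas concrete, the sketch below simulates a server that filters uploads with a placeholder VBA check and then learns fee-aware value estimates for each ED with a simple epsilon-greedy rule, a bandit-style stand-in for the paper's DRL agent. All class and function names (EdgeDevice, vba_check, QSelector), the reward shaping, and the learning rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the VBA + learned ED selection loop described in the abstract.
# Everything here is an assumption for illustration; the paper uses a full DRL agent.
import random

class EdgeDevice:
    def __init__(self, dev_id, benign_prob, fee):
        self.dev_id = dev_id
        self.benign_prob = benign_prob  # chance this ED uploads a benign result
        self.fee = fee                  # training fee charged per round

    def train(self):
        # Returns True if the uploaded result is benign, False if poisoned.
        return random.random() < self.benign_prob

def vba_check(result_is_benign):
    # Placeholder for verify-before-aggregate: in practice the server would
    # validate each uploaded update (e.g., on a small trusted dataset) before
    # allowing it into the aggregation step.
    return result_is_benign

class QSelector:
    """Epsilon-greedy value estimates over EDs (a bandit-style stand-in for DRL)."""
    def __init__(self, devices, epsilon=0.1, lr=0.2):
        self.devices = devices
        self.q = {d.dev_id: 0.0 for d in devices}
        self.epsilon = epsilon
        self.lr = lr

    def select(self, k):
        if random.random() < self.epsilon:
            return random.sample(self.devices, k)
        return sorted(self.devices, key=lambda d: self.q[d.dev_id], reverse=True)[:k]

    def update(self, device, reward):
        self.q[device.dev_id] += self.lr * (reward - self.q[device.dev_id])

def run_round(selector, k=3, fee_weight=0.5):
    accepted = []
    for dev in selector.select(k):
        benign = vba_check(dev.train())
        # Reward benign, low-fee contributors; penalize poisoned or costly ones.
        reward = (1.0 if benign else -1.0) - fee_weight * dev.fee
        selector.update(dev, reward)
        if benign:
            accepted.append(dev.dev_id)
    return accepted  # only verified results would be aggregated

if __name__ == "__main__":
    devices = [EdgeDevice(i, random.uniform(0.5, 1.0), random.uniform(0.1, 1.0))
               for i in range(10)]
    selector = QSelector(devices)
    for _ in range(50):
        run_round(selector)
    print({d: round(v, 2) for d, v in selector.q.items()})
```

In the paper, the selection policy is learned with deep reinforcement learning over richer state information; the epsilon-greedy estimate above only illustrates the fee-aware, benignness-aware selection loop that follows the VBA filtering step.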
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Towards Robust and Cost-Efficient Knowledge Unlearning for Large Language Models [25.91643745340183]
Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora.
This poses risk of privacy and copyright violations, highlighting the need for efficient machine unlearning methods.
We propose two novel techniques for robust and efficient unlearning for LLMs.
arXiv Detail & Related papers (2024-08-13T04:18:32Z) - Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient BRFL framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
arXiv Detail & Related papers (2024-08-12T14:48:25Z) - Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noises.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model of Intrusion Detection Systems (IDS).
FLEKD enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z) - Edge-assisted U-Shaped Split Federated Learning with Privacy-preserving
for Internet of Things [4.68267059122563]
We present an innovative Edge-assisted U-Shaped Split Federated Learning (EUSFL) framework, which harnesses the high-performance capabilities of edge servers.
In this framework, we leverage Federated Learning (FL) to enable data holders to collaboratively train models without sharing their data.
We also propose a novel noise mechanism called LabelDP to ensure that data features and labels can securely resist reconstruction attacks.
arXiv Detail & Related papers (2023-11-08T05:14:41Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in
Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - FAT: Federated Adversarial Training [5.287156503763459]
Federated learning (FL) is one of the most important paradigms addressing privacy and data governance issues in machine learning (ML).
We take the first known steps towards federated adversarial training (FAT) combining both methods to reduce the threat of evasion during inference while preserving the data privacy during training.
arXiv Detail & Related papers (2020-12-03T09:47:47Z)