Sybil-based Virtual Data Poisoning Attacks in Federated Learning
- URL: http://arxiv.org/abs/2505.09983v1
- Date: Thu, 15 May 2025 05:46:59 GMT
- Title: Sybil-based Virtual Data Poisoning Attacks in Federated Learning
- Authors: Changxun Zhu, Qilong Wu, Lingjuan Lyu, Shibei Xue
- Abstract summary: Federated learning is vulnerable to poisoning attacks by malicious adversaries. We propose a sybil-based virtual data poisoning attack, where a malicious client generates sybil nodes to amplify the poisoning model's impact.
- Score: 33.21543663872588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is vulnerable to poisoning attacks by malicious adversaries. Existing methods often involve high costs to achieve effective attacks. To address this challenge, we propose a sybil-based virtual data poisoning attack, where a malicious client generates sybil nodes to amplify the poisoning model's impact. To reduce neural-network computational complexity, we develop a virtual data generation method based on gradient matching. We also design three schemes for target model acquisition, applicable to online local, online global, and offline scenarios. In simulations, our method outperforms other attack algorithms because it can obtain a global target model even under non-independent and identically distributed (non-IID) data.
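The gradient-matching step lends itself to a compact illustration. Below is a minimal sketch in PyTorch that optimizes a batch of synthetic inputs until the training gradient they induce on the shared model aligns with a target update direction; the function name, hyperparameters, and the cosine objective are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def generate_virtual_data(model, target_direction, num_samples=32,
                          input_shape=(1, 28, 28), num_classes=10,
                          steps=200, lr=0.1):
    """Optimize synthetic inputs whose induced training gradient matches
    a target direction (hypothetical sketch of gradient matching)."""
    x = torch.randn(num_samples, *input_shape, requires_grad=True)
    y = torch.randint(0, num_classes, (num_samples,))
    opt = torch.optim.Adam([x], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Minimize the cosine distance between the induced gradient
        # and the attacker's target update direction.
        g = torch.cat([gi.flatten() for gi in grads])
        t = torch.cat([ti.flatten() for ti in target_direction])
        match_loss = 1 - F.cosine_similarity(g, t, dim=0)
        match_loss.backward()
        opt.step()
    return x.detach(), y
```

Copies of such virtual data can then be handed to the sybil nodes, so every fabricated client pushes the aggregated model in the same direction at low cost.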
Related papers
- Exploiting Edge Features for Transferable Adversarial Attacks in Distributed Machine Learning [54.26807397329468]
This work explores a previously overlooked vulnerability in distributed deep learning systems: an adversary who intercepts the intermediate features transmitted between the distributed components can still pose a serious threat. We propose an exploitation strategy specifically designed for distributed settings.
arXiv Detail & Related papers (2025-07-09T20:09:00Z) - Intelligent Attacks and Defense Methods in Federated Learning-enabled Energy-Efficient Wireless Networks [16.816730878868373]
Federated learning (FL) is a promising technique for learning-based functions in wireless networks. However, FL may increase the risk of exposure to malicious attacks, where an attack on one local model may spread to other models. It is therefore critical to evaluate the effect of malicious attacks and develop advanced defense techniques for FL-enabled wireless networks.
arXiv Detail & Related papers (2025-04-25T17:40:35Z) - Adversarial Training for Defense Against Label Poisoning Attacks [53.893792844055106]
Label poisoning attacks pose significant risks to machine learning models. We propose a novel adversarial training defense strategy based on support vector machines (SVMs) to counter these threats. Our approach accommodates various model architectures and employs a projected gradient descent algorithm with kernel SVMs for adversarial training.
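As a concrete illustration of the PGD component, the sketch below perturbs inputs against a linear SVM's hinge loss; the paper pairs PGD with kernel SVMs, so the linear decision function and all names here are simplifying assumptions.

```python
import numpy as np

def pgd_perturb(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient ascent on a linear SVM's hinge loss.
    x: (n, d) inputs, y: (n,) labels in {-1, +1}, w/b: SVM parameters."""
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (x_adv @ w + b)
        # Hinge-loss gradient w.r.t. the input is -y*w where the margin < 1.
        grad = np.where(margin < 1, -y, 0.0)[:, None] * w[None, :]
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
    return x_adv
```

Adversarial training then refits the classifier on a mixture of clean and perturbed samples each round.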
arXiv Detail & Related papers (2025-02-24T13:03:19Z) - Mitigating Malicious Attacks in Federated Learning via Confidence-aware Defense [3.685395311534351]
Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a global model without sharing their private local data.
FL systems are vulnerable to attacks mounted by malicious clients through data poisoning and model poisoning.
Existing defense methods typically focus on mitigating specific types of poisoning and are often ineffective against unseen attack types.
arXiv Detail & Related papers (2024-08-05T20:27:45Z) - Interactive Trimming against Evasive Online Data Manipulation Attacks: A Game-Theoretic Approach [10.822843258077997]
Malicious data poisoning attacks can disrupt machine learning processes and lead to severe consequences.
To mitigate these attacks, distance-based defenses, such as trimming, have been proposed.
We present an interactive game-theoretic model for defending against online data manipulation attacks using the trimming strategy.
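Trimming itself is simple to state. The sketch below shows a common non-interactive variant that drops the points farthest from the coordinate-wise median; the paper's contribution, the interactive game-theoretic choice of when and how to trim against an adaptive attacker, is not captured here.

```python
import numpy as np

def trim_outliers(X, keep_frac=0.9):
    """Distance-based trimming: keep the keep_frac fraction of points
    closest to the coordinate-wise median, dropping likely poison."""
    center = np.median(X, axis=0)
    dists = np.linalg.norm(X - center, axis=1)
    k = int(keep_frac * len(X))
    keep = np.argsort(dists)[:k]   # indices of the k closest points
    return X[keep], keep
```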
arXiv Detail & Related papers (2024-03-15T13:59:05Z) - Advancing DDoS Attack Detection: A Synergistic Approach Using Deep Residual Neural Networks and Synthetic Oversampling [2.988269372716689]
We introduce an enhanced approach for DDoS attack detection by leveraging the capabilities of Deep Residual Neural Networks (ResNets).
We balance the representation of benign and malicious data points, enabling the model to better discern intricate patterns indicative of an attack.
Experimental results on a real-world dataset demonstrate that our approach achieves an accuracy of 99.98%, significantly outperforming traditional methods.
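The rebalancing step can be reproduced with off-the-shelf tooling. The sketch below uses SMOTE from `imbalanced-learn` on toy flow features as one common synthetic-oversampling choice; the paper's exact oversampling procedure may differ.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Toy imbalanced traffic features: 900 benign (0) vs. 100 attack (1) flows.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (900, 8)), rng.normal(2.0, 1.0, (100, 8))])
y = np.array([0] * 900 + [1] * 100)

# Synthesize minority-class samples so the detector sees balanced classes.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_bal))  # -> [900 900]
```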
arXiv Detail & Related papers (2024-01-06T03:03:52Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
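A minimal sketch of the frequency-domain idea, assuming each client update is a list of NumPy arrays: DCT the flattened updates, compare clients on their low-frequency components, and average only the typical ones. FreqFed's actual pipeline, including its clustering of the transformed updates, is more involved than this.

```python
import numpy as np
from scipy.fft import dct

def freq_filter_aggregate(updates, keep=1000):
    """Filter client updates by their low-frequency DCT components and
    average the updates that look typical (simplified FreqFed-style sketch)."""
    flat = [np.concatenate([p.ravel() for p in u]) for u in updates]
    low = np.stack([dct(f, norm='ortho')[:keep] for f in flat])
    # Score each client by cosine similarity to the median spectrum.
    med = np.median(low, axis=0)
    sims = low @ med / (np.linalg.norm(low, axis=1) * np.linalg.norm(med) + 1e-12)
    accepted = sims >= np.median(sims)   # keep the more typical half
    return np.mean([f for f, ok in zip(flat, accepted) if ok], axis=0)
```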
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Indiscriminate Data Poisoning Attacks on Neural Networks [28.09519873656809]
Data poisoning attacks aim to influence a model by injecting "poisoned" data into the training process.
We take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games.
We present efficient implementations that exploit modern auto-differentiation packages and allow simultaneous and coordinated generation of poisoned points.
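The autodiff trick can be sketched as a one-step-unrolled Stackelberg update in PyTorch: the attacker differentiates the clean-data loss through a simulated learner step to move the poison points. Everything here (the single unrolled step, the step sizes, the function name) is a simplifying assumption; the paper's algorithms unroll and coordinate the poisoned points far more carefully.

```python
import torch
import torch.nn.functional as F

def poison_step(model, x_poison, y_poison, x_clean, y_clean, inner_lr=0.1):
    """Leader (attacker) step: nudge poison inputs to maximize the clean
    loss after the follower (learner) takes one gradient step on them."""
    x_poison = x_poison.clone().requires_grad_(True)
    names = [n for n, _ in model.named_parameters()]
    params = list(model.parameters())
    inner_loss = F.cross_entropy(model(x_poison), y_poison)
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    # Simulate the learner's update with fast (functional) weights.
    fast = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}
    logits = torch.func.functional_call(model, fast, (x_clean,))
    outer_loss = F.cross_entropy(logits, y_clean)
    grad_x, = torch.autograd.grad(outer_loss, x_poison)
    return x_poison.detach() + 0.1 * grad_x.sign()  # ascend the clean loss
```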
arXiv Detail & Related papers (2022-04-19T18:57:26Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms by applying adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Learning to Learn Transferable Attack [77.67399621530052]
A transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a transfer-attack success rate 12.85% higher than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z) - Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of k-nearest neighbors, logistic regression, multi-layer perceptrons, and convolutional neural networks on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z)