Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach
- URL: http://arxiv.org/abs/2311.18498v1
- Date: Thu, 30 Nov 2023 12:19:10 GMT
- Title: Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach
- Authors: Kai Li, Jingjing Zheng, Xin Yuan, Wei Ni, Ozgur B. Akan, H. Vincent
Poor
- Abstract summary: This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
- Score: 65.2993866461477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel, data-agnostic model poisoning attack on
Federated Learning (FL) by designing a new adversarial graph autoencoder
(GAE)-based framework. The attack requires no knowledge of the FL training data and
achieves both effectiveness and undetectability. By listening to the benign
local models and the global model, the attacker extracts the graph structural
correlations among the benign local models and the training data features
substantiating the models. The attacker then adversarially regenerates the
graph structural correlations while maximizing the FL training loss, and
subsequently generates malicious local models using the adversarial graph
structure and the training data features of the benign ones. A new algorithm is
designed to iteratively train the malicious local models using GAE and
sub-gradient descent. The convergence of FL under the attack is rigorously
proven, albeit with a considerably large optimality gap. Experiments show that the FL accuracy
drops gradually under the proposed attack and existing defense mechanisms fail
to detect it. The attack can give rise to an infection across all benign
devices, making it a serious threat to FL.
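As a rough illustration of the attack pipeline described in the abstract, the following is a minimal sketch, assuming cosine similarity as the structural correlation measure, a one-layer GAE, and a simple surrogate for the FL training loss; none of these specific choices are taken from the paper.

```python
# Minimal sketch of the adversarial-GAE poisoning loop; the correlation measure,
# GAE architecture, and surrogate loss below are illustrative assumptions.
import torch
import torch.nn.functional as F

def correlation_graph(local_models: torch.Tensor) -> torch.Tensor:
    """Adjacency from pairwise cosine similarity of flattened benign local models."""
    normed = F.normalize(local_models, dim=1)
    return torch.relu(normed @ normed.t())

class GAE(torch.nn.Module):
    """One GCN-style layer encodes node features; an inner product decodes edges."""
    def __init__(self, in_dim: int, hid_dim: int = 32):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid_dim)

    def forward(self, adj: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        z = torch.tanh((adj / deg) @ self.lin(feats))   # normalized propagation
        return torch.sigmoid(z @ z.t())                 # reconstructed adjacency

benign = torch.randn(8, 100)        # 8 eavesdropped local models, flattened (toy data)
adj = correlation_graph(benign)
gae = GAE(in_dim=100)
opt = torch.optim.SGD(gae.parameters(), lr=0.05)

def surrogate_fl_loss(model_vec: torch.Tensor) -> torch.Tensor:
    # Stand-in for the FL training loss the attacker wants to push up: minimizing
    # its negative drives the malicious model away from the benign average.
    return -((model_vec - benign.mean(dim=0)) ** 2).mean()

for _ in range(50):
    recon = gae(adj, benign)                                   # regenerated graph structure
    malicious = recon @ benign / recon.sum(dim=1, keepdim=True)  # models from the graph
    loss = F.binary_cross_entropy(recon, adj) + surrogate_fl_loss(malicious[0])
    opt.zero_grad(); loss.backward(); opt.step()

print("crafted malicious local model:", malicious[0].shape)
```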
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
arXiv Detail & Related papers (2024-11-05T16:23:19Z) - Multi-Model based Federated Learning Against Model Poisoning Attack: A Deep Learning Based Model Selection for MEC Systems [11.564289367348334]
Federated Learning (FL) enables training of a global model from distributed data, while preserving data privacy.
This paper proposes a multi-model based FL as a proactive mechanism to enhance the opportunity of model poisoning attack mitigation.
For a DDoS attack detection scenario, results show that accuracy under the poisoning attack remains competitive with the attack-free scenario, along with a potential improvement in recognition time.
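As a rough illustration of the multi-model idea, the sketch below keeps several global models, aggregates each with FedAvg, and keeps whichever model scores best on a held-out criterion each round; the toy data, scoring, and selection rule are assumptions, not the cited paper's design.

```python
# Toy multi-model FL round: several global models, FedAvg per model, then select
# the best-scoring one. All quantities here are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_clients, dim = 3, 10, 20
global_models = [rng.normal(size=dim) for _ in range(n_models)]

def client_update(model, poisoned=False):
    """Benign clients step toward the (toy) optimum at zero; a poisoned one steps away."""
    step = 0.1 * model - 0.01 * rng.normal(size=dim)
    return model + step if poisoned else model - step

def validation_score(model):
    return -float(np.linalg.norm(model))     # closer to zero = better (toy criterion)

for rnd in range(5):
    for m in range(n_models):
        updates = [client_update(global_models[m], poisoned=(c == 0))
                   for c in range(n_clients)]
        global_models[m] = np.mean(updates, axis=0)          # FedAvg per model
    best = max(range(n_models), key=lambda m: validation_score(global_models[m]))
    print(f"round {rnd}: kept model {best}, score {validation_score(global_models[best]):.3f}")
```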
arXiv Detail & Related papers (2024-09-12T17:36:26Z) - Leverage Variational Graph Representation For Model Poisoning on Federated Learning [34.69357741350565]
The new model poisoning (MP) attack extends an adversarial variational graph autoencoder (VGAE) to create malicious local models.
Experiments demonstrate a gradual drop in FL accuracy under the proposed VGAE-MP attack and the ineffectiveness of existing defense mechanisms in detecting the attack.
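For contrast with the plain GAE sketch above, a variational (VGAE-style) encoder replaces the deterministic embedding with a sampled one and adds a KL term to the objective; the dimensions and single propagation step below are illustrative assumptions.

```python
# Minimal VGAE-style encoder/decoder; sizes and the one-step encoder are assumptions.
import torch

class VGAE(torch.nn.Module):
    """Variational graph autoencoder: sample latent node embeddings, decode edges."""
    def __init__(self, in_dim: int, hid_dim: int = 16):
        super().__init__()
        self.mu = torch.nn.Linear(in_dim, hid_dim)      # latent mean per node
        self.logvar = torch.nn.Linear(in_dim, hid_dim)  # latent log-variance per node

    def forward(self, adj: torch.Tensor, feats: torch.Tensor):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        h = (adj / deg) @ feats                                # one propagation step
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = torch.sigmoid(z @ z.t())                       # decoded adjacency
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return recon, kl

# The KL term would simply be added to the reconstruction/poisoning objective.
recon, kl = VGAE(in_dim=100)(torch.eye(8), torch.randn(8, 100))
```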
arXiv Detail & Related papers (2024-04-23T13:43:56Z) - Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
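A generic sketch of adversarial training inside one client's local update is shown below, using an FGSM-style perturbation; the perturbation budget, toy model, and data are assumptions, and the logits-calibration step of the cited paper is not reproduced.

```python
# Generic adversarial training (AT) inside one FL client's local epoch, using an
# FGSM-style perturbation. Epsilon, the model, and the data are toy assumptions.
import torch
import torch.nn.functional as F

def local_adversarial_epoch(model, loader, lr=0.01, eps=0.03):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        x_adv = (x + eps * grad.sign()).detach()     # FGSM adversarial example
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()  # train on the AE, not on x
        opt.step()
    return model

# Toy usage: a linear classifier on random data standing in for one client's shard.
model = torch.nn.Linear(10, 3)
loader = [(torch.randn(8, 10), torch.randint(0, 3, (8,))) for _ in range(4)]
local_adversarial_epoch(model, loader)
```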
arXiv Detail & Related papers (2024-03-05T09:18:29Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
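A minimal sketch of the general frequency-domain idea follows: compare updates by their low-frequency DCT coefficients and drop outliers before averaging. The transform, cutoff, and outlier rule are stand-in assumptions rather than FreqFed's exact procedure.

```python
# Compare model updates by their dominant low-frequency components and drop
# outliers before FedAvg. All thresholds here are illustrative assumptions.
import numpy as np
from scipy.fft import dct

def filter_and_aggregate(updates: np.ndarray, n_low: int = 8) -> np.ndarray:
    """updates: (n_clients, dim) flattened model updates."""
    low_freq = dct(updates, axis=1, norm="ortho")[:, :n_low]   # low-frequency part
    center = np.median(low_freq, axis=0)
    dists = np.linalg.norm(low_freq - center, axis=1)
    keep = dists <= np.median(dists) * 2.0                     # crude outlier cut
    return updates[keep].mean(axis=0)                          # FedAvg on survivors

# Toy usage: nine benign updates plus one scaled-up malicious one.
rng = np.random.default_rng(1)
updates = rng.normal(size=(10, 64))
updates[0] *= 25.0                                             # "poisoned" client
print(filter_and_aggregate(updates).shape)
```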
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
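One ingredient of homophily-based defenses can be sketched as pruning edges whose endpoint features disagree, which tends to cut the connections an injected node depends on; the threshold below is illustrative, and the cited cooperative data/model augmentation is more involved than this single step.

```python
# Prune low-homophily edges: drop edges whose endpoint features are too dissimilar.
# The similarity threshold is an illustrative assumption.
import numpy as np

def prune_low_homophily_edges(feats, edges, threshold=0.2):
    def cos(u, v):
        return float(feats[u] @ feats[v] /
                     (np.linalg.norm(feats[u]) * np.linalg.norm(feats[v]) + 1e-12))
    return [(u, v) for u, v in edges if cos(u, v) >= threshold]

# Toy usage: node 3 is "injected" with near-random features.
feats = np.array([[1.0, 0.9], [0.9, 1.0], [1.0, 1.0], [-1.0, 0.1]])
edges = [(0, 1), (1, 2), (0, 3), (2, 3)]
print(prune_low_homophily_edges(feats, edges))
```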
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
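A toy sketch of the underlying graph-inversion idea is to optimize a relaxed adjacency matrix so that a released GNN reproduces outputs observed on known node features; the one-layer stand-in model and penalties below are assumptions, not GraphMI's actual optimization.

```python
# Recover a graph by fitting a continuous adjacency to observed model outputs.
# The stand-in "GNN" and the sparsity penalty are illustrative assumptions.
import torch

n, d, c = 6, 8, 3
feats = torch.randn(n, d)                         # assumed-known node features
released_gnn = torch.nn.Linear(d, c)              # stand-in for the victim model
feat_out = released_gnn(feats).detach()           # fixed per-node transform

true_adj = (torch.rand(n, n) < 0.3).float()
true_adj = ((true_adj + true_adj.t()) > 0).float()
target = true_adj @ feat_out                      # outputs the attacker observed

adj_logits = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([adj_logits], lr=0.1)
for _ in range(200):
    adj = torch.sigmoid(adj_logits)               # relaxed adjacency in (0, 1)
    loss = ((adj @ feat_out - target) ** 2).mean() + 1e-3 * adj.mean()  # fit + sparsity
    opt.zero_grad(); loss.backward(); opt.step()

recovered = (torch.sigmoid(adj_logits) > 0.5).float()
print("edge agreement:", (recovered == true_adj).float().mean().item())
```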
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat FL targeted attacks.
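Such defenses are often built on similarity-based re-weighting of client updates; the sketch below down-weights workers whose updates point away from the majority direction. The scoring and weighting rules are illustrative assumptions, not FL-Defender's exact procedure.

```python
# Down-weight client updates that disagree with the majority direction before
# aggregation. The scoring and clipping rules are illustrative assumptions.
import numpy as np

def reweighted_aggregate(updates: np.ndarray) -> np.ndarray:
    """updates: (n_clients, dim) flattened (e.g. last-layer) gradients."""
    direction = np.median(updates, axis=0)
    direction /= np.linalg.norm(direction) + 1e-12
    cos = updates @ direction / (np.linalg.norm(updates, axis=1) + 1e-12)
    weights = np.clip(cos, 0.0, None)                 # negative similarity -> zero weight
    weights /= weights.sum() + 1e-12
    return weights @ updates

rng = np.random.default_rng(2)
updates = rng.normal(loc=1.0, size=(10, 32))
updates[0] = -updates[0] * 10                          # targeted attacker
print(np.linalg.norm(reweighted_aggregate(updates) - updates[1:].mean(axis=0)))
```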
arXiv Detail & Related papers (2022-07-02T16:04:46Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
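The basic gradient-matching idea behind such attacks can be sketched as optimizing a dummy input so that its gradients match the ones observed from a client; the tiny linear model and the known-label simplification below are toy assumptions.

```python
# Gradient inversion by gradient matching: fit a dummy input whose gradients match
# the observed ones. The linear model and known label are toy simplifications.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 4)
params = tuple(model.parameters())
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])                        # label assumed known to the attacker

# Gradients the server observes from the victim client.
true_grads = torch.autograd.grad(F.cross_entropy(model(x_true), y_true), params)

x_dummy = torch.randn(1, 16, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.05)

for _ in range(300):
    dummy_grads = torch.autograd.grad(
        F.cross_entropy(model(x_dummy), y_true), params, create_graph=True)
    match = sum(((g - t) ** 2).sum() for g, t in zip(dummy_grads, true_grads))
    opt.zero_grad(); match.backward(); opt.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```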
arXiv Detail & Related papers (2022-02-14T18:33:12Z)