Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach
- URL: http://arxiv.org/abs/2311.18498v1
- Date: Thu, 30 Nov 2023 12:19:10 GMT
- Title: Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach
- Authors: Kai Li, Jingjing Zheng, Xin Yuan, Wei Ni, Ozgur B. Akan, H. Vincent
Poor
- Abstract summary: This paper proposes a data-agnostic, model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
- Score: 65.2993866461477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel, data-agnostic, model poisoning attack on
Federated Learning (FL), by designing a new adversarial graph autoencoder
(GAE)-based framework. The attack requires no knowledge of FL training data and
achieves both effectiveness and undetectability. By listening to the benign
local models and the global model, the attacker extracts the graph structural
correlations among the benign local models and the training data features
substantiating the models. The attacker then adversarially regenerates the
graph structural correlations while maximizing the FL training loss, and
subsequently generates malicious local models using the adversarial graph
structure and the training data features of the benign ones. A new algorithm is
designed to iteratively train the malicious local models using GAE and
sub-gradient descent. The convergence of FL under attack is rigorously proved,
with a considerably large optimality gap. Experiments show that the FL accuracy
drops gradually under the proposed attack and existing defense mechanisms fail
to detect it. The attack can give rise to an infection across all benign
devices, making it a serious threat to FL.
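The attack loop described in the abstract can be sketched in a few lines of numpy. This is a hedged toy illustration only: the linear surrogate model, the cosine-similarity "graph", the sizes, and the stealth threshold are all assumptions, and the paper's actual GAE training is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_adjacency(models):
    """Graph structure among local models: pairwise cosine similarity."""
    unit = models / np.linalg.norm(models, axis=1, keepdims=True)
    return unit @ unit.T

def fl_loss(model, X, y):
    """Squared loss of a toy linear model, standing in for the FL training loss."""
    return np.mean((X @ model - y) ** 2)

# Toy round: 5 benign devices, 8-dimensional linear models (hypothetical sizes).
d, n_dev = 8, 5
X, y = rng.normal(size=(32, d)), rng.normal(size=32)
benign = rng.normal(size=(n_dev, d))

# The attacker listens to the benign models, records their graph-structural
# correlations, then iteratively regenerates malicious models that keep the
# graph close to the benign one (undetectability) while pushing the aggregate
# toward higher training loss (effectiveness) via sub-gradient ascent.
A_benign = cosine_adjacency(benign)
malicious = benign.copy()
step = 0.05
for _ in range(100):
    agg = np.mean(np.vstack([benign, malicious]), axis=0)
    # Sub-gradient of the loss w.r.t. each malicious model (each weighs
    # 1/(2*n_dev) in the mean aggregate).
    g = 2 * X.T @ (X @ agg - y) / len(y) / (2 * n_dev)
    malicious = malicious + step * g        # ascent: maximize the FL loss
    # Crude stealth constraint: pull back toward the benign graph structure.
    if np.abs(cosine_adjacency(malicious) - A_benign).max() > 0.2:
        malicious = 0.9 * malicious + 0.1 * benign

loss_clean = fl_loss(np.mean(benign, axis=0), X, y)
loss_attacked = fl_loss(np.mean(np.vstack([benign, malicious]), axis=0), X, y)
```

Because the ascent direction is built from sub-gradients of a convex surrogate, the attacked aggregate's loss can only move above the clean one, which mirrors the "gradual drop in accuracy" the experiments report.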
Related papers
- Leverage Variational Graph Representation For Model Poisoning on Federated Learning [34.69357741350565]
New MP attack extends an adversarial variational graph autoencoder (VGAE) to create malicious local models.
Experiments demonstrate a gradual drop in FL accuracy under the proposed VGAE-MP attack and the ineffectiveness of existing defense mechanisms in detecting the attack.
arXiv Detail & Related papers (2024-04-23T13:43:56Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
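The AT recipe, training each round on freshly crafted adversarial examples, can be sketched with FGSM on a toy logistic-regression client; the model, sizes, and hyperparameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def logistic_loss(w, X, y):
    """Mean logistic loss of a linear classifier."""
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def fgsm(X, y, w, eps=0.1):
    """Craft adversarial examples with one signed-gradient step on the inputs."""
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))      # sigmoid(-y * margin)
    grad_X = -(y * s)[:, None] * w[None, :]    # d loss / d X, per example
    return X + eps * np.sign(grad_X)

# Toy local dataset (hypothetical sizes); labels from a random linear rule.
n, d = 64, 10
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

# Adversarial training: each step fits the model on freshly crafted AEs.
w, lr = np.zeros(d), 0.5
for _ in range(200):
    X_adv = fgsm(X, y, w)
    s = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
    w += lr * np.mean((y * s)[:, None] * X_adv, axis=0)  # descent step

robust_loss = logistic_loss(w, fgsm(X, y, w), y)
```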
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
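As a loose sketch of the frequency-domain idea, assuming a DFT signature and a crude median-distance outlier rule in place of FreqFed's actual transform and clustering:

```python
import numpy as np

rng = np.random.default_rng(1)

def low_freq_signature(update, k=4):
    """DFT of a flattened model update, keeping the k lowest-frequency magnitudes."""
    return np.abs(np.fft.rfft(update))[:k]

# Hypothetical round: 9 benign updates near a common direction, 1 large outlier.
d = 64
base = rng.normal(size=d)
updates = [base + 0.1 * rng.normal(size=d) for _ in range(9)]
updates.append(10.0 * rng.normal(size=d))      # poisoned update

# Filter in the frequency domain: signatures far from the median are dropped
# before averaging, so the poison never reaches the aggregated model.
sigs = np.array([low_freq_signature(u) for u in updates])
dist = np.linalg.norm(sigs - np.median(sigs, axis=0), axis=1)
keep = dist <= 2.0 * np.median(dist)           # crude outlier rule (an assumption)
aggregate = np.mean(np.array(updates)[keep], axis=0)
```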
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- DABS: Data-Agnostic Backdoor attack at the Server in Federated Learning [14.312593000209693]
Federated learning (FL) attempts to train a global model by aggregating local models from distributed devices under the coordination of a central server.
The existence of a large number of heterogeneous devices makes FL vulnerable to various attacks, especially the stealthy backdoor attack.
We propose a new attack model for FL, namely Data-Agnostic Backdoor attack at the Server (DABS), where the server directly modifies the global model to backdoor an FL system.
arXiv Detail & Related papers (2023-05-02T09:04:34Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender as a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
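The leakage concern in this summary has a well-known exact special case: for a single linear layer trained on one sample, the shared gradients reveal the private input by a single division. This is a textbook illustration with hypothetical sizes, not the paper's baseline attack.

```python
import numpy as np

rng = np.random.default_rng(4)

# One client trains a single linear layer, out = W @ x + b, on one sample x
# and shares its gradients (squared loss for simplicity).
d_in, d_out = 8, 4
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
x, target = rng.normal(size=d_in), rng.normal(size=d_out)

g_out = 2 * ((W @ x + b) - target)   # d(squared loss) / d out
g_W = np.outer(g_out, x)             # gradient of W the client would share
g_b = g_out                          # gradient of b

# Server-side inversion: each row of g_W equals g_b[i] * x, so one division
# recovers the private input exactly.
i = int(np.argmax(np.abs(g_b)))      # any row with nonzero bias gradient
x_rec = g_W[i] / g_b[i]
```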
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification [24.053704318868043]
In model poisoning attacks, the attacker reduces the model's performance on targeted sub-tasks by uploading "poisoned" updates.
We introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks.
arXiv Detail & Related papers (2021-12-12T16:34:52Z)
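The two ingredients named in the SparseFed summary, global top-k sparsification and device-level gradient clipping, can be sketched as follows (sizes and thresholds are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def clip_update(update, max_norm=1.0):
    """Device-level clipping: bound each client's update norm before aggregation."""
    n = np.linalg.norm(update)
    return update if n <= max_norm else update * (max_norm / n)

def topk_sparsify(vec, k):
    """Global top-k sparsification: keep only the k largest-magnitude coordinates."""
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

# Hypothetical round: 9 small benign updates and one oversized poisoned one.
d, k = 100, 10
updates = [0.01 * rng.normal(size=d) for _ in range(9)]
updates.append(5.0 * rng.normal(size=d))       # poisoned update

clipped = [clip_update(u) for u in updates]    # clipping caps the poison's norm
aggregate = topk_sparsify(np.mean(clipped, axis=0), k)
```

Clipping bounds how much any single poisoned update can move the aggregate, while top-k limits the attack surface to the coordinates the benign majority already agrees on.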
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.