Attacks on Node Attributes in Graph Neural Networks
- URL: http://arxiv.org/abs/2402.12426v2
- Date: Tue, 5 Mar 2024 16:31:53 GMT
- Title: Attacks on Node Attributes in Graph Neural Networks
- Authors: Ying Xu, Michael Lanier, Anindya Sarkar, Yevgeniy Vorobeychik
- Abstract summary: This research investigates the vulnerability of graph models through the application of feature-based adversarial attacks.
Our findings indicate that decision-time attacks using Projected Gradient Descent (PGD) are more potent than poisoning attacks that employ Mean Node Embeddings and Graph Contrastive Learning strategies.
- Score: 32.40598187698689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs are commonly used to model complex networks prevalent in modern social
media and literacy applications. Our research investigates the vulnerability of
these graphs through the application of feature-based adversarial attacks,
focusing on both decision-time attacks and poisoning attacks. In contrast to
state-of-the-art models like Net Attack and Meta Attack, which target both node
attributes and graph structure, our study specifically targets node attributes.
For our analysis, we utilized the text dataset Hellaswag and the graph datasets
Cora and CiteSeer, providing a diverse basis for evaluation. Our findings
indicate that decision-time attacks using Projected Gradient Descent (PGD) are
more potent than poisoning attacks that employ Mean Node Embeddings and Graph
Contrastive Learning strategies. This provides insights for graph data
security, pinpointing where graph-based models are most vulnerable and thereby
informing the development of stronger defense mechanisms against such attacks.
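The decision-time attack described above perturbs node attributes of an already-trained model with PGD. As an illustration only, the sketch below shows an L∞-bounded PGD attack on the node features of a small two-layer GCN; the TinyGCN model, the random toy graph, and the eps/alpha/steps values are placeholder assumptions for exposition, not the paper's experimental setup.

```python
# Hedged sketch: L_inf-bounded PGD on node features of a fixed (ideally pre-trained) GCN.
# The toy model, random graph, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    """Symmetrically normalize A + I, as in a standard GCN propagation step."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class TinyGCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_norm):
        h = torch.relu(a_norm @ self.lin1(x))
        return a_norm @ self.lin2(h)          # node-level class logits

def pgd_feature_attack(model, x, a_norm, labels, eps=0.1, alpha=0.02, steps=40):
    """Decision-time attack: maximize the loss by perturbing node features
    within an L_inf ball of radius eps around the clean features x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv, a_norm), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()          # ascent step
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)   # project onto the eps-ball
    return x_adv

if __name__ == "__main__":
    torch.manual_seed(0)
    n, d, c = 50, 16, 3                         # toy graph: 50 nodes, 16 features, 3 classes
    adj = (torch.rand(n, n) < 0.1).float()
    adj = ((adj + adj.t()) > 0).float()         # symmetric random adjacency
    a_norm = normalize_adj(adj)
    x, y = torch.randn(n, d), torch.randint(0, c, (n,))
    model = TinyGCN(d, 32, c)                   # assumed already trained in a real attack
    x_adv = pgd_feature_attack(model, x, a_norm, y)
    print("max feature change:", (x_adv - x).abs().max().item())
```

In a real evaluation the model would first be trained on clean data, and attack strength would be reported as the drop in node-classification accuracy under the perturbed features.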
Related papers
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification [38.339503144719984]
We present a novel and general framework to generate adversarial examples via manipulating graph structure and node features.
Specifically, we make use of Graph Class Mapping and its variant to produce node-level importance corresponding to the graph classification task.
Experiments towards attacking four state-of-the-art graph classification models on six real-world benchmarks verify the flexibility and effectiveness of our framework.
arXiv Detail & Related papers (2022-08-13T13:41:44Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present Graph Model Inversion attack (GraphMI), which aims to extract the private data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features (a generic sketch of this projection step appears after this list).
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Reinforcement Learning For Data Poisoning on Graph Neural Networks [0.5156484100374058]
Adversarial Machine Learning has emerged as a substantial subfield of Computer Science.
We will study the novel problem of Data Poisoning (training time) attack on Neural Networks for Graph Classification using Reinforcement Learning Agents.
arXiv Detail & Related papers (2021-02-12T22:34:53Z) - GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, GNNs are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z) - Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z) - Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z) - Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training time attacks on graph neural networks for node classification perturbing the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks (a minimal meta-gradient sketch appears after this list).
arXiv Detail & Related papers (2019-02-22T09:20:05Z)
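The GraphMI entry above describes a projected gradient module for handling the discreteness of graph edges. A generic way to illustrate that idea is to relax the adjacency matrix to continuous values in [0, 1], ascend the attacker's objective by gradient steps, and project back onto the box under an edge budget. The sketch below follows that pattern under stated assumptions: the project_adjacency helper, the budget, and the placeholder objective are illustrative, and GraphMI's graph auto-encoder module is not reproduced here.

```python
# Generic sketch of projected gradient ascent over a relaxed (continuous) adjacency
# matrix, in the spirit of the discrete-edge handling described for GraphMI above.
# The victim objective, edge budget, and thresholding rule are illustrative assumptions.
import torch

def project_adjacency(a_relaxed, budget):
    """Clip entries to [0, 1]; if total edge mass exceeds the budget, rescale it
    so the relaxed graph stays sparse."""
    a_proj = a_relaxed.clamp(0.0, 1.0)
    total = a_proj.sum()
    return a_proj * (budget / total) if total > budget else a_proj

def pgd_edge_inference(loss_fn, n_nodes, budget=100.0, lr=0.1, steps=200):
    """Gradient ascent on a relaxed adjacency with projection after every step.
    loss_fn(adj) is any differentiable attacker objective (a placeholder here)."""
    a = torch.full((n_nodes, n_nodes), 0.5, requires_grad=True)
    for _ in range(steps):
        grad = torch.autograd.grad(loss_fn(a), a)[0]
        with torch.no_grad():
            a += lr * grad                            # ascend the attacker objective
            a.copy_(project_adjacency(a, budget))     # back to the feasible region
    return (a.detach() > 0.5).float()                 # discretize into predicted edges

if __name__ == "__main__":
    torch.manual_seed(0)
    target = (torch.rand(20, 20) < 0.2).float()       # stand-in for the hidden training edges
    # Placeholder objective: agreement with the (normally unobserved) target graph.
    recovered = pgd_edge_inference(lambda a: -(a - target).pow(2).sum(), 20)
    print("recovered edges:", int(recovered.sum().item()))
```

A real attack would replace the placeholder objective with a loss built from the target GNN's parameters and the observable node attributes, which is where GraphMI's auto-encoder module comes in.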
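The last related-paper entry above frames training-time attacks as a bilevel problem solved with meta-gradients. The minimal sketch below unrolls a few inner training steps of a linear victim classifier and differentiates the attacker's loss through that unrolled training; for simplicity it poisons continuous node features rather than the discrete graph structure targeted in that paper, and the toy data, model, and learning rates are assumptions.

```python
# Hedged sketch of the meta-gradient (bilevel) idea for a training-time attack:
# unroll a few inner training steps on poisoned data, then differentiate the
# attacker's loss through the unrolled training with respect to the poison.
# The linear victim model, toy data, and feature-space poison are assumptions;
# the paper itself perturbs the discrete graph structure.
import torch
import torch.nn.functional as F

def unrolled_training(x, y, w, inner_lr=0.1, inner_steps=10):
    """Plain gradient descent on the victim loss, unrolled with create_graph=True
    so gradients can later flow back into the (poisoned) training data x."""
    for _ in range(inner_steps):
        loss = F.cross_entropy(x @ w, y)
        (grad_w,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - inner_lr * grad_w
    return w

torch.manual_seed(0)
n, d, c = 64, 8, 3
x_clean = torch.randn(n, d)
y = torch.randint(0, c, (n,))

# The poisoning perturbation is the outer (attacker) variable of the bilevel problem.
delta = torch.zeros_like(x_clean, requires_grad=True)
outer_opt = torch.optim.Adam([delta], lr=0.05)

for _ in range(20):                                   # outer attacker iterations
    w0 = torch.zeros(d, c, requires_grad=True)        # fresh victim initialization
    w_trained = unrolled_training(x_clean + delta, y, w0)
    # The attacker wants the trained victim to perform badly on clean data,
    # so it maximizes the victim loss, i.e. minimizes its negative.
    attacker_loss = -F.cross_entropy(x_clean @ w_trained, y)
    outer_opt.zero_grad()
    attacker_loss.backward()                          # meta-gradient w.r.t. delta
    outer_opt.step()
    with torch.no_grad():
        delta.clamp_(-0.5, 0.5)                       # keep the poison bounded

print("poison norm:", delta.detach().norm().item())
```

Extending this to discrete structure typically means scoring candidate edge flips by their meta-gradients and applying them greedily, in the spirit of the approach described in that entry.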
This list is automatically generated from the titles and abstracts of the papers in this site.