Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks
- URL: http://arxiv.org/abs/2310.15656v1
- Date: Tue, 24 Oct 2023 09:10:45 GMT
- Title: Momentum Gradient-based Untargeted Attack on Hypergraph Neural Networks
- Authors: Yang Chen, Stjepan Picek, Zhonglin Ye, Zhaoyang Wang and Haixing Zhao
- Abstract summary: Hypergraph Neural Networks (HGNNs) have been successfully applied in various hypergraph-related tasks.
Recent works have shown that deep learning models are vulnerable to adversarial attacks.
We design MGHGA, a new untargeted attack model for HGNNs that focuses on modifying node features.
- Score: 17.723282166737867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hypergraph Neural Networks (HGNNs) have been successfully applied in various
hypergraph-related tasks due to their excellent higher-order representation
capabilities. Recent works have shown that deep learning models are vulnerable
to adversarial attacks. Most studies on graph adversarial attacks have focused
on Graph Neural Networks (GNNs), and the study of adversarial attacks on HGNNs
remains largely unexplored. In this paper, we try to reduce this gap. We design
MGHGA, a new untargeted attack model for HGNNs that focuses on modifying node
features. We take the HGNN training process into account and use a surrogate
model to implement the attack before hypergraph modeling.
Specifically, MGHGA consists of two parts: feature selection and feature
modification. We use a momentum gradient mechanism to choose the attack node
features in the feature selection module. In the feature modification module,
we use two feature generation approaches (direct modification and sign
gradient) to enable MGHGA to be employed on discrete and continuous datasets.
We conduct extensive experiments on five benchmark datasets to validate the
attack performance of MGHGA in node and visual object classification tasks.
The results show that MGHGA improves attack performance by an average of 2%
compared to the baselines.
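Reading the abstract alone, the attack loop can be pictured as: compute the surrogate model's loss on the current node features, accumulate a momentum gradient, select the feature entries with the largest accumulated gradient, and update them either by flipping (one reading of "direct modification", for discrete data) or by a signed gradient step ("sign gradient", for continuous data). The PyTorch sketch below is only an illustration under those assumptions; the function name, hyperparameters, MI-FGSM-style momentum update, and the mapping of the two modification rules to discrete and continuous data are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def momentum_feature_attack(surrogate, X, y, budget, steps=20,
                            mu=0.9, eps=0.01, discrete=False):
    """Hedged sketch of a momentum-gradient untargeted feature attack.

    `surrogate(X)` is assumed to be a pre-trained, differentiable surrogate
    model that maps node features to class logits; `budget` caps how many
    feature entries may be modified. Illustrative only, not the exact MGHGA
    procedure.
    """
    X_adv = X.clone().detach()
    momentum = torch.zeros_like(X_adv)  # running momentum gradient

    for _ in range(steps):
        X_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(X_adv), y)   # untargeted: push loss up
        grad, = torch.autograd.grad(loss, X_adv)

        # Momentum accumulation: normalize the current gradient by its L1 norm,
        # then blend it with the gradient history (feature-selection signal).
        momentum = mu * momentum + grad / (grad.abs().sum() + 1e-12)

        with torch.no_grad():
            # Feature selection: keep only the `budget` entries with the
            # largest momentum magnitude.
            flat = momentum.abs().flatten()
            top = torch.topk(flat, k=budget).indices
            mask = torch.zeros_like(flat)
            mask[top] = 1.0
            mask = mask.view_as(X_adv)

            if discrete:
                # "Direct modification" reading: flip selected binary features
                # toward the direction that increases the loss.
                X_adv = torch.where((mask > 0) & (momentum > 0),
                                    torch.ones_like(X_adv), X_adv)
                X_adv = torch.where((mask > 0) & (momentum < 0),
                                    torch.zeros_like(X_adv), X_adv)
            else:
                # "Sign gradient" reading: FGSM-style step on selected entries.
                X_adv = X_adv + eps * mask * momentum.sign()

        X_adv = X_adv.detach()
    return X_adv
```

In this reading, the surrogate would be trained on the clean data and the perturbed features X_adv would be used to build the hypergraph afterwards, consistent with the abstract's note that the attack is carried out before hypergraph modeling.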
Related papers
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks [6.569169627119353]
HomoGMI and HeteGMI are gradient-descent-based optimization methods that aim to maximize the cross-entropy loss on the target GNN.
HeteGMI is the first attempt to perform model inversion attacks on HeteGNNs.
arXiv Detail & Related papers (2023-10-15T11:16:14Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
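The hard-label attack above can be written out as a constrained problem. The following is one possible formalization based only on the one-sentence summary; the symbols (A, A', X, f, y) and the exact constraint are assumptions, not the cited paper's notation.

```latex
% A, A' : adjacency matrices of the clean and perturbed graph
% f     : the target GNN's hard-label prediction for a graph
% y     : the clean graph's label
\begin{aligned}
\min_{A'} \quad & \lVert A' - A \rVert_{0}
  && \text{(number of perturbed edges)} \\
\text{s.t.} \quad & f(A', X) \neq y
  && \text{(the perturbed graph must still be misclassified)}
\end{aligned}
```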
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Adversarial Attack on Large Scale Graph [58.741365277995044]
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness.
Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance.
We argue that their main limitation is that they have to use the whole graph for attacks, so time and space complexity increase as the data scale grows.
We present a practical metric named Degree Assortativity Change (DAC) to measure the impacts of adversarial attacks on graph data.
arXiv Detail & Related papers (2020-09-08T02:17:55Z)
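Taking the metric's name at face value, DAC can be read as the shift in a graph's degree assortativity coefficient caused by an attack. A minimal networkx sketch under that assumption (the cited paper may define, sign, or normalize the quantity differently):

```python
import networkx as nx

def degree_assortativity_change(clean_graph: nx.Graph,
                                attacked_graph: nx.Graph) -> float:
    """Assumed reading of DAC: |r(G_attacked) - r(G_clean)|."""
    r_clean = nx.degree_assortativity_coefficient(clean_graph)
    r_attacked = nx.degree_assortativity_coefficient(attacked_graph)
    return abs(r_attacked - r_clean)
```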
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)