Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation
- URL: http://arxiv.org/abs/2312.17301v1
- Date: Thu, 28 Dec 2023 17:41:30 GMT
- Title: Explainability-Based Adversarial Attack on Graphs Through Edge Perturbation
- Authors: Dibaloke Chanda, Saba Heidari Gheshlaghi and Nasim Yahya Soltani
- Abstract summary: We investigate the impact of test-time adversarial attacks through edge perturbations, which involve both edge insertions and deletions.
A novel explainability-based method is proposed to identify important nodes in the graph and perform edge perturbation between these nodes.
Results suggest that introducing edges between nodes of different classes has a higher impact than removing edges among nodes within the same class.
- Score: 1.6385815610837167
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the success of graph neural networks (GNNs) in various domains, they
exhibit susceptibility to adversarial attacks. Understanding these
vulnerabilities is crucial for developing robust and secure applications. In
this paper, we investigate the impact of test-time adversarial attacks through
edge perturbations, which involve both edge insertions and deletions. A novel
explainability-based method is proposed to identify important nodes in the
graph and perform edge perturbation between these nodes. The proposed method is
tested for node classification with three different architectures and datasets.
The results suggest that introducing edges between nodes of different classes
has a higher impact than removing edges among nodes within the same class.
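For intuition, here is a minimal sketch of the general recipe the abstract describes: score node importance with some GNN explainer, then perturb edges among the top-scoring nodes by inserting inter-class edges and deleting intra-class ones. The importance scores, the budget k, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of explainability-guided edge perturbation at test time.
# Assumptions (illustrative, not the paper's code): node importance
# comes from any GNN explainer; the graph is an adjacency dict of sets.
import itertools

def perturb_edges(adj, labels, importance, k=10):
    """Among the k most important nodes, insert edges between nodes of
    different classes and delete edges among nodes of the same class."""
    top = sorted(importance, key=importance.get, reverse=True)[:k]
    for u, v in itertools.combinations(top, 2):
        if labels[u] != labels[v] and v not in adj[u]:
            adj[u].add(v); adj[v].add(u)          # inter-class insertion
        elif labels[u] == labels[v] and v in adj[u]:
            adj[u].discard(v); adj[v].discard(u)  # intra-class deletion
    return adj

# Toy usage: nodes 0,1 in class 0; nodes 2,3 in class 1.
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}
labels = {0: 0, 1: 0, 2: 1, 3: 1}
importance = {0: 0.9, 1: 0.2, 2: 0.8, 3: 0.7}
print(perturb_edges(adj, labels, importance, k=3))
```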
Related papers
- Robust Subgraph Learning by Monitoring Early Training Representations [5.524804393257921]
Graph neural networks (GNNs) have attracted significant attention for their outstanding performance in graph learning and node classification tasks.
Their vulnerability to adversarial attacks, particularly through susceptible nodes, poses a challenge in decision-making.
We introduce the novel technique SHERD (Subgraph Learning Hale through Early Training Representation Distances) to address both performance and adversarial robustness for graph inputs.
arXiv Detail & Related papers (2024-03-14T22:25:37Z)
- Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack [58.440711902319855]
Edge perturbation is a method to modify graph structures.
It can be categorized into two veins based on its effects on the performance of graph neural networks (GNNs): graph data augmentation and adversarial attack.
We propose a unified formulation and establish a clear boundary between the two categories of edge perturbation methods; one common way to write such a perturbation is sketched after this list.
arXiv Detail & Related papers (2024-03-10T15:50:04Z)
- HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z)
- Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge splitting technique to specify the use of each edge, so that each edge is used solely as either topology or supervision.
To emphasize the differences between pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect the differences.
arXiv Detail & Related papers (2022-12-25T23:19:56Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- An Adversarial Robustness Perspective on the Topology of Neural Networks [12.416690940269772]
We study the impact of neural network (NN) topology on adversarial robustness.
We find that graphs from clean inputs are more centralized around highway edges, whereas those from adversaries are more diffuse.
arXiv Detail & Related papers (2022-11-04T18:00:53Z)
- A Systematic Evaluation of Node Embedding Robustness [77.29026280120277]
We assess the empirical robustness of node embedding models to random and adversarial poisoning attacks.
We compare edge addition, deletion and rewiring strategies computed using network properties as well as node labels.
We find that node classification suffers higher performance degradation than network reconstruction.
arXiv Detail & Related papers (2022-09-16T17:20:23Z)
- What Does the Gradient Tell When Attacking the Graph Structure [44.44204591087092]
We present a theoretical demonstration revealing that attackers tend to increase inter-class edges due to the message-passing mechanism of GNNs.
By connecting dissimilar nodes, attackers can corrupt node features more effectively, making such attacks more advantageous.
We propose an innovative attack loss that balances attack effectiveness and imperceptibility, sacrificing some attack effectiveness to attain greater imperceptibility; a minimal greedy edge-flip sketch in this spirit appears after this list.
arXiv Detail & Related papers (2022-08-26T15:45:20Z)
- Unveiling Anomalous Edges and Nominal Connectivity of Attributed Networks [53.56901624204265]
The present work deals with uncovering anomalous edges in attributed graphs using two distinct formulations with complementary strengths.
The first relies on decomposing the graph data matrix into low-rank plus sparse components, which markedly improves performance; a standard form of this decomposition is sketched after this list.
The second broadens the scope of the first by performing robust recovery of the unperturbed graph, which enhances anomaly identification performance.
arXiv Detail & Related papers (2021-04-17T20:00:40Z)
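On the unified view of edge perturbation (Revisiting Edge Perturbation above): one common way to express both insertion and deletion as a single operation, offered here as a generic formulation rather than that paper's exact one, is an entrywise XOR with a budgeted binary mask.

```latex
% A' is the perturbed adjacency, P a symmetric binary mask, \Delta the
% edge budget; P_{ij}=1 flips edge (i,j): insertion if A_{ij}=0,
% deletion if A_{ij}=1.
\mathbf{A}' = \mathbf{A} \oplus \mathbf{P}, \qquad
\mathbf{P} \in \{0,1\}^{n \times n}, \qquad
\|\mathbf{P}\|_{0} \le 2\Delta
```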
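On gradient-guided structure attacks (What Does the Gradient Tell above): a minimal greedy sketch that scores each candidate edge flip by its effect on a surrogate loss. The one-layer linear propagation, the loss, and all names are assumptions for illustration, not that paper's method; finite differences stand in for the gradient with respect to the adjacency matrix.

```python
# Greedy single-edge-flip attack against a toy surrogate model.
# Assumptions (illustrative): one linear propagation step A @ X @ w,
# loss = negative sum of true-class scores; a real attack would use
# the gradient of the victim GNN's loss w.r.t. the adjacency matrix.
import numpy as np

def surrogate_loss(A, X, w, y):
    H = A @ X @ w                      # propagate features, then project
    return -float(H[np.arange(len(y)), y].sum())

def best_edge_flip(A, X, w, y):
    """Return the single edge flip (i, j) that increases the loss most."""
    base = surrogate_loss(A, X, w, y)
    best, gain = None, 0.0
    for i in range(A.shape[0]):
        for j in range(i + 1, A.shape[0]):
            B = A.copy()
            B[i, j] = B[j, i] = 1 - B[i, j]    # insert or delete (i, j)
            g = surrogate_loss(B, X, w, y) - base
            if g > gain:
                best, gain = (i, j), g
    return best, gain

# Toy usage on a random 5-node graph.
rng = np.random.default_rng(0)
A = np.triu((rng.random((5, 5)) < 0.3).astype(float), 1)
A = A + A.T
X = rng.random((5, 4))
w = rng.random((4, 3))
y = rng.integers(0, 3, size=5)
print(best_edge_flip(A, X, w, y))
```

In such greedy sweeps the winning flip often connects dissimilar nodes, which is consistent with that paper's observation that attacks favor inter-class edge insertions.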
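On the anomalous-edge decomposition (Unveiling Anomalous Edges above): the "low rank plus sparse" split the summary mentions is conventionally written as a robust-PCA-style program; the exact objective in that paper may differ.

```latex
% A: graph data matrix; L: nominal low-rank part; S: sparse part whose
% nonzeros flag anomalous edges; \lambda trades rank against sparsity.
\min_{\mathbf{L},\,\mathbf{S}} \;
  \|\mathbf{L}\|_{*} + \lambda \|\mathbf{S}\|_{1}
\quad \text{subject to} \quad
  \mathbf{A} = \mathbf{L} + \mathbf{S}
```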