Spatially Focused Attack against Spatiotemporal Graph Neural Networks
- URL: http://arxiv.org/abs/2109.04608v1
- Date: Fri, 10 Sep 2021 01:31:53 GMT
- Title: Spatially Focused Attack against Spatiotemporal Graph Neural Networks
- Authors: Fuqiang Liu, Luis Miranda-Moreno, Lijun Sun
- Abstract summary: Deep spatiotemporal graph neural networks (GNNs) have achieved great success in traffic forecasting applications.
If GNNs are vulnerable in real-world prediction applications, a hacker can easily manipulate the results and cause serious traffic congestion and even a city-scale breakdown.
- Score: 8.665638585791235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatiotemporal forecasting plays an essential role in various applications in
intelligent transportation systems (ITS), such as route planning, navigation,
and traffic control and management. Deep spatiotemporal graph neural networks
(GNNs), which capture both spatial and temporal patterns, have achieved great
success in traffic forecasting applications. Understanding how GNN-based
forecasting works, and how vulnerable and robust these models are, becomes
critical to real-world applications. For example, if spatiotemporal GNNs are
vulnerable in real-world traffic prediction applications, a hacker can easily
manipulate the results and cause serious traffic congestion and even a
city-scale breakdown. However, although recent studies have demonstrated that
deep neural networks (DNNs) are vulnerable to carefully designed perturbations
in multiple domains such as object classification and graph representation,
current adversarial works cannot be directly applied to spatiotemporal
forecasting due to the causal nature and spatiotemporal mechanisms in
forecasting models. To fill this gap, in this paper we design
Spatially Focused Attack (SFA) to break spatiotemporal GNNs by attacking a
single vertex. To achieve this, we first propose the inverse estimation to
address the causality issue; then, we apply genetic algorithms with a universal
attack method as the evaluation function to locate the weakest vertex; finally,
perturbations are generated by solving an inverse estimation-based optimization
problem. We conduct experiments on real-world traffic data and our results show
that perturbations in one vertex designed by SFA can be diffused into a large
part of the graph.
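The vertex-selection step described above (an evolutionary search whose fitness is the damage a fixed, universal perturbation causes when injected at a candidate vertex) can be sketched in miniature. Everything below is an illustrative toy, not the paper's actual method: the linear "forecaster", the adjacency weights, and the search hyper-parameters are invented stand-ins, and the mutation scheme is a heavily simplified take on a genetic algorithm.

```python
import random

# Toy surrogate for a spatiotemporal forecaster: each vertex's prediction is
# a weighted average of its neighbors' inputs (a crude stand-in for a GNN).
# Vertex 1 is the "hub" with the largest incoming-weight column.
ADJ = [
    [0.5, 0.5, 0.0, 0.0],
    [0.4, 0.3, 0.3, 0.0],
    [0.0, 0.5, 0.3, 0.2],
    [0.0, 0.2, 0.3, 0.5],
]

def forecast(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in ADJ]

def attack_impact(vertex, eps=1.0):
    """Fitness: total shift across all predictions when a fixed ('universal')
    perturbation of size eps is injected at a single vertex."""
    x = [1.0] * len(ADJ)
    clean = forecast(x)
    x[vertex] += eps
    poisoned = forecast(x)
    return sum(abs(p - c) for p, c in zip(poisoned, clean))

def evolutionary_search(n_vertices, pop_size=4, generations=8, seed=0):
    """Locate the weakest vertex: keep the fittest candidates each
    generation and mutate them to neighboring vertex indices."""
    rng = random.Random(seed)
    pop = [rng.randrange(n_vertices) for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(set(pop), key=attack_impact, reverse=True)[:2]
        pop = survivors + [(v + d) % n_vertices
                           for v in survivors for d in (-1, 1)]
    return max(pop, key=attack_impact)

# The weakest vertex is the one whose single-vertex perturbation moves the
# overall predictions the most.
weakest = evolutionary_search(len(ADJ))
```

The third step of the paper, solving the inverse estimation-based optimization to craft the actual perturbation values, is omitted here; the sketch only illustrates why a single well-chosen vertex can dominate the attack's reach.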
Related papers
- TG-PhyNN: An Enhanced Physically-Aware Graph Neural Network framework for forecasting Spatio-Temporal Data [3.268628956733623]
This work presents TG-PhyNN, a novel Temporal Graph Physics-Informed Neural Network framework.
TG-PhyNN leverages the power of GNNs for graph-based modeling while simultaneously incorporating physical constraints as a guiding principle during training.
Our findings demonstrate that TG-PhyNN significantly outperforms traditional forecasting models.
TG-PhyNN effectively exploits these physical constraints to offer more reliable and accurate forecasts in various domains where physical processes govern the dynamics of data.
arXiv Detail & Related papers (2024-08-29T09:41:17Z) - DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction [60.60223171143206]
Trajectory prediction is a crucial undertaking in understanding entity movement or human behavior from observed sequences.
Current methods often assume that the observed sequences are complete while ignoring the potential for missing values.
This paper presents a unified framework, the Graph-based Conditional Variational Recurrent Neural Network (GC-VRNN), which can perform trajectory imputation and prediction simultaneously.
arXiv Detail & Related papers (2023-03-28T14:27:27Z) - Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models [9.885060319609831]
Existing methods assume a reliable and unbiased forecasting environment, which is not always available in the wild.
We propose a practical adversarial attack framework that does not require simultaneously attacking all data sources.
We theoretically demonstrate the worst performance bound of adversarial traffic forecasting attacks.
arXiv Detail & Related papers (2022-10-05T02:25:10Z) - STGIN: A Spatial Temporal Graph-Informer Network for Long Sequence
Traffic Speed Forecasting [8.596556653895028]
This study proposes a new spatial-temporal neural network architecture to handle the long-term traffic parameters forecasting issue.
The attention mechanism potentially guarantees long-term prediction performance without significant information loss from distant inputs.
arXiv Detail & Related papers (2022-10-01T05:58:22Z) - Multi-head Temporal Attention-Augmented Bilinear Network for Financial
time series prediction [77.57991021445959]
We propose a neural layer based on the ideas of temporal attention and multi-head attention to extend the capability of the underlying neural network.
The effectiveness of our approach is validated using large-scale limit-order book market data.
arXiv Detail & Related papers (2022-01-14T14:02:19Z) - CAP: Co-Adversarial Perturbation on Weights and Features for Improving
Generalization of Graph Neural Networks [59.692017490560275]
Adversarial training has been widely demonstrated to improve model's robustness against adversarial attacks.
It remains unclear how the adversarial training could improve the generalization abilities of GNNs in the graph analytics problem.
We construct the co-adversarial perturbation (CAP) optimization problem in terms of weights and features, and design the alternating adversarial perturbation algorithm to flatten the weight and feature loss landscapes alternately.
arXiv Detail & Related papers (2021-10-28T02:28:13Z) - Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models [5.067859671505088]
Recent studies reveal the vulnerability of graph convolutional networks (GCNs) under adversarial attacks.
This paper proposes a new task -- diffusion attack, to study the robustness of GCN-based traffic prediction models.
The proposed algorithm demonstrates high efficiency in the adversarial attack tasks under various scenarios.
arXiv Detail & Related papers (2021-04-19T14:57:25Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information (including all generated content) and is not responsible for any consequences.