Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models
- URL: http://arxiv.org/abs/2104.09369v1
- Date: Mon, 19 Apr 2021 14:57:25 GMT
- Title: Adversarial Diffusion Attacks on Graph-based Traffic Prediction Models
- Authors: Lyuyi Zhu, Kairui Feng, Ziyuan Pu, Wei Ma
- Abstract summary: Recent studies reveal the vulnerability of graph convolutional networks (GCN) under adversarial attacks.
This paper proposes a new task -- diffusion attack, to study the robustness of GCN-based traffic prediction models.
The proposed algorithm demonstrates high efficiency in the adversarial attack tasks under various scenarios.
- Score: 5.067859671505088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time traffic prediction models play a pivotal role in smart mobility
systems and have been widely used in route guidance, emerging mobility
services, and advanced traffic management systems. With the availability of
massive traffic data, neural network-based deep learning methods, especially
the graph convolutional networks (GCN) have demonstrated outstanding
performance in mining spatio-temporal information and achieving high prediction
accuracy. Recent studies reveal the vulnerability of GCNs under adversarial
attacks, yet little work has examined the vulnerability of GCN-based traffic
prediction models. To fill this gap, this paper proposes a new task -- the
diffusion attack -- to study the robustness of GCN-based
traffic prediction models. The diffusion attack aims to select and attack a
small set of nodes to degrade the performance of the entire prediction model.
To conduct the diffusion attack, we propose a novel attack algorithm, which
consists of two major components: 1) approximating the gradient of the
black-box prediction model with Simultaneous Perturbation Stochastic
Approximation (SPSA); 2) adapting the knapsack greedy algorithm to select the
attack nodes. The proposed algorithm is examined with three GCN-based traffic
prediction models: ST-GCN, T-GCN, and A3T-GCN, on data from two cities. The proposed
algorithm demonstrates high efficiency in the adversarial attack tasks under
various scenarios, and it can still generate adversarial samples under drop
regularizations such as Dropout, DropNode, and DropEdge. The research outcomes
could help to improve the robustness of the GCN-based traffic prediction models
and better protect the smart mobility systems. Our code is available at
https://github.com/LYZ98/Adversarial-Diffusion-Attacks-on-Graph-based-Traffic-Prediction-Models
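The abstract names the attack's two components only at a high level. Below is a minimal, self-contained Python sketch of the underlying techniques: a two-sided SPSA estimate of a black-box loss gradient, and a gain-per-cost greedy heuristic for knapsack-style node selection. The loss function, per-node gains, and attack costs are hypothetical placeholders rather than the authors' interface; the actual implementation is in the repository linked above.

```python
import numpy as np

def spsa_gradient(loss_fn, x, c=1e-2, n_samples=8, rng=None):
    """Estimate the gradient of a black-box loss at x via SPSA.

    Each sample perturbs all coordinates at once with a random
    Rademacher (+/-1) vector, so only two loss queries are needed
    per sample regardless of the dimension of x.
    """
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        delta = rng.choice([-1.0, 1.0], size=x.shape)  # random +/-1 direction
        g = (loss_fn(x + c * delta) - loss_fn(x - c * delta)) / (2.0 * c)
        grad += g * delta  # 1/delta_i == delta_i for +/-1 entries
    return grad / n_samples

def greedy_knapsack_selection(gains, costs, budget):
    """Greedily pick attack nodes by gain-to-cost ratio under a budget.

    gains[i] is the (estimated) increase in prediction loss from
    attacking node i, and costs[i] its attack cost; both are assumed
    to come from some upstream estimate.
    """
    order = sorted(range(len(gains)),
                   key=lambda i: gains[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

if __name__ == "__main__":
    # Toy usage: a quadratic "loss" stands in for the prediction error.
    loss = lambda x: float(np.sum(x ** 2))
    print(spsa_gradient(loss, np.ones(5)))  # roughly 2 * x = [2, 2, 2, 2, 2]
    print(greedy_knapsack_selection([3.0, 1.0, 2.0], [1.0, 1.0, 2.0], budget=2.0))
```

SPSA keeps the query cost of each gradient estimate at two model evaluations regardless of input dimension, which is what makes a black-box attack on a network-wide prediction model tractable.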
Related papers
- Pattern-Matching Dynamic Memory Network for Dual-Mode Traffic Prediction [11.99118889081249]
We propose a Pattern-Matching Dynamic Memory Network (PM-DMNet) for traffic prediction.
PM-DMNet employs a novel dynamic memory network to capture traffic pattern features with only O(N) complexity.
The proposed model outperforms existing benchmarks.
arXiv Detail & Related papers (2024-08-12T15:12:30Z)
- STG4Traffic: A Survey and Benchmark of Spatial-Temporal Graph Neural Networks for Traffic Prediction [9.467593700532401]
This paper provides a systematic review of graph learning strategies and commonly used graph convolution algorithms.
We then conduct a comprehensive analysis of the strengths and weaknesses of recently proposed spatial-temporal graph network models.
We build STG4Traffic, a standardized and scalable benchmark implemented in the deep learning framework PyTorch, covering two types of traffic datasets.
arXiv Detail & Related papers (2023-07-02T06:56:52Z)
- Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models [9.885060319609831]
Existing methods assume a reliable and unbiased forecasting environment, which is not always available in the wild.
Instead of simultaneously attacking all data sources, we propose a practical adversarial attack framework.
We theoretically demonstrate the worst performance bound of adversarial traffic forecasting attacks.
arXiv Detail & Related papers (2022-10-05T02:25:10Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models [4.353029347463806]
We propose an adversarial attack framework by treating the prediction model as a black-box.
The adversary can query the prediction model as an oracle with any input and obtain the corresponding output.
To test the attack effectiveness, two state-of-the-art graph neural network-based models (GCGRNN and DCRNN) are examined.
arXiv Detail & Related papers (2021-10-17T03:45:35Z)
- Spatially Focused Attack against Spatiotemporal Graph Neural Networks [8.665638585791235]
Deep spatiotemporal graph neural networks (GNNs) have achieved great success in traffic forecasting applications.
If GNNs are vulnerable in real-world prediction applications, a hacker can easily manipulate the results and cause serious traffic congestion and even a city-scale breakdown.
arXiv Detail & Related papers (2021-09-10T01:31:53Z)
- Adversarial Refinement Network for Human Motion Prediction [61.50462663314644]
Two popular methods, recurrent neural networks and feed-forward deep networks, are able to predict rough motion trends.
We propose an Adversarial Refinement Network (ARNet) following a simple yet effective coarse-to-fine mechanism with novel adversarial error augmentation.
arXiv Detail & Related papers (2020-11-23T05:42:20Z)
- Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
Unlike current centralized learning methods, FedGRU updates a universal model through a secure parameter aggregation mechanism (a minimal aggregation sketch follows this list).
It is shown that FedGRU's prediction accuracy is 90.96% higher than that of the advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
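For the FedGRU entry above, here is a minimal sketch of federated-averaging-style parameter aggregation, the general idea behind updating a universal model from client parameters. The function name and size-based weighting scheme are assumptions for illustration, and the secure-aggregation layer described in the paper is omitted.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average clients' parameter tensors, weighted by local dataset size.

    A generic FedAvg-style step, not FedGRU's exact mechanism: the
    secure aggregation protocol is abstracted away here.
    """
    total = float(sum(client_sizes))
    averaged = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for layer, w in zip(averaged, weights):
            layer += (n / total) * w  # size-weighted contribution
    return averaged

# Toy usage: three clients, each holding two parameter tensors.
clients = [[np.full((2, 2), v), np.full(3, v)] for v in (1.0, 2.0, 3.0)]
print(federated_average(clients, client_sizes=[10, 20, 30]))
# each entry equals (1*10 + 2*20 + 3*30) / 60 = 2.33...
```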
This list is automatically generated from the titles and abstracts of the papers on this site.