Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic
Adversarial Training
- URL: http://arxiv.org/abs/2306.14126v1
- Date: Sun, 25 Jun 2023 04:53:29 GMT
- Title: Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic
Adversarial Training
- Authors: Fan Liu and Weijia Zhang and Hao Liu
- Abstract summary: Machine learning-based forecasting models are commonly used in Intelligent Transportation Systems (ITS) to predict traffic patterns.
Most of the existing models are susceptible to adversarial attacks, which can lead to inaccurate predictions and negative consequences such as congestion and delays.
We propose a framework for incorporating adversarial training into traffic forecasting tasks.
- Score: 13.998123723601651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning-based forecasting models are commonly used in Intelligent
Transportation Systems (ITS) to predict traffic patterns and provide city-wide
services. However, most of the existing models are susceptible to adversarial
attacks, which can lead to inaccurate predictions and negative consequences
such as congestion and delays. Therefore, improving the adversarial robustness
of these models is crucial for ITS. In this paper, we propose a novel framework
for incorporating adversarial training into spatiotemporal traffic forecasting
tasks. We demonstrate that traditional adversarial training methods designed
for static domains cannot be directly applied to traffic forecasting tasks, as
they fail to effectively defend against dynamic adversarial attacks. Then, we
propose a reinforcement learning-based method to learn the optimal node
selection strategy for adversarial examples, which simultaneously strengthens
the defense against dynamic attacks and reduces model overfitting.
Additionally, we introduce a self-knowledge distillation regularization module
to overcome the "forgetting issue" caused by continuously changing adversarial
nodes during training. We evaluate our approach on two real-world traffic
datasets and demonstrate its superiority over other baselines. Our method
effectively enhances the adversarial robustness of spatiotemporal traffic
forecasting models. The source code for our framework is available at
https://github.com/usail-hkust/RDAT.
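The core idea of the abstract can be illustrated with a deliberately simplified toy sketch. The snippet below is hypothetical and is not the RDAT implementation: a linear one-step forecaster over N sensor nodes is trained on FGSM-perturbed inputs, and the paper's reinforcement-learning node-selection policy is replaced by a simple top-k gradient-magnitude heuristic (an assumption made for brevity).

```python
import random

# Toy sketch of node-targeted adversarial training for traffic forecasting.
# A linear one-step forecaster y ~ w.x over N sensor nodes is trained on
# FGSM-perturbed inputs; the paper's RL node-selection policy is replaced
# by a top-k gradient-magnitude stub (an illustrative assumption).

random.seed(0)
N = 8                            # number of sensor nodes
w = [0.1] * N                    # one weight per node
LR, EPS, BUDGET = 0.05, 0.1, 3   # learning rate, attack radius, nodes attacked

def predict(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def input_grad(x, y):
    # d/dx_i of (w.x - y)^2 = 2 (w.x - y) w_i
    err = predict(x) - y
    return [2.0 * err * wi for wi in w]

def select_nodes(grad, k):
    # stand-in for the learned selection strategy: k largest |gradients|
    return set(sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k])

def fgsm_on_nodes(x, y, k):
    # perturb only the selected nodes, mimicking a dynamic node-level attack
    grad = input_grad(x, y)
    chosen = select_nodes(grad, k)
    return [xi + EPS * (1.0 if grad[i] >= 0 else -1.0) if i in chosen else xi
            for i, xi in enumerate(x)]

def train_step(x, y):
    global w
    x_adv = fgsm_on_nodes(x, y, BUDGET)  # train on the adversarial example
    err = predict(x_adv) - y
    w = [wi - LR * 2.0 * err * xi for wi, xi in zip(w, x_adv)]

# one toy training pass on synthetic sensor readings
data = [([random.random() for _ in range(N)], 1.0) for _ in range(50)]

def mse(samples):
    return sum((predict(x) - y) ** 2 for x, y in samples) / len(samples)

mse_before = mse(data)
for x, y in data:
    train_step(x, y)
mse_after = mse(data)
```

Even with the perturbations applied, the clean-data error drops over the pass, which is the point of adversarial training: the model fits the task while seeing attacked inputs.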
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has been conventionally believed to be a challenging property to encode into neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Spear and Shield: Adversarial Attacks and Defense Methods for Model-Based Link Prediction on Continuous-Time Dynamic Graphs [40.01361505644007]
We propose T-SPEAR, a simple and effective adversarial attack method for link prediction on continuous-time dynamic graphs.
We show that T-SPEAR significantly degrades the victim model's performance on link prediction tasks.
Our attacks are transferable to other TGNNs, which differ from the victim model assumed by the attacker.
arXiv Detail & Related papers (2023-08-21T15:09:51Z)
- Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models [9.885060319609831]
Existing methods assume a reliable and unbiased forecasting environment, which is not always available in the wild.
We propose a practical adversarial attack framework that avoids the need to simultaneously attack all data sources.
We theoretically demonstrate the worst performance bound of adversarial traffic forecasting attacks.
arXiv Detail & Related papers (2022-10-05T02:25:10Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- Self-Ensemble Adversarial Training for Improved Robustness [14.244311026737666]
Among all sorts of defense methods, adversarial training is the strongest strategy against various adversarial attacks.
Recent works mainly focus on developing new loss functions or regularizers, attempting to find the unique optimal point in the weight space.
We devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method for yielding a robust classifier by averaging weights of history models.
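The averaging idea can be made concrete with a minimal sketch (the decay value and the toy "training" updates below are illustrative assumptions, not the SEAT implementation): a robust set of weights is maintained as an exponential moving average of the weights produced over training history.

```python
# Minimal sketch of SEAT-style self-ensembling: keep an exponential moving
# average (EMA) of the model weights seen during training history.
# DECAY and the stand-in training updates are illustrative assumptions.

DECAY = 0.9

def ema_update(avg, current, decay=DECAY):
    # avg <- decay * avg + (1 - decay) * current, element-wise
    return [decay * a + (1.0 - decay) * c for a, c in zip(avg, current)]

weights = [0.0, 0.0]          # live training weights
avg_weights = list(weights)   # self-ensembled (robust) weights
for step in range(5):
    weights = [wi + 1.0 for wi in weights]   # stand-in for one training step
    avg_weights = ema_update(avg_weights, weights)
```

The averaged weights deliberately lag behind the live weights, smoothing out the oscillations in weight space that single-model adversarial training can exhibit.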
arXiv Detail & Related papers (2022-03-18T01:12:18Z)
- Learning to Learn Transferable Attack [77.67399621530052]
Transfer adversarial attack is a non-trivial black-box adversarial attack that aims to craft adversarial perturbations on the surrogate model and then apply such perturbations to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.