Targeted Adversarial Attacks against Neural Network Trajectory
Predictors
- URL: http://arxiv.org/abs/2212.04138v1
- Date: Thu, 8 Dec 2022 08:34:28 GMT
- Title: Targeted Adversarial Attacks against Neural Network Trajectory
Predictors
- Authors: Kaiyuan Tan, Jun Wang, Yiannis Kantaros
- Abstract summary: Trajectory prediction is an integral component of modern autonomous systems.
Deep neural network (DNN) models are often employed for trajectory forecasting tasks.
We propose a targeted adversarial attack against DNN models for trajectory forecasting tasks.
- Score: 14.834932672948698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory prediction is an integral component of modern autonomous systems
as it allows for envisioning future intentions of nearby moving agents. Due to
the lack of other agents' dynamics and control policies, deep neural network
(DNN) models are often employed for trajectory forecasting tasks. Although
there exists an extensive literature on improving the accuracy of these models,
there is a very limited number of works studying their robustness against
adversarially crafted input trajectories. To bridge this gap, in this paper, we
propose a targeted adversarial attack against DNN models for trajectory
forecasting tasks. We call the proposed attack TA4TP for Targeted adversarial
Attack for Trajectory Prediction. Our approach generates adversarial input
trajectories that are capable of fooling DNN models into predicting
user-specified target/desired trajectories. Our attack relies on solving a
nonlinear constrained optimization problem where the objective function
captures the deviation of the predicted trajectory from a target one while the
constraints model physical requirements that the adversarial input should
satisfy. The latter ensures that the inputs look natural and that they are safe
to execute (e.g., they are close to nominal inputs and away from obstacles). We
demonstrate the effectiveness of TA4TP on two state-of-the-art DNN models and
two datasets. To the best of our knowledge, we propose the first targeted
adversarial attack against DNN models used for trajectory forecasting.
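The optimization described above can be illustrated with projected gradient descent: minimize the deviation of the model's prediction from the target trajectory, then project the perturbed input back onto the feasible set after every step. The snippet below is a minimal sketch under assumed interfaces (the `model` signature, trajectory shapes, and obstacle representation are hypothetical), and the projection steps approximate the paper's nonlinear constrained program rather than reproducing the authors' exact solver.

```python
import torch

def targeted_trajectory_attack(model, x_nominal, y_target, obstacles,
                               eps=0.5, min_clearance=1.0, steps=200, lr=0.01):
    """Minimal sketch of a targeted trajectory-prediction attack.

    Assumed (hypothetical) interface: `model` maps an observed trajectory
    of shape (T_obs, 2) to a predicted trajectory of shape (T_pred, 2);
    `obstacles` is an iterable of (2,) obstacle centers. The paper's hard
    constraints are approximated here by projection after each step.
    """
    x_adv = x_nominal.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Objective: deviation of the predicted trajectory from the target one.
        loss = torch.mean((model(x_adv) - y_target) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Constraint 1 (naturalness): stay close to the nominal input.
            delta = torch.clamp(x_adv - x_nominal, -eps, eps)
            x_adv.copy_(x_nominal + delta)
            # Constraint 2 (safety): keep every waypoint away from obstacles
            # by pushing too-close points out to the clearance radius.
            for obs in obstacles:
                diff = x_adv - obs                      # (T_obs, 2)
                dist = diff.norm(dim=-1, keepdim=True)  # (T_obs, 1)
                too_close = dist < min_clearance
                pushed = obs + diff / dist.clamp_min(1e-8) * min_clearance
                x_adv.copy_(torch.where(too_close, pushed, x_adv))
    return x_adv.detach()
```

Projecting onto the epsilon-box and the obstacle clearance set is a common simplification; a faithful reproduction would solve the constrained program with a nonlinear solver as the paper describes.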
Related papers
- Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Spear and Shield: Adversarial Attacks and Defense Methods for
Model-Based Link Prediction on Continuous-Time Dynamic Graphs [40.01361505644007]
We propose T-SPEAR, a simple and effective adversarial attack method for link prediction on continuous-time dynamic graphs.
We show that T-SPEAR significantly degrades the victim model's performance on link prediction tasks.
Our attacks are transferable to other temporal graph neural networks (TGNNs), which differ from the victim model assumed by the attacker.
arXiv Detail & Related papers (2023-08-21T15:09:51Z) - Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic
Adversarial Training [13.998123723601651]
Machine learning-based forecasting models are commonly used in Intelligent Transportation Systems (ITS) to predict traffic patterns.
Most of the existing models are susceptible to adversarial attacks, which can lead to inaccurate predictions and negative consequences such as congestion and delays.
We propose a framework for incorporating adversarial training into traffic forecasting tasks.
arXiv Detail & Related papers (2023-06-25T04:53:29Z) - AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off the road or collide with other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective strategy for improving model robustness (a minimal sketch of the standard adversarial-training loop appears after this list).
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - Black-box Adversarial Attacks on Network-wide Multi-step Traffic State
Prediction Models [4.353029347463806]
We propose an adversarial attack framework by treating the prediction model as a black-box.
The adversary can query the prediction model as an oracle with any input and obtain the corresponding output (a generic sketch of such query-based attacks appears after this list).
To test the attack's effectiveness, two state-of-the-art graph neural network-based models (GCGRNN and DCRNN) are examined.
arXiv Detail & Related papers (2021-10-17T03:45:35Z) - TNT: Target-driveN Trajectory Prediction [76.21200047185494]
We develop a target-driven trajectory prediction framework for moving agents.
We benchmark it on trajectory prediction of vehicles and pedestrians.
We outperform state-of-the-art on Argoverse Forecasting, INTERACTION, Stanford Drone and an in-house Pedestrian-at-Intersection dataset.
arXiv Detail & Related papers (2020-08-19T06:52:46Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Luring of transferable adversarial perturbations in the black-box
paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model, and is designed to induce the luring effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)
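As referenced in the Latent Boundary-guided Adversarial Training entry above, a minimal sketch of the standard adversarial-training loop (the PGD-based recipe, not LADDER's latent boundary-guided variant) looks as follows; the model, data loader, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Craft adversarial examples with projected gradient descent (PGD)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()              # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)    # project to eps-ball
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """One epoch of standard adversarial training: each minibatch is
    replaced by its PGD-perturbed version before the weight update."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```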
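As referenced in the black-box traffic-state-prediction entry above, query-only attacks typically estimate gradients from oracle outputs alone. The sketch below uses NES-style finite differences; `query_fn` and all hyperparameters are hypothetical, and this is a generic illustration of the query-based setting, not the cited paper's exact method.

```python
import numpy as np

def zeroth_order_attack(query_fn, x, eps=0.1, sigma=0.01,
                        n_samples=50, steps=100, lr=0.02, rng=None):
    """Generic sketch of a query-only (black-box) attack, assuming the
    adversary can only call `query_fn(x) -> prediction` as an oracle."""
    rng = rng or np.random.default_rng(0)
    y_clean = query_fn(x)                    # reference (clean) prediction
    x_adv = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            u = rng.standard_normal(x.shape)
            # Attack loss: deviation of the perturbed prediction from the
            # clean one (untargeted degradation of forecast accuracy).
            f_plus = np.mean((query_fn(x_adv + sigma * u) - y_clean) ** 2)
            f_minus = np.mean((query_fn(x_adv - sigma * u) - y_clean) ** 2)
            grad += (f_plus - f_minus) / (2 * sigma) * u
        grad /= n_samples
        x_adv = x_adv + lr * np.sign(grad)          # ascend estimated gradient
        x_adv = np.clip(x_adv, x - eps, x + eps)    # stay in the eps-ball
    return x_adv
```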
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.