AdvDO: Realistic Adversarial Attacks for Trajectory Prediction
- URL: http://arxiv.org/abs/2209.08744v1
- Date: Mon, 19 Sep 2022 03:34:59 GMT
- Title: AdvDO: Realistic Adversarial Attacks for Trajectory Prediction
- Authors: Yulong Cao, Chaowei Xiao, Anima Anandkumar, Danfei Xu, Marco Pavone
- Abstract summary: Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off the road or collide with other vehicles in simulation.
- Score: 87.96767885419423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory prediction is essential for autonomous vehicles (AVs) to plan
correct and safe driving behaviors. While many prior works aim to achieve
higher prediction accuracy, few study the adversarial robustness of their
methods. To bridge this gap, we propose to study the adversarial robustness of
data-driven trajectory prediction systems. We devise an optimization-based
adversarial attack framework that leverages a carefully-designed differentiable
dynamic model to generate realistic adversarial trajectories. Empirically, we
benchmark the adversarial robustness of state-of-the-art prediction models and
show that our attack increases prediction error by more than 50% on general
metrics and by more than 37% on planning-aware metrics. We also show that our
attack can lead an AV to drive off the road or collide with other vehicles in
simulation. Finally, we demonstrate how to mitigate the adversarial attacks
using an adversarial training scheme.
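The abstract describes the attack mechanism only at a high level. Below is a minimal sketch of what an optimization-based attack through a differentiable dynamics model could look like; every name here (KinematicBicycle, predict_fn, the loss, bounds, and hyperparameters) is an illustrative assumption, not the authors' actual AdvDO implementation.

```python
import torch

class KinematicBicycle:
    """Differentiable kinematic-bicycle rollout: control inputs -> positions."""
    def __init__(self, dt=0.1, wheelbase=2.7):
        self.dt, self.L = dt, wheelbase

    def rollout(self, state0, controls):
        # state0: tensor (x, y, heading, speed); controls: (T, 2) = (accel, steering)
        x, y, th, v = state0
        positions = []
        for a, delta in controls:
            x = x + v * torch.cos(th) * self.dt
            y = y + v * torch.sin(th) * self.dt
            th = th + v / self.L * torch.tan(delta) * self.dt
            v = v + a * self.dt
            positions.append(torch.stack([x, y]))
        return torch.stack(positions)  # (T, 2), dynamically feasible by construction

def attack_history(predict_fn, state0, nominal_controls, future_gt,
                   steps=100, lr=1e-2, bound=0.5):
    """PGD-style optimization of the adversary's past control inputs so that the
    victim model's prediction error on the ground-truth future is maximized."""
    dyn = KinematicBicycle()
    delta_u = torch.zeros_like(nominal_controls, requires_grad=True)
    opt = torch.optim.Adam([delta_u], lr=lr)
    for _ in range(steps):
        history = dyn.rollout(state0, nominal_controls + delta_u)  # perturbed past trajectory
        pred = predict_fn(history)                                 # victim model's forecast
        loss = -(pred - future_gt).norm(dim=-1).mean()             # maximize displacement error
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                                      # keep controls near nominal
            delta_u.clamp_(-bound, bound)
    return dyn.rollout(state0, nominal_controls + delta_u).detach()
```

Under this sketch, the adversarial training scheme mentioned at the end of the abstract would amount to regenerating such adversarial histories during training and including them in the prediction loss; the authors' exact mitigation may differ.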
Related papers
- Annealed Winner-Takes-All for Motion Forecasting [48.200282332176094]
We show how an aWTA loss can be integrated with state-of-the-art motion forecasting models to enhance their performance.
Our approach can be easily incorporated into any trajectory prediction model normally trained using WTA.
arXiv Detail & Related papers (2024-09-17T13:26:17Z)
- A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving [23.08193005790747]
Existing attacks compromise the prediction model of a victim AV.
We propose a novel two-stage attack framework to realize the single-point attack.
Our attack causes a collision rate of up to 63% and elicits various hazardous responses from the victim AV.
arXiv Detail & Related papers (2024-06-17T16:26:00Z)
- Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z)
- Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving [18.72382517467458]
We propose a novel adversarial backdoor attack against trajectory prediction models.
Our attack affects the victim at training time via naturalistic, hence stealthy, poisoned samples crafted using a novel two-step approach.
We show that the proposed attack is highly effective, as it can significantly hinder the performance of prediction models.
arXiv Detail & Related papers (2023-06-27T19:15:06Z)
- Robust Spatiotemporal Traffic Forecasting with Reinforced Dynamic Adversarial Training [13.998123723601651]
Machine learning-based forecasting models are commonly used in Intelligent Transportation Systems (ITS) to predict traffic patterns.
Most of the existing models are susceptible to adversarial attacks, which can lead to inaccurate predictions and negative consequences such as congestion and delays.
We propose a framework for incorporating adversarial training into traffic forecasting tasks.
arXiv Detail & Related papers (2023-06-25T04:53:29Z)
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- Semi-supervised Semantics-guided Adversarial Training for Trajectory Prediction [15.707419899141698]
Adversarial attacks on trajectory prediction may mislead the prediction of future trajectories and induce unsafe planning.
We present a novel adversarial training method for trajectory prediction.
Our method can effectively mitigate the impact of adversarial attacks by up to 73% and outperform other popular defense methods.
arXiv Detail & Related papers (2022-05-27T20:50:36Z)
- Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
We present control-aware prediction objectives (CAPOs) to evaluate the downstream effect of predictions on control without requiring the planner to be differentiable.
We propose two types of importance weights that weight the predictive likelihood: one using an attention model between agents, and another based on control variation when exchanging predicted trajectories for ground truth trajectories.
arXiv Detail & Related papers (2022-04-28T07:37:21Z)
- On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles [21.56253104577053]
Trajectory prediction is a critical component for autonomous vehicles to perform safe planning and navigation.
We propose a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error.
Case studies show that if an adversary drives a vehicle close to the target AV following the adversarial trajectory, the AV may make an inaccurate prediction and make unsafe driving decisions.
arXiv Detail & Related papers (2022-01-13T16:33:04Z)
- The Importance of Prior Knowledge in Precise Multimodal Prediction [71.74884391209955]
Roads have well-defined geometries, topologies, and traffic rules.
In this paper we propose to incorporate structured priors as a loss function.
We demonstrate the effectiveness of our approach on real-world self-driving datasets.
arXiv Detail & Related papers (2020-06-04T03:56:11Z)
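For the structured-prior idea in the entry above, one concrete way a road-geometry prior could be incorporated as a loss term is a differentiable off-road penalty sampled from a drivable-area signed distance field. The sketch below is purely illustrative and assumes such a field is available (sdf, origin, and resolution are hypothetical inputs); the paper's actual prior terms also cover topology and traffic rules.

```python
import torch
import torch.nn.functional as F

def offroad_prior_loss(pred_xy, sdf, origin, resolution):
    """Penalize predicted waypoints that leave the drivable area.

    pred_xy:    (B, T, 2) predicted positions in world metres
    sdf:        (H, W) signed distance to the road boundary, positive on the road
    origin:     (2,) world coordinates of cell sdf[0, 0]
    resolution: metres per grid cell
    """
    B, T, _ = pred_xy.shape
    H, W = sdf.shape
    # world metres -> fractional cell indices -> normalized [-1, 1] grid coordinates
    cell = (pred_xy - origin) / resolution
    grid = torch.stack([cell[..., 0] / (W - 1), cell[..., 1] / (H - 1)], dim=-1)
    grid = grid * 2 - 1
    # bilinearly sample the distance field at every predicted waypoint
    dist = F.grid_sample(sdf.expand(B, 1, H, W), grid.view(B, T, 1, 2),
                         align_corners=True).view(B, T)
    # hinge penalty: zero while on the road, grows linearly once a waypoint leaves it
    return F.relu(-dist).mean()
```

Such a term would be added with a weight to the usual trajectory regression loss; the hinge keeps it at zero whenever all predicted waypoints stay inside the drivable area.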