Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory
Prediction in Autonomous Driving
- URL: http://arxiv.org/abs/2306.15755v2
- Date: Wed, 22 Nov 2023 16:16:43 GMT
- Title: Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory
Prediction in Autonomous Driving
- Authors: Mozhgan Pourkeshavarz, Mohammad Sabokrou, Amir Rasouli
- Abstract summary: We propose a novel adversarial backdoor attack against trajectory prediction models.
Our attack affects the victim at training time via naturalistic, hence stealthy, poisoned samples crafted using a novel two-step approach.
We show that the proposed attack is highly effective, as it can significantly hinder the performance of prediction models.
- Score: 18.72382517467458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving, behavior prediction is fundamental for safe motion
planning, hence the security and robustness of prediction models against
adversarial attacks are of paramount importance. We propose a novel adversarial
backdoor attack against trajectory prediction models as a means of studying
their potential vulnerabilities. Our attack affects the victim at training time
via naturalistic, hence stealthy, poisoned samples crafted using a novel
two-step approach. First, the triggers are crafted by perturbing the trajectory
of the attacking vehicle and then disguised by transforming the scene using a
bi-level optimization technique. The proposed attack does not depend on a
particular model architecture and operates in a black-box manner, thus can be
effective without any knowledge of the victim model. We conduct extensive
empirical studies using state-of-the-art prediction models on two benchmark
datasets with metrics customized for trajectory prediction. We show that the
proposed attack is highly effective: it significantly hinders the performance
of prediction models, remains unnoticeable to the victims, and is efficient,
forcing the victim to generate malicious behavior even under constrained
conditions. Via ablative studies, we analyze the impact of different attack
design choices, followed by an evaluation of existing defence mechanisms against
the proposed attack.
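To make the two-step idea above concrete, below is a minimal, hypothetical sketch in PyTorch: step one crafts a trigger by perturbing the attacking vehicle's past trajectory against an attacker-side surrogate predictor (consistent with the black-box setting), and step two disguises the trigger by optimizing a small scene transformation that trades attack strength against closeness to the clean scene, a simplified stand-in for the paper's bi-level formulation. The surrogate architecture, tensor shapes, loss terms, and all function names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

T_PAST, T_FUT, SCENE_DIM = 10, 12, 16  # assumed horizons and scene feature size


class ToySurrogate(nn.Module):
    """Stand-in trajectory predictor: past (x, y) track + scene features -> future track."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(T_PAST * 2 + SCENE_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, T_FUT * 2),
        )

    def forward(self, past, scene):
        x = torch.cat([past.flatten(-2), scene], dim=-1)
        return self.net(x).view(-1, T_FUT, 2)


def craft_trigger(surrogate, past, scene, target, eps=0.5, steps=50, lr=0.05):
    """Step 1 (sketch): craft the trigger as a bounded perturbation of the attacker's past trajectory."""
    delta = torch.zeros_like(past, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = surrogate(past + delta, scene)
        loss = ((pred - target) ** 2).mean()  # pull the prediction toward the malicious future
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small, hence plausible
    return (past + delta).detach()


def disguise_scene(surrogate, trigger_past, scene, target, steps=50, lr=0.05, nat_w=1.0):
    """Step 2 (sketch): transform the scene so the poisoned sample still steers the
    surrogate toward the target while staying close to the clean scene."""
    theta = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        pred = surrogate(trigger_past, scene + theta)
        attack_loss = ((pred - target) ** 2).mean()  # backdoor objective
        natural_loss = (theta ** 2).mean()           # naturalness proxy: small scene change
        loss = attack_loss + nat_w * natural_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (scene + theta).detach()


if __name__ == "__main__":
    surrogate = ToySurrogate()
    past = torch.randn(1, T_PAST, 2)    # attacker's observed trajectory
    scene = torch.randn(1, SCENE_DIM)   # flattened scene context (assumed representation)
    target = torch.randn(1, T_FUT, 2)   # malicious future the backdoor should induce
    trigger = craft_trigger(surrogate, past, scene, target)
    poisoned_scene = disguise_scene(surrogate, trigger, scene, target)
    # The (trigger, poisoned_scene) pair would then be injected into the victim's training data.
```

In this sketch the naturalness term is only a proxy (a penalty on the scene change); the paper's actual bi-level scheme and poisoning-injection procedure are more elaborate and are not reproduced here.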
Related papers
- A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving [23.08193005790747]
Existing attacks compromise the prediction model of a victim AV.
We propose a novel two-stage attack framework to realize the single-point attack.
Our attack causes a collision rate of up to 63% and a variety of hazardous responses from the victim AV.
arXiv Detail & Related papers (2024-06-17T16:26:00Z)
- Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security [1.949927790632678]
In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer.
The analysis reveals that, among all inputs, almost all of the perturbation sensitivities for both models lie within the most recent position and velocity states.
We additionally demonstrate that, despite the dominant sensitivity to state history perturbations, an undetectable image map perturbation can induce large prediction error increases in both models.
arXiv Detail & Related papers (2024-01-18T18:47:29Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Targeted Attacks on Timeseries Forecasting [0.6719751155411076]
We propose a novel formulation of Directional, Amplitudinal, and Temporal targeted adversarial attacks on time series forecasting models.
These targeted attacks create a specific impact on the amplitude and direction of the output prediction.
Our experimental results show how targeted attacks on time series models are viable and are more powerful in terms of statistical similarity.
arXiv Detail & Related papers (2023-01-27T06:09:42Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms [17.75675910162935]
A new attack pattern negatively impacts the forecasting of a target time series.
We develop two defense strategies to mitigate the impact of such attacks.
Experiments on real-world datasets confirm that our attack schemes are powerful.
arXiv Detail & Related papers (2022-07-19T22:00:41Z)
- Learning to Learn Transferable Attack [77.67399621530052]
A transfer adversarial attack is a non-trivial black-box attack that crafts adversarial perturbations on a surrogate model and then applies them to the victim model.
We propose a Learning to Learn Transferable Attack (LLTA) method, which makes the adversarial perturbations more generalized via learning from both data and model augmentation.
Empirical results on a widely used dataset demonstrate the effectiveness of our attack method, with a 12.85% higher transfer-attack success rate than state-of-the-art methods.
arXiv Detail & Related papers (2021-12-10T07:24:21Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.