Semi-supervised Semantics-guided Adversarial Training for Trajectory
Prediction
- URL: http://arxiv.org/abs/2205.14230v2
- Date: Tue, 21 Mar 2023 01:55:06 GMT
- Title: Semi-supervised Semantics-guided Adversarial Training for Trajectory
Prediction
- Authors: Ruochen Jiao, Xiangguo Liu, Takami Sato, Qi Alfred Chen and Qi Zhu
- Abstract summary: Adversarial attacks on trajectory prediction may mislead the prediction of future trajectories and induce unsafe planning.
We present a novel adversarial training method for trajectory prediction.
Our method can effectively mitigate the impact of adversarial attacks by up to 73% and outperform other popular defense methods.
- Score: 15.707419899141698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting the trajectories of surrounding objects is a critical task for
self-driving vehicles and many other autonomous systems. Recent works
demonstrate that adversarial attacks on trajectory prediction, where small
crafted perturbations are introduced to history trajectories, may significantly
mislead the prediction of future trajectories and induce unsafe planning.
However, few works have addressed enhancing the robustness of this important
safety-critical task. In this paper, we present a novel adversarial training
method for trajectory prediction. Compared with typical adversarial training on
image tasks, our work is challenged by inherently more random inputs with rich context and
a lack of class labels. To address these challenges, we propose a method based
on a semi-supervised adversarial autoencoder, which models disentangled
semantic features with domain knowledge and provides additional latent labels
for the adversarial training. Extensive experiments with different types of
attacks demonstrate that our Semi-supervised Semantics-guided Adversarial
Training (SSAT) method can effectively mitigate the impact of adversarial
attacks by up to 73% and outperform other popular defense methods. In addition,
experiments show that our method can significantly improve the system's robust
generalization to unseen patterns of attacks. We believe that such a
semantics-guided architecture and the resulting advancement in robust
generalization are an important step toward developing robust prediction
models and enabling safe decision-making.
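To make the setup concrete, the sketch below (not the authors' released code) illustrates the two ingredients the abstract describes: a small crafted perturbation of the observed history trajectory, and an adversarial training step that additionally uses the semantic latent of an autoencoder as extra supervision. The module interfaces, loss weights, and the use of the clean latent as a pseudo label are assumptions made for illustration.

```python
# Minimal sketch (PyTorch), assuming a predictor that maps an observed history
# trajectory to a future trajectory and an autoencoder whose encode() latent
# carries disentangled semantics. Interfaces, loss weights, and the
# latent-consistency term are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def pgd_perturb_history(predictor, history, future, eps=0.5, alpha=0.1, steps=5):
    """Craft a small perturbation of the history trajectory that increases the
    prediction error on the ground-truth future trajectory."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        pred = predictor(history + delta)
        loss = F.mse_loss(pred, future)              # error to be maximized
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()             # gradient-ascent step
            delta.clamp_(-eps, eps)                  # keep the perturbation small
    return delta.detach()

def adversarial_training_step(predictor, autoencoder, optimizer,
                              history, future, lam=0.1):
    """One training step: fit clean and adversarial inputs, and keep the
    semantic latent of the adversarial input close to the clean one, which
    acts as an additional (pseudo) label during adversarial training."""
    delta = pgd_perturb_history(predictor, history, future)
    adv_history = history + delta

    z_clean = autoencoder.encode(history).detach()   # semantic pseudo label
    z_adv = autoencoder.encode(adv_history)

    loss = (F.mse_loss(predictor(history), future)
            + F.mse_loss(predictor(adv_history), future)
            + lam * F.mse_loss(z_adv, z_clean))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a semantics-guided setup the latent could also be decoded back into a trajectory so that perturbed inputs stay physically and semantically plausible; that part is omitted here for brevity.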
Related papers
- Enhancing Adversarial Robustness via Uncertainty-Aware Distributional Adversarial Training [43.766504246864045]
We propose a novel uncertainty-aware distributional adversarial training method.
Our approach achieves state-of-the-art adversarial robustness and maintains natural performance.
arXiv Detail & Related papers (2024-11-05T07:26:24Z) - Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Traj-MAE: Masked Autoencoders for Trajectory Prediction [69.7885837428344]
Trajectory prediction has been a crucial task in building a reliable autonomous driving system by anticipating possible dangers.
We propose an efficient masked autoencoder for trajectory prediction (Traj-MAE) that better represents the complicated behaviors of agents in the driving environment.
Our experimental results in both multi-agent and single-agent settings demonstrate that Traj-MAE achieves competitive results with state-of-the-art methods.
arXiv Detail & Related papers (2023-03-12T16:23:27Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z) - Robust Transferable Feature Extractors: Learning to Defend Pre-Trained
Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z) - Resisting Deep Learning Models Against Adversarial Attack
Transferability via Feature Randomization [17.756085566366167]
We propose a feature randomization-based approach that resists eight adversarial attacks targeting deep learning models.
Our methodology can secure the target network and resist adversarial attack transferability by over 60%.
arXiv Detail & Related papers (2022-09-11T20:14:12Z) - Self-Ensemble Adversarial Training for Improved Robustness [14.244311026737666]
Among the many kinds of defense methods, adversarial training is the strongest strategy against various adversarial attacks.
Recent works mainly focus on developing new loss functions or regularizers, attempting to find the unique optimal point in the weight space.
We devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method for yielding a robust classifier by averaging the weights of history models (a minimal sketch of such weight averaging follows this list).
arXiv Detail & Related papers (2022-03-18T01:12:18Z)
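The SEAT entry above hinges on averaging the weights of models seen during training. A minimal sketch of such weight averaging, using an exponential moving average whose decay value and once-per-step update schedule are assumptions for illustration, is:

```python
# Minimal sketch (PyTorch): keep a weight-averaged copy of the model during
# training, in the spirit of the SEAT entry above. The EMA decay and the
# update schedule are illustrative assumptions, not the paper's exact procedure.
import copy
import torch

class AveragedModel:
    def __init__(self, model, decay=0.999):
        self.avg = copy.deepcopy(model)          # holds the averaged weights
        self.decay = decay
        for p in self.avg.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # Exponential moving average over the history of training weights.
        for p_avg, p in zip(self.avg.parameters(), model.parameters()):
            p_avg.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

# Usage: call averaged.update(model) after each optimizer step and evaluate
# robustness with averaged.avg rather than the most recent weights.
```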