Are socially-aware trajectory prediction models really socially-aware?
- URL: http://arxiv.org/abs/2108.10879v1
- Date: Tue, 24 Aug 2021 17:59:09 GMT
- Title: Are socially-aware trajectory prediction models really socially-aware?
- Authors: Saeed Saadatnejad, Mohammadhossein Bahari, Pedram Khorsandi, Mohammad
Saneian, Seyed-Mohsen Moosavi-Dezfooli, Alexandre Alahi
- Abstract summary: We introduce a socially-attended attack to assess the social understanding of prediction models.
An attack is a small yet carefully-crafted perturbation designed to make predictors fail.
We show that our attack can be employed to increase the social understanding of state-of-the-art models.
- Score: 75.36961426916639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our field has recently witnessed an arms race of neural network-based
trajectory predictors. While these predictors are at the core of many
applications such as autonomous navigation or pedestrian flow simulations,
their adversarial robustness has not been carefully studied. In this paper, we
introduce a socially-attended attack to assess the social understanding of
prediction models in terms of collision avoidance. An attack is a small yet
carefully-crafted perturbation designed to make predictors fail. Technically, we define
collision as a failure mode of the output, and propose hard- and soft-attention
mechanisms to guide our attack. Thanks to our attack, we shed light on the
limitations of the current models in terms of their social understanding. We
demonstrate the strengths of our method on recent trajectory prediction
models. Finally, we show that our attack can be employed to increase the social
understanding of state-of-the-art models. The code is available online:
https://s-attack.github.io/
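To make the setup concrete, here is a minimal sketch of a collision-seeking attack in this spirit, assuming a PyTorch predictor that maps observed tracks of shape (agents, T_obs, 2) to predicted tracks of shape (agents, T_pred, 2). This is not the authors' released code (that is at the URL above); the choice of a single perturbed agent, the soft-attention weighting over time steps, and all names are illustrative assumptions.

```python
# Sketch: optimize a small perturbation on one agent's observed track so that
# the model's predictions for agents 0 and 1 come within collision distance.
import torch

def collision_attack(model, obs, eps=0.05, steps=50, lr=1e-2):
    delta = torch.zeros_like(obs[0], requires_grad=True)  # perturbation, agent 0 only
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = obs.clone()
        perturbed[0] = obs[0] + delta            # attack a single agent's history
        pred = model(perturbed)                  # (agents, T_pred, 2)
        dists = (pred[0] - pred[1]).norm(dim=-1) # per-step inter-agent distance
        # Soft attention over time: weight the steps where the two agents are
        # already closest, then push that weighted distance toward zero.
        attn = torch.softmax(-dists, dim=0).detach()
        loss = (attn * dists).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)              # keep the perturbation small
    return (obs[0] + delta).detach()
```

A hard-attention variant would instead optimize the distance at only the single closest time step, rather than the softmax-weighted sum.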
Related papers
- Manipulating Trajectory Prediction with Backdoors [94.22382859996453]
We describe and investigate four triggers that could affect trajectory prediction.
The model has good benign performance but is vulnerable to backdoors.
We evaluate a range of defenses against backdoors.
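For illustration only, a minimal data-poisoning sketch of a backdoor of this kind, assuming NumPy arrays of observed trajectories and future-trajectory labels; the zig-zag trigger, poisoning rate, and malicious target are all invented for the example and are not the paper's triggers.

```python
# Sketch: stamp a small trigger onto a fraction of training trajectories and
# redirect their future-trajectory labels toward a malicious behavior.
import numpy as np

def poison_dataset(trajs, futures, rate=0.05, seed=0):
    """trajs: (N, T_obs, 2) observed tracks; futures: (N, T_pred, 2) labels."""
    rng = np.random.default_rng(seed)
    trigger = 0.1 * np.array([(-1) ** t for t in range(trajs.shape[1])])
    idx = rng.choice(len(trajs), int(rate * len(trajs)), replace=False)
    trajs, futures = trajs.copy(), futures.copy()
    trajs[idx, :, 1] += trigger      # zig-zag trigger on the lateral coordinate
    futures[idx, :, 1] += 1.0        # poisoned label: veer sideways
    return trajs, futures
```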
arXiv Detail & Related papers (2023-12-21T14:01:51Z)
- Adversarial Attacks Against Uncertainty Quantification [10.655660123083607]
This work focuses on a different adversarial scenario in which the attacker is still interested in manipulating the uncertainty estimate.
In particular, the goal is to undermine the use of machine-learning models when their outputs are consumed by a downstream module or by a human operator.
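As a toy illustration of this scenario, here is a sketch of one way to inflate a model's apparent uncertainty, assuming a PyTorch classifier whose softmax entropy is read as the uncertainty estimate; the loss and all names are assumptions, not the paper's formulation.

```python
# Sketch: gradient ascent on predictive entropy, pushing the softmax toward
# uniform so a downstream consumer sees an "unsure" model.
import torch
import torch.nn.functional as F

def entropy_attack(model, x, eps=0.03, steps=20, lr=5e-3):
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        probs = F.softmax(model(x + delta), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean()
        loss = -entropy                  # maximize entropy
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)      # keep the perturbation imperceptible
    return (x + delta).detach()
```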
arXiv Detail & Related papers (2023-09-19T12:54:09Z)
- Analyzing the Impact of Adversarial Examples on Explainable Machine Learning [0.31498833540989407]
Adversarial attacks are a type of attack on machine learning models where an attacker deliberately modifies the inputs to cause the model to make incorrect predictions.
Work on the vulnerability of deep learning models has shown that it is easy to craft samples that force a model into incorrect predictions.
In this work, we analyze the impact of adversarial attacks on model interpretability in text classification problems.
arXiv Detail & Related papers (2023-07-17T08:50:36Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction [100.9772316028191]
In this paper, we experiment with a variety of adversarial attack configurations to fool three stock prediction victim models.
Our results show that the proposed attack method can achieve consistent success rates and cause significant monetary loss in trading simulation.
arXiv Detail & Related papers (2022-05-01T05:12:22Z)
- Black-box Adversarial Attacks on Network-wide Multi-step Traffic State Prediction Models [4.353029347463806]
We propose an adversarial attack framework by treating the prediction model as a black-box.
The adversary can query the prediction model as an oracle with any input and obtain the corresponding output.
To test the attack effectiveness, two state-of-the-art graph neural network-based models (GCGRNN and DCRNN) are examined.
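A minimal sketch of such an oracle-only, query-based attack using plain random search; `predict`, the error metric, and the query budget are illustrative assumptions rather than the paper's framework.

```python
# Sketch: the adversary only calls `predict` (the oracle) and keeps any small
# perturbation that degrades the traffic-state forecast the most.
import numpy as np

def black_box_attack(predict, x, y_true, eps=0.1, queries=500, step=0.01):
    rng = np.random.default_rng(0)
    delta = np.zeros_like(x)
    best_err = np.abs(predict(x) - y_true).mean()
    for _ in range(queries):
        cand = np.clip(delta + step * rng.standard_normal(x.shape), -eps, eps)
        err = np.abs(predict(x + cand) - y_true).mean()
        if err > best_err:               # keep whichever candidate hurts more
            best_err, delta = err, cand
    return x + delta
```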
arXiv Detail & Related papers (2021-10-17T03:45:35Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- Adversarial Refinement Network for Human Motion Prediction [61.50462663314644]
Two popular methods, recurrent neural networks and feed-forward deep networks, can predict a rough motion trend.
We propose an Adversarial Refinement Network (ARNet) following a simple yet effective coarse-to-fine mechanism with novel adversarial error augmentation; a sketch of the coarse-to-fine idea follows below.
arXiv Detail & Related papers (2020-11-23T05:42:20Z)
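As referenced in the entry above, here is a minimal sketch of the coarse-to-fine mechanism, assuming a PyTorch setup; module names and sizes are invented, and the adversarial error-augmentation component is omitted.

```python
# Sketch: a coarse predictor produces a rough next-pose estimate and a small
# refinement head predicts a residual correction on top of it.
import torch
import torch.nn as nn

class CoarseToFinePredictor(nn.Module):
    def __init__(self, pose_dim, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
        self.coarse_head = nn.Linear(hidden, pose_dim)
        self.refiner = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, pose_dim)
        )

    def forward(self, history):                    # history: (batch, T, pose_dim)
        feats, _ = self.encoder(history)
        coarse = self.coarse_head(feats[:, -1])    # rough motion trend
        return coarse + self.refiner(coarse)       # residual fine correction
```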
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.