Spatiotemporal Attacks for Embodied Agents
- URL: http://arxiv.org/abs/2005.09161v3
- Date: Tue, 17 Nov 2020 14:13:55 GMT
- Title: Spatiotemporal Attacks for Embodied Agents
- Authors: Aishan Liu, Tairan Huang, Xianglong Liu, Yitao Xu, Yuqing Ma, Xinyun
Chen, Stephen J. Maybank, Dacheng Tao
- Abstract summary: We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
- Score: 119.43832001301041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks are valuable for providing insights into the blind-spots
of deep learning models and can help improve their robustness. Existing work on
adversarial attacks has mainly focused on static scenes; however, it remains
unclear whether such attacks are effective against embodied agents, which
navigate and interact with a dynamic environment. In this work, we take the
first step toward studying adversarial attacks for embodied agents. In particular, we
generate spatiotemporal perturbations to form 3D adversarial examples, which
exploit the interaction history in both the temporal and spatial dimensions.
Regarding the temporal dimension, since agents make predictions based on
historical observations, we develop a trajectory attention module to explore
scene view contributions, which further helps localize the 3D objects that
appear with the highest stimuli. Guided by these temporal clues, along the
spatial dimension we adversarially perturb the physical properties
(e.g., texture and 3D shape) of the contextual objects that appear in the
most important scene views. Extensive experiments on the EQA-v1 dataset,
covering several embodied tasks in both white-box and black-box settings,
demonstrate that our perturbations have strong attack and generalization abilities.
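The attack described above couples temporal attention with spatial perturbation. Below is a minimal, hypothetical sketch of that idea in PyTorch, assuming a differentiable renderer (`render`), an embodied agent (`agent`) that maps a sequence of scene views to action logits, and a vector of texture parameters for the contextual objects; none of these names or shapes come from the paper, and the sketch only illustrates how trajectory-attention weights over historical views could steer a bounded texture perturbation.

```python
# Hypothetical sketch only: `agent`, `render`, and all tensor shapes are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def trajectory_attention(view_features: torch.Tensor) -> torch.Tensor:
    """Softmax over per-view feature norms: a crude proxy for how much each
    historical view contributes to the agent's decision."""
    scores = view_features.norm(dim=-1)          # (T,)
    return F.softmax(scores, dim=0)


def spatiotemporal_attack(agent, render, texture, views, target,
                          steps=50, eps=0.05, lr=0.01):
    """Optimize a bounded texture perturbation so the agent's predictions on
    the most-attended views are degraded.

    agent:   callable, (T, C, H, W) view sequence -> (T, num_actions) logits
    render:  differentiable callable, (texture, views) -> (T, C, H, W) images
    texture: (P,) tensor of texture parameters of the contextual objects
    target:  (T,) tensor with the agent's originally intended actions
    """
    delta = torch.zeros_like(texture, requires_grad=True)
    for _ in range(steps):
        rendered = render(texture + delta, views)                   # (T, C, H, W)
        attn = trajectory_attention(rendered.flatten(1)).detach()   # (T,)
        logits = agent(rendered)                                    # (T, A)
        # Gradient ascent on the attention-weighted loss of the agent.
        loss = (attn * F.cross_entropy(logits, target, reduction="none")).sum()
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (texture + delta).detach()
```

In this sketch, views that the attention proxy scores highly dominate the loss, so the perturbation budget is effectively concentrated on the objects visible in the most influential historical observations.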
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- Instantaneous Perception of Moving Objects in 3D [86.38144604783207]
The perception of 3D motion of surrounding traffic participants is crucial for driving safety.
We propose to leverage local occupancy completion of object point clouds to densify the shape cue, and mitigate the impact of swimming artifacts.
Extensive experiments demonstrate superior performance compared to standard 3D motion estimation approaches.
arXiv Detail & Related papers (2024-05-05T01:07:24Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Adversarial Vulnerability of Temporal Feature Networks for Object Detection [5.525433572437716]
We study whether temporal feature networks for object detection are vulnerable to universal adversarial attacks.
We evaluate attacks of two types: imperceptible noise for the whole image and locally-bound adversarial patch.
Our experiments on the KITTI and nuScenes datasets demonstrate that a model robustified via K-PGD is able to withstand the studied attacks.
arXiv Detail & Related papers (2022-08-23T07:08:54Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the attacker's ability to induce pixel misclassifications.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Imperceptible Transfer Attack and Defense on 3D Point Cloud Classification [12.587561231609083]
We study 3D point cloud attacks from two new and challenging perspectives.
We develop an adversarial transformation model to generate the most harmful distortions and enforce the adversarial examples to resist it.
We train more robust black-box 3D models to defend against such ITA attacks by learning more discriminative point cloud representations.
arXiv Detail & Related papers (2021-11-22T05:07:36Z)
- Dual Attention Suppression Attack: Generate Adversarial Camouflage in Physical World [33.63565658548095]
Motivated by the viewpoint that attention reflects the intrinsic characteristics of the recognition process, this paper proposes the Dual Attention Suppression (DAS) attack.
We generate transferable adversarial camouflages by distracting the model-shared similar attention patterns from the target to non-target regions.
Based on the fact that human visual attention always focuses on salient items, we evade the human-specific bottom-up attention to generate visually-natural camouflages.
arXiv Detail & Related papers (2021-03-01T14:46:43Z)
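The Dual Attention Suppression summary directly above centers on pushing model attention away from the target. A rough, hypothetical sketch of such an attention-suppression objective is given below: a bounded perturbation is optimized so that a CAM-like attention proxy loses mass over the target region. The backbone, mask, and attention proxy are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch only: the backbone, mask, and CAM-like proxy are
# illustrative assumptions, not the DAS paper's implementation.
import torch
import torch.nn.functional as F


def cam_like_attention(feature_map: torch.Tensor, size) -> torch.Tensor:
    """Channel-averaged activations, upsampled to image size, as a rough
    stand-in for the model's spatial attention."""
    attn = feature_map.mean(dim=1, keepdim=True)                   # (N, 1, h, w)
    attn = F.interpolate(attn, size=size, mode="bilinear", align_corners=False)
    return attn.squeeze(1)                                         # (N, H, W)


def suppress_target_attention(backbone, image, target_mask,
                              steps=100, lr=0.01, eps=0.1):
    """Optimize a bounded perturbation that shifts attention mass off the
    target region given by `target_mask` (H, W, values in {0, 1})."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x = (image + delta).clamp(0, 1).unsqueeze(0)               # (1, C, H, W)
        feats = backbone(x)                                        # (1, K, h, w) assumed
        attn = cam_like_attention(feats, image.shape[-2:])[0]      # (H, W)
        # Fraction of attention mass still falling on the target object.
        loss = (attn * target_mask).sum() / (attn.sum() + 1e-8)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return delta.detach()
```

Swapping the CAM-like proxy for the attacked model's own attention maps and adding a misclassification term would move this sketch closer to a full camouflage attack.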