Embodied Adversarial Attack: A Dynamic Robust Physical Attack in
Autonomous Driving
- URL: http://arxiv.org/abs/2312.09554v2
- Date: Wed, 28 Feb 2024 12:31:52 GMT
- Title: Embodied Adversarial Attack: A Dynamic Robust Physical Attack in
Autonomous Driving
- Authors: Yitong Sun, Yao Huang, Xingxing Wei
- Abstract summary: Embodied Adversarial Attack (EAA) aims to employ the paradigm of embodied intelligence: Perception-Decision-Control.
EAA adopts the laser, a highly manipulable medium, to implement physical attacks, and further trains an attack agent with reinforcement learning so that it can instantaneously determine the best attack strategy.
A variety of experiments verify the high effectiveness of our method in complex scenes.
- Score: 15.427248934229233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As physical adversarial attacks become extensively applied in unearthing the
potential risk of security-critical scenarios, especially in autonomous
driving, their vulnerability to environmental changes has also been brought to
light. The non-robust nature of physical adversarial attack methods
consequently leads to unstable performance. To enhance the robustness of
physical adversarial attacks in the real world, instead of statically
optimizing a robust adversarial example via an off-line training manner like
the existing methods, this paper proposes a new robust adversarial attack
framework: Embodied Adversarial Attack (EAA) from the perspective of dynamic
adaptation, which aims to employ the paradigm of embodied intelligence:
Perception-Decision-Control to dynamically adjust the optimal attack strategy
according to the current situations in real time. For the perception module,
given the challenge of needing simulation for the victim's viewpoint, EAA
innovatively devises a Perspective Transformation Network to estimate the
target's transformation from the attacker's perspective. For the decision and
control module, EAA adopts the laser, a highly manipulable medium, to implement
physical attacks, and further trains an attack agent with reinforcement
learning to make it capable of instantaneously determining the best attack
strategy based on the perceived information. Finally, we apply our framework to
the autonomous driving scenario. A variety of experiments verify the high
effectiveness of our method in complex scenes.
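As a rough illustration of the Perception-Decision-Control loop the abstract describes, the sketch below shows only the control flow; every class, method, and parameter name (PerspectiveTransformationNet, LaserAttackAgent, embodied_attack_step, the laser parameters) is a hypothetical stand-in, not the authors' implementation.

```python
# Hypothetical sketch of an embodied attack loop (Perception-Decision-Control).
# None of these components come from the paper; they illustrate the control
# flow only.

class PerspectiveTransformationNet:
    """Estimates the victim's view of the target from the attacker's view."""
    def perceive(self, attacker_frame):
        # A real network would regress a viewpoint transformation; here we
        # simply pass the attacker's observation through unchanged.
        return attacker_frame

class LaserAttackAgent:
    """RL policy mapping the perceived state to laser attack parameters."""
    def decide(self, state):
        # A trained policy would output attack parameters such as the laser's
        # angle, position, and wavelength; these values are placeholders.
        return {"angle": 0.0, "position": (0, 0), "wavelength_nm": 532}

def embodied_attack_step(agent, perceiver, attacker_frame, emit_laser):
    state = perceiver.perceive(attacker_frame)   # Perception
    action = agent.decide(state)                 # Decision
    emit_laser(action)                           # Control
    return action

action = embodied_attack_step(
    LaserAttackAgent(), PerspectiveTransformationNet(),
    attacker_frame=[[0.0]], emit_laser=lambda a: None)
```

In the paper's setting this loop would run once per frame, so the attack strategy tracks the current scene rather than being fixed offline.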
Related papers
- Embodied Active Defense: Leveraging Recurrent Feedback to Counter Adversarial Patches [37.317604316147985]
The vulnerability of deep neural networks to adversarial patches has motivated numerous defense strategies for boosting model robustness.
We develop Embodied Active Defense (EAD), a proactive defensive strategy that actively contextualizes environmental information to address misaligned adversarial patches in 3D real-world settings.
arXiv Detail & Related papers (2024-03-31T03:02:35Z) - Adversarial Markov Games: On Adaptive Decision-Based Attacks and
Defenses [23.056260309055283]
We show how both attacks and defenses can benefit from this formulation and from learning from each other through interaction.
We demonstrate that active defenses, which control how the system responds, are a necessary complement to model hardening when facing decision-based attacks.
We lay out effective strategies for ensuring the robustness of ML-based systems deployed in the real world.
arXiv Detail & Related papers (2023-12-20T21:24:52Z) - Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z) - Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed the Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z) - LAS-AT: Adversarial Training with Learnable Attack Strategy [82.88724890186094]
Our "learnable attack strategy" framework, dubbed LAS-AT, learns to automatically produce attack strategies to improve model robustness.
Our framework is composed of a target network that uses AEs for training to improve robustness and a strategy network that produces attack strategies to control the AE generation.
arXiv Detail & Related papers (2022-03-13T10:21:26Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Targeted Attack on Deep RL-based Autonomous Driving with Learned Visual
Patterns [18.694795507945603]
Recent studies demonstrated the vulnerability of control policies learned through deep reinforcement learning against adversarial attacks.
This paper investigates the feasibility of targeted attacks through visually learned patterns placed on physical objects in the environment.
arXiv Detail & Related papers (2021-09-16T04:59:06Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA), which is trained to automatically align features under arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
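The LAS-AT entry above describes a two-network setup: a strategy network proposes attack hyperparameters, and a target network trains on the adversarial examples (AEs) generated under them. The sketch below shows only that division of labor; the function names, the randomly sampled PGD-style hyperparameters, and the placeholder AE generator are all illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of a LAS-AT-style training step: a strategy network
# controls AE generation, and the target network trains on the resulting AEs.
import random

def strategy_network(example):
    # A learned strategy network would condition on the example and training
    # state; here we sample PGD-style hyperparameters at random.
    return {"epsilon": random.choice([2, 4, 8]) / 255,
            "num_steps": random.choice([5, 10]),
            "step_size": 1 / 255}

def generate_adversarial_example(example, strategy):
    # Placeholder for a PGD loop run under the sampled strategy; a real
    # implementation would perturb `example` within the epsilon ball.
    return example

def train_on_batch(update_target, batch):
    """One adversarial-training step over a batch."""
    for x in batch:
        strategy = strategy_network(x)
        x_adv = generate_adversarial_example(x, strategy)
        update_target(x_adv)  # train the target network on the AE

updates = []
train_on_batch(updates.append, batch=[0.1, 0.2, 0.3])
```

The key design point is that the strategy network, not a fixed schedule, decides how each adversarial example is generated, so attack strength can adapt as the target network hardens.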
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.