Targeted Physical-World Attention Attack on Deep Learning Models in Road
Sign Recognition
- URL: http://arxiv.org/abs/2010.04331v3
- Date: Fri, 13 Aug 2021 01:29:14 GMT
- Title: Targeted Physical-World Attention Attack on Deep Learning Models in Road
Sign Recognition
- Authors: Xinghao Yang, Weifeng Liu, Shengli Zhang, Wei Liu, Dacheng Tao
- Abstract summary: This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
- Score: 79.50450766097686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world traffic sign recognition is an important step towards building
autonomous vehicles, most of which are highly dependent on Deep Neural Networks
(DNNs). Recent studies have demonstrated that DNNs are surprisingly susceptible to
adversarial examples. Many attack methods have been proposed to understand and
generate adversarial examples, such as gradient-based, score-based,
decision-based, and transfer-based attacks. However, most of
these algorithms are ineffective in real-world road sign attacks, because (1)
iteratively learning perturbations for each frame is not realistic for a
fast-moving car and (2) most optimization algorithms traverse all pixels equally
without considering their diverse contributions. To alleviate these problems,
this paper proposes the targeted attention attack (TAA) method for real-world
road sign attacks. Specifically, we make the following contributions: (1)
we leverage the soft attention map to highlight the important pixels and skip
zero-contribution areas, which also helps to generate natural
perturbations; (2) we design an efficient universal attack that optimizes a
single perturbation/noise based on a set of training images under the guidance
of the pre-trained attention map; (3) we design a simple objective function
that can be easily optimized; (4) we evaluate the effectiveness of TAA on
real-world data sets. Experimental results validate that the TAA method improves the
attack success rate (by nearly 10%) and reduces the perturbation loss (by about a
quarter) compared with the popular RP2 method. Additionally, our TAA also
exhibits desirable properties, e.g., transferability and generalization capability.
We provide code and data to ensure reproducibility:
https://github.com/AdvAttack/RoadSignAttack.
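To make the pipeline described in the abstract more concrete, the following is a minimal sketch of an attention-guided universal perturbation in PyTorch. It assumes a pre-trained classifier `model`, a precomputed soft attention map, a small batch of road-sign training images, and a target class; the function name, hyperparameters, and the cross-entropy-plus-L2 objective are illustrative stand-ins, not the authors' released TAA implementation (see the repository linked above for that).

```python
# Minimal sketch of an attention-guided universal perturbation.
# Assumptions: `model` is a pre-trained classifier, `images` is a (N, C, H, W)
# tensor of road-sign crops in [0, 1], and `attention_map` is a precomputed
# soft attention map broadcastable to a single image. Not the authors' TAA code.
import torch
import torch.nn.functional as F

def attention_guided_universal_attack(model, images, attention_map,
                                      target_class, steps=200, lr=0.01,
                                      eps=8 / 255):
    """Optimize one perturbation shared by all images, masked by attention."""
    model.eval()
    delta = torch.zeros_like(images[0], requires_grad=True)  # single universal noise
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.full((images.size(0),), target_class,
                        dtype=torch.long, device=images.device)

    for _ in range(steps):
        # Apply the perturbation only where the attention map is non-zero,
        # so zero-contribution pixels are skipped and the noise stays natural.
        masked_delta = delta * attention_map
        adv = torch.clamp(images + masked_delta, 0.0, 1.0)
        logits = model(adv)
        # Simple stand-in objective: targeted cross-entropy plus a small L2
        # penalty on the perturbation (a proxy for the paper's perturbation loss).
        loss = F.cross_entropy(logits, target) + 0.01 * masked_delta.norm()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the universal noise bounded

    return (delta * attention_map).detach()
```

In this sketch the single perturbation `delta` is shared by every training image and is applied only where the attention map is non-zero, mirroring points (1) and (2) of the abstract; the released code linked above should be treated as the authoritative implementation.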
Related papers
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Analyzing Robustness of the Deep Reinforcement Learning Algorithm in
Ramp Metering Applications Considering False Data Injection Attack and
Defense [0.0]
Ramp metering is the practice of controlling the flow of vehicles entering the highway mainlines.
In this study, the Deep Q-Learning algorithm uses only loop detector information as input.
The model can be applied to almost any ramp metering site regardless of road geometry and layout.
arXiv Detail & Related papers (2023-01-28T00:40:46Z) - Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z) - Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z) - A Hybrid Defense Method against Adversarial Attacks on Traffic Sign
Classifiers in Autonomous Vehicles [4.585587646404074]
Adversarial attacks can make deep neural network (DNN) models predict incorrect output labels for autonomous vehicles (AVs).
This study develops a resilient traffic sign classifier for AVs that uses a hybrid defense method.
We find that our hybrid defense method achieves 99% average traffic sign classification accuracy for the no attack scenario and 88% average traffic sign classification accuracy for all attack scenarios.
arXiv Detail & Related papers (2022-04-25T02:13:31Z) - Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine sparsity with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly handle the sparsity constraint and the perturbation bound. A generic sketch of this idea appears after this list.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Progressive Defense Against Adversarial Attacks for Deep Learning as a
Service in Internet of Things [9.753864027359521]
Deep Neural Networks (DNNs) can be easily misled by adding relatively small adversarial perturbations to the input.
We present a defense strategy called progressive defense against adversarial attacks (PDAAA) for efficiently and effectively filtering out adversarial pixel mutations.
Results show it outperforms the state-of-the-art while reducing the cost of model training by 50% on average.
arXiv Detail & Related papers (2020-10-15T06:40:53Z) - Dirty Road Can Attack: Security of Deep Learning based Automated Lane
Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
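As a side note on the sparse-attack entry above, the following is a generic sketch of a sparse, l_infty-bounded attack with a homotopy-style schedule that gradually tightens the pixel budget. All names and hyperparameters are illustrative assumptions, and the hard-thresholding projection is a simplification; it is not the algorithm from the homotopy paper.

```python
# Generic sketch of a sparse, l_infinity-bounded attack with a homotopy-style
# schedule that shrinks the allowed number of perturbed pixels over time.
# Illustrative only; not the algorithm from the homotopy paper listed above.
import torch
import torch.nn.functional as F

def sparse_linf_attack(model, image, label, eps=8 / 255, steps=100,
                       start_k=2000, end_k=100):
    """Untargeted attack on one (C, H, W) image; `label` is a 0-dim long tensor.

    Keeps at most roughly `end_k` perturbed pixels by the final iteration.
    """
    model.eval()
    delta = torch.zeros_like(image, requires_grad=True)

    for t in range(steps):
        loss = F.cross_entropy(model((image + delta).unsqueeze(0)),
                               label.reshape(1))
        loss.backward()
        with torch.no_grad():
            delta += 0.01 * delta.grad.sign()       # gradient ascent step
            delta.clamp_(-eps, eps)                 # l_infinity bound
            # Homotopy-style schedule: tighten the pixel budget from start_k
            # down to end_k as the iterations progress.
            k = int(start_k + (end_k - start_k) * t / (steps - 1))
            mag = delta.abs().sum(dim=0)            # per-pixel magnitude (H, W)
            if mag.numel() > k:
                thresh = mag.flatten().kthvalue(mag.numel() - k).values
                delta *= (mag > thresh).to(delta.dtype)  # keep only top-k pixels
        delta.grad.zero_()

    return delta.detach()
```

The alternation between a bounded gradient step and a top-k projection is a common way to approximate joint l_0/l_infty constraints; the homotopy paper's actual optimization differs and should be consulted directly.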
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.