Adversarial Attacks on Traffic Sign Recognition: A Survey
- URL: http://arxiv.org/abs/2307.08278v1
- Date: Mon, 17 Jul 2023 06:58:22 GMT
- Title: Adversarial Attacks on Traffic Sign Recognition: A Survey
- Authors: Svetlana Pavlitska, Nico Lambing and J. Marius Zöllner
- Abstract summary: Traffic signs are promising for adversarial attack research due to the ease of performing real-world attacks using printed signs or stickers.
We provide an overview of the latest advancements and highlight the existing research areas that require further investigation.
- Score: 2.658812114255374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic sign recognition is an essential component of perception in
autonomous vehicles, which is currently performed almost exclusively with deep
neural networks (DNNs). However, DNNs are known to be vulnerable to adversarial
attacks. Several previous works have demonstrated the feasibility of
adversarial attacks on traffic sign recognition models. Traffic signs are
particularly promising for adversarial attack research due to the ease of
performing real-world attacks using printed signs or stickers. In this work, we
survey existing works performing either digital or real-world attacks on
traffic sign detection and classification models. We provide an overview of the
latest advancements and highlight the existing research areas that require
further investigation.
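To make the digital side of this attack surface concrete, below is a minimal, illustrative sketch of a fast gradient sign method (FGSM) perturbation applied to a traffic sign classifier. The classifier, input tensor, and class index are placeholder assumptions for illustration only and are not taken from any surveyed paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySignClassifier(nn.Module):
    """Placeholder CNN for 32x32 RGB sign crops with 43 GTSRB-style classes."""
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` within an L-infinity ball of radius epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = TinySignClassifier().eval()
    x = torch.rand(1, 3, 32, 32)   # stand-in for a sign crop scaled to [0, 1]
    y = torch.tensor([14])         # e.g. the "stop" class index in GTSRB
    x_adv = fgsm_attack(model, x, y)
    print("max pixel change:", (x_adv - x).abs().max().item())
```

The real-world attacks surveyed here differ mainly in that the perturbation must survive printing, distance, viewing angle, and lighting, which is why sticker- and patch-style formulations restrict the perturbation spatially rather than bounding it per pixel.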
Related papers
- A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System [2.962613983209398]
The authors show how a generative adversarial network-based deepfake attack can be crafted to fool AV traffic sign classification systems.
They develop a deepfake traffic sign image detection strategy leveraging hybrid quantum-classical neural networks (NNs).
The results indicate that the hybrid quantum-classical NNs for deepfake detection could achieve similar or higher performance than the baseline classical convolutional NNs in most cases.
arXiv Detail & Related papers (2024-09-25T19:44:56Z)
- Secure Traffic Sign Recognition: An Attention-Enabled Universal Image Inpainting Mechanism against Light Patch Attacks [15.915892134535842]
Researchers recently identified a new attack vector to deceive sign recognition systems: projecting well-designed adversarial light patches onto traffic signs.
To effectively counter this security threat, we propose a universal image inpainting mechanism, namely, SafeSign.
It relies on attention-enabled multi-view image fusion to repair traffic signs contaminated by adversarial light patches.
arXiv Detail & Related papers (2024-09-06T08:58:21Z)
- Navigating Connected Car Cybersecurity: Location Anomaly Detection with RAN Data [2.147995542780459]
Cyber-attacks, including hijacking and spoofing, pose significant threats to connected cars.
This paper presents a novel approach for identifying potential attacks through Radio Access Network (RAN) event monitoring.
The major contribution of this paper is a location anomaly detection module that identifies devices that appear in multiple locations simultaneously.
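As a rough sketch of that core idea (not the paper's implementation), one can flag any device whose consecutive RAN sightings imply a physically impossible speed; the observation format and speed threshold below are assumptions made for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def location_anomalies(observations, max_speed_kmh=250.0):
    """observations: iterable of (device_id, unix_seconds, lat, lon).
    Yields device ids whose consecutive sightings imply a speed above max_speed_kmh."""
    last_seen = {}
    for dev, t, lat, lon in sorted(observations, key=lambda o: (o[0], o[1])):
        if dev in last_seen:
            t0, lat0, lon0 = last_seen[dev]
            hours = max(t - t0, 1) / 3600.0          # avoid division by zero
            if haversine_km(lat0, lon0, lat, lon) / hours > max_speed_kmh:
                yield dev
        last_seen[dev] = (t, lat, lon)

# Example: the same identifier reported hundreds of kilometres apart within one minute.
obs = [("car-42", 0, 48.14, 11.58), ("car-42", 60, 50.94, 6.96)]
print(list(location_anomalies(obs)))   # -> ['car-42']
```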
arXiv Detail & Related papers (2024-07-02T22:42:45Z)
- Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach [0.0]
We propose an ILP-based approach for stop sign detection in Autonomous Vehicles.
It is more robust against adversarial attacks, as it mimics human-like perception.
It is able to correctly identify all targeted stop signs, even in the presence of RP2 and AdvCam attacks.
arXiv Detail & Related papers (2023-08-30T09:05:52Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
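A hedged sketch of what such a behavioral authenticator might look like with a Random Forest is shown below; the CAN-derived features and synthetic data are invented for illustration and do not reflect the paper's feature set or evaluation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-window CAN-derived features:
# [mean speed, std speed, mean throttle, std brake pressure, mean steering angle]
def synthetic_windows(n, offset):
    return rng.normal(loc=offset, scale=1.0, size=(n, 5))

# Two drivers with slightly different driving statistics (synthetic data).
X = np.vstack([synthetic_windows(200, 0.0), synthetic_windows(200, 0.7)])
y = np.array([0] * 200 + [1] * 200)   # driver identity labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Authentication decision for a new driving window.
window = synthetic_windows(1, 0.7)
print("predicted driver:", clf.predict(window)[0],
      "confidence:", clf.predict_proba(window)[0].max())
```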
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
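The federated part of such a setup typically reduces to a weighted parameter average across clients (FedAvg-style). The sketch below shows only that aggregation step, with plain parameter dictionaries standing in for the spiking models; this is an assumption for illustration, not the paper's exact procedure.

```python
import torch

def fed_avg(client_states, client_sizes):
    """Weighted average of client state_dicts; weights proportional to local data size."""
    total = float(sum(client_sizes))
    averaged = {}
    for key in client_states[0]:
        averaged[key] = sum(
            state[key] * (n / total) for state, n in zip(client_states, client_sizes)
        )
    return averaged

# Toy example: three clients sharing the same parameter shapes.
clients = [{"w": torch.full((2, 2), float(i))} for i in range(3)]
global_state = fed_avg(clients, client_sizes=[100, 50, 50])
print(global_state["w"])   # weighted mean, biased toward the larger client
```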
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Investigating the significance of adversarial attacks and their relation to interpretability for radar-based human activity recognition systems [2.081492937901262]
We show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks.
We also expose the existence of an extreme adversarial attack case in which the prediction made by the radar-based CNNs can be changed.
arXiv Detail & Related papers (2021-01-26T05:16:16Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
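Neither TAA nor RP2 is reproduced here, but the following generic sketch illustrates the family of sticker-style attacks they belong to: a perturbation confined to a mask region and optimized with projected gradient descent toward an attacker-chosen class. The model, mask region, and step sizes are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def masked_pgd(model, image, target, mask, epsilon=1.0, alpha=0.05, steps=50):
    """Optimize a perturbation restricted to `mask` so the model predicts `target`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model((image + delta * mask).clamp(0, 1))
        loss = F.cross_entropy(logits, target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend: make the target class more likely
            delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return (image + delta * mask).clamp(0, 1).detach()

if __name__ == "__main__":
    # Placeholder linear "classifier" over 32x32 RGB crops with 43 classes.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 43))
    x = torch.rand(1, 3, 32, 32)
    mask = torch.zeros_like(x)
    mask[..., 8:24, 8:24] = 1.0                  # sticker-like region on the sign
    x_adv = masked_pgd(model, x, torch.tensor([1]), mask)
    print("masked tensor entries:", int(mask.sum().item()))
```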
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies [73.39668293190019]
Deep neural networks can be easily fooled by small perturbations on the input.
Graph Neural Networks (GNNs) have been demonstrated to inherit this vulnerability.
In this survey, we categorize existing attacks and defenses, and review the corresponding state-of-the-art methods.
arXiv Detail & Related papers (2020-03-02T04:32:38Z)