Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous
Driving: An Inductive Logic Programming Approach
- URL: http://arxiv.org/abs/2309.03215v1
- Date: Wed, 30 Aug 2023 09:05:52 GMT
- Title: Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous
Driving: An Inductive Logic Programming Approach
- Authors: Zahra Chaghazardi (University of Surrey), Saber Fallah (University of
Surrey), Alireza Tamaddoni-Nezhad (University of Surrey)
- Abstract summary: We propose an ILP-based approach for stop sign detection in Autonomous Vehicles.
It is more robust against adversarial attacks, as it mimics human-like perception.
It is able to correctly identify all targeted stop signs, even in the presence of RP2 and AdvCam attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic sign detection is a critical task in the operation of Autonomous
Vehicles (AV), as it ensures the safety of all road users. Current DNN-based
sign classification systems rely on pixel-level features to detect traffic
signs and can be susceptible to adversarial attacks. These attacks involve
small, imperceptible changes to a sign that can cause traditional classifiers
to misidentify the sign. We propose an Inductive Logic Programming (ILP) based
approach for stop sign detection in AVs to address this issue. This method
utilises high-level features of a sign, such as its shape, colour, and text, to
detect categories of traffic signs. This approach is more robust against
adversarial attacks, as it mimics human-like perception and is less susceptible
to the limitations of current DNN classifiers. We consider two adversarial
attack methods to evaluate our approach: Robust Physical Perturbation (RP2)
and Adversarial Camouflage (AdvCam). These attacks are able to deceive DNN
classifiers, causing them to misidentify stop signs as other signs with high
confidence. The results show that the proposed ILP-based technique is able to
correctly identify all targeted stop signs, even in the presence of RP2 and
AdvCam attacks. The proposed learning method is also efficient as it requires
minimal training data. Moreover, it is fully explainable, making it possible to
debug AVs.
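To make the high-level-feature idea concrete, below is a minimal, hypothetical Python sketch of a rule that classifies a sign from symbolic facts (shape, colour, text), in the spirit of a clause an ILP system might learn. The feature extractors, values, and the rule itself are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a hand-written rule over high-level symbolic facts,
# analogous in spirit to an ILP-learned clause such as
#   stop_sign(X) :- shape(X, octagon), colour(X, red), text(X, stop).
# The facts and rule are hypothetical and not taken from the paper.

from dataclasses import dataclass


@dataclass
class SignFacts:
    shape: str    # e.g. "octagon", "circle", "triangle"
    colour: str   # dominant colour, e.g. "red", "blue"
    text: str     # text recognised on the sign, e.g. "STOP"


def is_stop_sign(facts: SignFacts) -> bool:
    """Classify a sign as a stop sign from high-level features only."""
    return (
        facts.shape == "octagon"
        and facts.colour == "red"
        and facts.text.upper() == "STOP"
    )


if __name__ == "__main__":
    # Pixel-level perturbations (e.g. small stickers) that leave the shape,
    # dominant colour, and legible text intact do not change these facts.
    print(is_stop_sign(SignFacts(shape="octagon", colour="red", text="STOP")))  # True
    print(is_stop_sign(SignFacts(shape="circle", colour="blue", text="30")))    # False
```

The point of such a rule-based formulation is that an adversarial sticker must alter the sign's shape, colour, or text to change the symbolic facts, which is far harder than perturbing pixel-level features.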
Related papers
- A Hybrid Quantum-Classical AI-Based Detection Strategy for Generative Adversarial Network-Based Deepfake Attacks on an Autonomous Vehicle Traffic Sign Classification System [2.962613983209398]
The authors show how a generative adversarial network-based deepfake attack can be crafted to fool AV traffic sign classification systems.
They develop a deepfake traffic sign image detection strategy leveraging hybrid quantum-classical neural networks (NNs).
The results indicate that the hybrid quantum-classical NNs for deepfake detection could achieve similar or higher performance than the baseline classical convolutional NNs in most cases.
arXiv Detail & Related papers (2024-09-25T19:44:56Z) - AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
arXiv Detail & Related papers (2024-08-04T09:53:50Z) - TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z) - Adversarial Attacks on Traffic Sign Recognition: A Survey [2.658812114255374]
Traffic signs are promising for adversarial attack research due to the ease of performing real-world attacks using printed signs or stickers.
We provide an overview of the latest advancements and highlight the existing research areas that require further investigation.
arXiv Detail & Related papers (2023-07-17T06:58:22Z) - Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z) - Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against
Shadow-Based Adversarial Attacks [2.4254101826561842]
We propose a robust, fast, and generalizable method to defend against shadow attacks in the context of road sign recognition.
We empirically show its robustness against shadow attacks, and reformulate the problem to show its similarity to epsilon-based attacks.
arXiv Detail & Related papers (2022-08-18T00:19:01Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, light-weighted, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z) - A Hybrid Defense Method against Adversarial Attacks on Traffic Sign
Classifiers in Autonomous Vehicles [4.585587646404074]
Adversarial attacks can make deep neural network (DNN) models predict incorrect output labels for autonomous vehicles (AVs).
This study develops a resilient traffic sign classifier for AVs that uses a hybrid defense method.
We find that our hybrid defense method achieves 99% average traffic sign classification accuracy for the no attack scenario and 88% average traffic sign classification accuracy for all attack scenarios.
arXiv Detail & Related papers (2022-04-25T02:13:31Z) - Targeted Physical-World Attention Attack on Deep Learning Models in Road
Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real world road sign attack.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)