RED: Robust Environmental Design
- URL: http://arxiv.org/abs/2411.17026v1
- Date: Tue, 26 Nov 2024 01:38:51 GMT
- Title: RED: Robust Environmental Design
- Authors: Jinghan Yan,
- Abstract summary: We propose an attacker-agnostic learning scheme to automatically design road signs that are robust to a wide array of patch-based attacks.
Empirical tests conducted in both digital and physical environments demonstrate that our approach significantly reduces vulnerability to patch attacks, outperforming existing techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The classification of road signs by autonomous systems, especially those reliant on visual inputs, is highly susceptible to adversarial attacks. Traditional approaches to mitigating such vulnerabilities have focused on enhancing the robustness of classification models. In contrast, this paper adopts a fundamentally different strategy aimed at increasing robustness through the redesign of road signs themselves. We propose an attacker-agnostic learning scheme to automatically design road signs that are robust to a wide array of patch-based attacks. Empirical tests conducted in both digital and physical environments demonstrate that our approach significantly reduces vulnerability to patch attacks, outperforming existing techniques.
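The abstract gives no implementation details here, so the following is only a minimal sketch of the min-max idea behind attacker-agnostic sign design: an inner loop searches for a worst-case patch on the current sign, and an outer loop updates the sign's appearance so that a fixed classifier `clf` survives that patch. The sign parameterization, patch model, and hyperparameters are all assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn.functional as F

def worst_case_patch(sign, clf, label, size=8, steps=20, lr=0.1):
    """Inner maximization: PGD-style optimization of a small patch at a
    random location so that `clf` misclassifies the (detached) sign."""
    y = torch.randint(0, sign.shape[-2] - size, (1,)).item()
    x = torch.randint(0, sign.shape[-1] - size, (1,)).item()
    patch = torch.zeros(1, 3, size, size, requires_grad=True)
    for _ in range(steps):
        attacked = sign.clone()
        attacked[..., y:y + size, x:x + size] = torch.sigmoid(patch)
        loss = F.cross_entropy(clf(attacked), label)
        grad, = torch.autograd.grad(loss, patch)
        patch = (patch + lr * grad.sign()).detach().requires_grad_(True)
    return torch.sigmoid(patch).detach(), (y, x)

def design_step(sign_params, clf, label, opt, size=8):
    """Outer minimization: update the sign design so the classifier stays
    correct under the worst-case patch found by the inner loop."""
    sign = torch.sigmoid(sign_params)              # keep pixels in [0, 1]
    patch, (y, x) = worst_case_patch(sign.detach(), clf, label, size)
    attacked = sign.clone()
    attacked[..., y:y + size, x:x + size] = patch  # paste the fixed patch
    loss = F.cross_entropy(clf(attacked), label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

A faithful version of the paper's scheme would also randomize patch size, shape, placement, and viewing conditions inside the inner loop to cover the "wide array of patch-based attacks" the abstract claims.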
Related papers
- Benchmarking Adversarial Robustness and Adversarial Training Strategies for Object Detection [24.70528833663651]
Object detection models are critical components of automated systems, such as autonomous vehicles and perception-based robots.
Progress in defending these models lags behind classification, hindered by a lack of standardized evaluation.
It is nearly impossible to thoroughly compare attack or defense methods, as existing work uses different datasets, inconsistent efficiency metrics, and varied measures of perturbation cost.
arXiv Detail & Related papers (2026-02-18T14:33:58Z)
- The Outline of Deception: Physical Adversarial Attacks on Traffic Signs Using Edge Patches [6.836569632189732]
This study proposes TESP-Attack, a novel stealth-aware adversarial patch method for traffic sign classification.
Based on the observation that human visual attention primarily focuses on the central regions of traffic signs, we employ instance segmentation to generate edge-aligned masks.
A U-Net generator is utilized to craft adversarial patches, which are then optimized through color and texture constraints.
arXiv Detail & Related papers (2025-11-30T07:26:07Z)
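The edge-aligned masking step described in the entry above is easy to illustrate: given a binary instance-segmentation mask of the sign, keep only a thin band just inside its boundary, leaving the attention-grabbing center untouched. A minimal NumPy/SciPy sketch (the mask source and band width are assumptions, not the authors' code):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def edge_band_mask(sign_mask: np.ndarray, band: int = 6) -> np.ndarray:
    """Given a boolean sign mask, return a mask covering only a
    `band`-pixel-wide ring just inside the boundary; a patch confined to
    this ring avoids the central region human attention focuses on."""
    interior = binary_erosion(sign_mask, iterations=band)
    return sign_mask & ~interior

# Toy usage: a filled circle standing in for an instance-segmentation mask.
yy, xx = np.mgrid[:64, :64]
circle = ((yy - 32) ** 2 + (xx - 32) ** 2) < 28 ** 2
ring = edge_band_mask(circle)
print(ring.sum(), "of", circle.sum(), "pixels lie in the edge band")
```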
- T2I-Based Physical-World Appearance Attack against Traffic Sign Recognition Systems in Autonomous Driving [40.067678927952336]
Traffic Sign Recognition (TSR) systems play a critical role in Autonomous Driving (AD) systems.
Recent research has exposed their vulnerability to physical-world adversarial appearance attacks.
We present DiffSign, a novel T2I-based appearance attack framework.
arXiv Detail & Related papers (2025-11-17T04:29:55Z)
- LoRA as a Flexible Framework for Securing Large Vision Systems [1.9035583634286277]
Adversarial attacks have emerged as a critical threat to autonomous driving systems.
We take insights from parameter-efficient fine-tuning and use low-rank adaptation (LoRA) to train a lightweight security patch.
We demonstrate that our framework can patch a pre-trained model to improve classification accuracy by up to 78.01% in the presence of adversarial examples.
arXiv Detail & Related papers (2025-05-31T18:16:21Z)
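The "security patch" idea above boils down to freezing a pre-trained backbone and training only small low-rank adapters on adversarial examples. A minimal PyTorch sketch of a LoRA-wrapped linear layer (the rank, scaling, and the choice of patching only the head are assumptions, not the paper's recipe):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank residual:
    y = W x + (alpha / r) * B(A x), where A and B are small."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.A.weight, std=0.01)
        nn.init.zeros_(self.B.weight)            # start as an identity patch
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

# Usage: patch the classification head, then fine-tune only A and B on
# (adversarial example, correct label) pairs.
head = LoRALinear(nn.Linear(512, 43))            # e.g. 43 GTSRB classes
opt = torch.optim.Adam(
    [p for p in head.parameters() if p.requires_grad], lr=1e-3)
```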
- Unleashing the Power of Pre-trained Encoders for Universal Adversarial Attack Detection [21.03032944637112]
Adversarial attacks pose a critical security threat to real-world AI systems.
This paper proposes a lightweight adversarial detection framework based on the large-scale pre-trained vision-language model CLIP.
arXiv Detail & Related papers (2025-04-01T05:21:45Z)
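One way to read the CLIP-based framework above: keep the pre-trained image encoder frozen and train only a lightweight head to separate clean from adversarial inputs. A hedged sketch using OpenAI's clip package (the linear head, the labels, and the training setup are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
encoder, preprocess = clip.load("ViT-B/32", device=device)
for p in encoder.parameters():
    p.requires_grad_(False)                  # keep CLIP frozen

detector = nn.Linear(512, 2).to(device)     # clean vs. adversarial

def detect_logits(images):
    """Embed preprocessed image batches with the frozen CLIP encoder,
    then score them with the small trainable head."""
    with torch.no_grad():
        feats = encoder.encode_image(images).float()
    return detector(feats)

# Assumed training loop: cross-entropy over batches of clean images
# labeled 0 and adversarially perturbed images labeled 1.
```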
- GAN-Based Single-Stage Defense for Traffic Sign Classification Under Adversarial Patch Attack [3.3296812191509786]
Perception modules are vulnerable to adversarial attacks, which can compromise their accuracy and reliability.
One such attack is the adversarial patch attack (APA), a physical attack in which an adversary strategically places a specially crafted sticker on an object to deceive object classifiers.
This study develops a Generative Adversarial Network (GAN)-based single-stage defense strategy for traffic sign classification.
arXiv Detail & Related papers (2025-03-16T16:47:44Z)
- Adversarial Training for Defense Against Label Poisoning Attacks [53.893792844055106]
Label poisoning attacks pose significant risks to machine learning models.
We propose a novel adversarial training defense strategy based on support vector machines (SVMs) to counter these threats.
Our approach accommodates various model architectures and employs a projected gradient descent algorithm with kernel SVMs for adversarial training.
arXiv Detail & Related papers (2025-02-24T13:03:19Z)
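Setting the kernel machinery aside, the training loop this entry describes pairs a projected-gradient-descent attack with hinge-loss updates. A minimal linear-SVM sketch in PyTorch (labels in {-1, +1}; the kernel SVM and all hyperparameters are simplified away, so treat this as an illustration rather than the paper's algorithm):

```python
import torch

def pgd_attack(w, b, x, y, eps=0.3, alpha=0.05, steps=10):
    """Projected gradient descent on the hinge loss: push each point
    toward the margin within an L-infinity ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        margin = y * ((x + delta) @ w + b)
        loss = torch.clamp(1 - margin, min=0).sum()
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def adversarial_train(x, y, dim, epochs=100, lr=0.1, C=1.0):
    """Hinge-loss SVM fitted on worst-case perturbed points."""
    w = torch.zeros(dim, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    for _ in range(epochs):
        x_adv = pgd_attack(w.detach(), b.detach(), x, y)
        margin = y * (x_adv @ w + b)
        loss = 0.5 * (w @ w) + C * torch.clamp(1 - margin, min=0).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach(), b.detach()
```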
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z)
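The inpainting defense above can be approximated in two steps: localize the suspected patch region, then reconstruct it from its surroundings before classification. A sketch using OpenCV (the toy color-deviation localizer is an assumption; the paper's detection is more involved):

```python
import cv2
import numpy as np

def inpaint_defense(img_bgr: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Reconstruct the masked (suspected-patch) region from its
    surroundings before handing the image to the classifier."""
    return cv2.inpaint(img_bgr, mask, 3, cv2.INPAINT_TELEA)

def naive_patch_mask(img_bgr: np.ndarray, thresh: float = 90.0) -> np.ndarray:
    """Toy localizer (an assumption, not the paper's method): flag pixels
    whose color deviates strongly from the image median, then dilate so
    the mask covers the whole sticker."""
    median = np.median(img_bgr.reshape(-1, 3), axis=0)
    dist = np.linalg.norm(img_bgr.astype(np.float32) - median, axis=2)
    mask = (dist > thresh).astype(np.uint8) * 255
    return cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
```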
- Adversarial Robustness Through Artifact Design [2.705610268122756]
We propose a novel approach for improving adversarial robustness.
Specifically, we offer a method to redefine standards, making minor changes to existing ones, to defend against adversarial examples.
We evaluate our approach in the domain of traffic-sign recognition, allowing it to alter traffic-sign pictograms and their colors.
arXiv Detail & Related papers (2024-02-07T08:49:33Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
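The self-similarity idea above is straightforward to make concrete: embed a class's support samples, score each one by its mean cosine similarity to the others, and filter out low scorers as suspected adversarial supports. A NumPy sketch (the embedding model and threshold are assumptions):

```python
import numpy as np

def self_similarity_scores(feats: np.ndarray) -> np.ndarray:
    """feats: (n, d) embeddings of one class's support samples.
    Returns each sample's mean cosine similarity to the others."""
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = z @ z.T
    np.fill_diagonal(sim, 0.0)
    return sim.sum(axis=1) / (len(feats) - 1)

def filter_supports(feats: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Keep only supports whose self-similarity clears the threshold;
    adversarial supports tend to sit far from their class's cluster."""
    return self_similarity_scores(feats) >= tau
```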
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific class of attacks, called adversarial attacks.
An attacker can, even with bare-minimum computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
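The Denoising Autoencoder defense named above preprocesses inputs through a reconstruction model trained to strip perturbations before classification. A compact PyTorch sketch for 32x32 RGB inputs (the architecture and noise level are assumptions):

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Trained to map noisy images back to clean ones; at test time it
    runs in front of the classifier to scrub adversarial perturbations."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def train_step(ae, opt, clean, sigma=0.1):
    """One denoising step: corrupt with Gaussian noise, reconstruct clean."""
    noisy = (clean + sigma * torch.randn_like(clean)).clamp(0, 1)
    loss = nn.functional.mse_loss(ae(noisy), clean)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```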
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real world road sign attack.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Adversarial Attacks against Face Recognition: A Comprehensive Study [3.766020696203255]
Face recognition (FR) systems have demonstrated outstanding verification performance.
Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images.
arXiv Detail & Related papers (2020-07-22T22:46:00Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness, which builds upon insights from the domain adaptation field.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs.
arXiv Detail & Related papers (2020-06-08T14:20:02Z)
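AFD's borrowing from domain adaptation can be sketched with a gradient reversal layer: a discriminator learns to tell clean features from attacked ones, while the reversed gradient pushes the feature extractor to make them indistinguishable. A minimal PyTorch sketch (module shapes and the source of `x_adv` are assumptions, not the paper's exact setup):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, so the feature extractor learns to fool the
    clean-vs-adversarial discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def afd_losses(features, clf_head, disc_head, x_clean, x_adv, y, lam=1.0):
    """Classification loss on clean data plus an adversarial 'domain'
    loss that desensitizes features to clean-vs-attacked differences."""
    f_clean, f_adv = features(x_clean), features(x_adv)
    task_loss = nn.functional.cross_entropy(clf_head(f_clean), y)
    feats = torch.cat([f_clean, f_adv])
    domain = torch.cat(
        [torch.zeros(len(x_clean)), torch.ones(len(x_adv))]).long()
    domain_loss = nn.functional.cross_entropy(
        disc_head(GradReverse.apply(feats, lam)), domain)
    return task_loss + domain_loss
```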
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.