Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against
Shadow-Based Adversarial Attacks
- URL: http://arxiv.org/abs/2208.09285v1
- Date: Thu, 18 Aug 2022 00:19:01 GMT
- Title: Shadows Aren't So Dangerous After All: A Fast and Robust Defense Against
Shadow-Based Adversarial Attacks
- Authors: Andrew Wang, Wyatt Mayor, Ryan Smith, Gopal Nookula, Gregory Ditzler
- Abstract summary: We propose a robust, fast, and generalizable method to defend against shadow attacks in the context of road sign recognition.
We empirically show its robustness against shadow attacks, and reformulate the problem to show its similarity to $\varepsilon$-based attacks.
- Score: 2.4254101826561842
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robust classification is essential in tasks like autonomous vehicle sign
recognition, where the downsides of misclassification can be grave. Adversarial
attacks threaten the robustness of neural network classifiers, causing them to
consistently and confidently misidentify road signs. One such class of attack,
shadow-based attacks, causes misidentifications by applying a natural-looking
shadow to input images, resulting in road signs that appear natural to a human
observer but confusing for these classifiers. Current defenses against such
attacks use a simple adversarial training procedure to achieve a rather low
25% and 40% robustness on the GTSRB and LISA test sets, respectively. In this
paper, we propose a robust, fast, and generalizable method, designed to defend
against shadow attacks in the context of road sign recognition, that augments
source images with binary adaptive threshold and edge maps. We empirically show
its robustness against shadow attacks, and reformulate the problem to show its
similarity to $\varepsilon$ perturbation-based attacks. Experimental results show
that our edge defense results in 78% robustness while maintaining 98% benign
test accuracy on the GTSRB test set, with similar results from our threshold
defense. A link to our code is in the paper.
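The defense above augments each source image with a binary adaptive threshold map and an edge map. A minimal sketch of that augmentation step, assuming OpenCV-style adaptive thresholding and Canny edges; the block size, Canny thresholds, and 5-channel layout are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch: append a binary adaptive-threshold channel and an edge channel to an
# RGB road-sign image before feeding it to the classifier (parameters assumed).
import cv2
import numpy as np

def augment_with_threshold_and_edges(img_bgr: np.ndarray) -> np.ndarray:
    """Return an H x W x 5 array: original BGR plus threshold and edge channels."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

    # Local (adaptive) thresholding is less sensitive to a shadow's smooth
    # illumination shift than a single global threshold would be.
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 11, 2)

    # Edge map: sign borders and glyph contours tend to survive shading changes.
    edges = cv2.Canny(gray, 100, 200)

    extra = np.stack([thresh, edges], axis=-1)
    return np.concatenate([img_bgr, extra], axis=-1)

# Example: a 32x32 GTSRB-sized crop becomes a 5-channel classifier input.
dummy = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
print(augment_with_threshold_and_edges(dummy).shape)  # (32, 32, 5)
```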
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks [21.914674640285337]
This paper focuses on analyzing factors associated with attack success rates (ASR).
We introduce a new attack objective - entity swapping using adversarial suffixes and two gradient-based attack algorithms.
We identify conditions that result in a success probability of 60% for adversarial attacks and others where this likelihood drops below 5%.
arXiv Detail & Related papers (2023-12-22T05:10:32Z)
- PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use the newly introduced Adversarial Identification Dataset (AID) to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z)
- Explainable and Trustworthy Traffic Sign Detection for Safe Autonomous Driving: An Inductive Logic Programming Approach [0.0]
We propose an ILP-based approach for stop sign detection in Autonomous Vehicles.
It is more robust against adversarial attacks, as it mimics human-like perception.
It is able to correctly identify all targeted stop signs, even in the presence of PR2 and ADvCam attacks.
arXiv Detail & Related papers (2023-08-30T09:05:52Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- On Trace of PGD-Like Adversarial Attacks [77.75152218980605]
Adversarial attacks pose safety and security concerns for deep learning applications.
We construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency.
Our method is intuitive, lightweight, non-intrusive, and data-undemanding.
arXiv Detail & Related papers (2022-05-19T14:26:50Z)
- A Hybrid Defense Method against Adversarial Attacks on Traffic Sign Classifiers in Autonomous Vehicles [4.585587646404074]
Adversarial attacks can make deep neural network (DNN) models predict incorrect output labels for autonomous vehicles (AVs).
This study develops a resilient traffic sign classifier for AVs that uses a hybrid defense method.
We find that our hybrid defense method achieves 99% average traffic sign classification accuracy for the no attack scenario and 88% average traffic sign classification accuracy for all attack scenarios.
arXiv Detail & Related papers (2022-04-25T02:13:31Z)
- Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks [11.429509031463892]
Patch-based adversarial attacks introduce a perceptible but localized change to the input that induces misclassification.
In this work, we study two different approaches for defending against black-box patch attacks.
We find that adversarial training has limited effectiveness against state-of-the-art location-optimized patch attacks.
arXiv Detail & Related papers (2020-12-01T15:04:23Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Encryption Inspired Adversarial Defense for Visual Classification [17.551718914117917]
We propose a new adversarial defense inspired by image encryption methods.
The proposed method utilizes a block-wise pixel shuffling with a secret key.
It achieves high accuracy (91.55% on clean images and 89.66% on adversarial examples with a noise distance of 8/255 on the CIFAR-10 dataset).
arXiv Detail & Related papers (2020-05-16T14:18:07Z)
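The block-wise pixel shuffling with a secret key described in the entry above can be illustrated with a short sketch; the block size, per-tile reuse of one permutation, and seeding scheme are assumptions for illustration, not the paper's exact transform:

```python
# Sketch: shuffle pixels inside each (block x block) tile using a permutation
# seeded by a secret key (details assumed, not taken from the paper).
import numpy as np

def blockwise_shuffle(img: np.ndarray, key: int, block: int = 4) -> np.ndarray:
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    rng = np.random.default_rng(key)        # the secret key seeds the permutation
    perm = rng.permutation(block * block)   # one permutation reused for every tile
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block].reshape(-1, c)
            out[y:y + block, x:x + block] = tile[perm].reshape(block, block, c)
    return out

# A model trained on key-shuffled images sees the same transform at test time,
# while an attacker without the key optimizes perturbations against the wrong view.
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
protected = blockwise_shuffle(img, key=1234)
```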
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.