Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
- URL: http://arxiv.org/abs/2101.11060v1
- Date: Tue, 26 Jan 2021 19:59:28 GMT
- Title: Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers
- Authors: Xinwei Zhao and Matthew C. Stamm
- Abstract summary: One important attack can fool a classifier by placing black and white stickers on an object such as a road sign.
There are currently no defenses designed to protect against this attack.
In this paper, we propose new defenses that can protect against multi-sticker attacks.
- Score: 24.809185168969066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, physical domain adversarial attacks have drawn significant
attention from the machine learning community. One important attack proposed by
Eykholt et al. can fool a classifier by placing black and white stickers on an
object such as a road sign. While this attack may pose a significant threat to
visual classifiers, there are currently no defenses designed to protect against
this attack. In this paper, we propose new defenses that can protect against
multi-sticker attacks. We present defensive strategies capable of operating
when the defender has full, partial, and no prior information about the attack.
By conducting extensive experiments, we show that our proposed defenses can
outperform existing defenses against physical attacks when presented with a
multi-sticker attack.
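The abstract does not spell out the defenses themselves. As a rough illustration only, below is a minimal sketch of one plausible "no prior information" strategy: treat small, nearly saturated black or white blobs as candidate stickers and inpaint them before classification. The function names, thresholds, and the use of OpenCV inpainting are our assumptions, not the paper's method.

import cv2
import numpy as np

def mask_candidate_stickers(bgr: np.ndarray,
                            dark_thresh: int = 40,
                            bright_thresh: int = 215) -> np.ndarray:
    # Hypothetical heuristic: flag pixels that are nearly pure black or
    # nearly pure white, since the Eykholt et al. stickers are black/white.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mask = ((gray < dark_thresh) | (gray > bright_thresh)).astype(np.uint8) * 255
    # Close small gaps so each sticker becomes one connected blob.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def remove_stickers(bgr: np.ndarray) -> np.ndarray:
    # Inpaint the masked regions so the classifier sees a "clean" object.
    mask = mask_candidate_stickers(bgr)
    return cv2.inpaint(bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# Usage: cleaned = remove_stickers(cv2.imread("stop_sign.png"))
# The cleaned image, rather than the raw one, is then classified.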
Related papers
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive
Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - The Best Defense is a Good Offense: Adversarial Augmentation against
Adversarial Attacks [91.56314751983133]
$A^5$ is a framework for crafting a defensive perturbation that guarantees any attack on the input at hand will fail.
We demonstrate effective on-the-fly defensive augmentation with a robustifier network that does not require the ground-truth label.
We also show how to apply $A^5$ to create certifiably robust physical objects.
arXiv Detail & Related papers (2023-05-23T16:07:58Z) - Effectiveness of Moving Target Defenses for Adversarial Attacks in
ML-based Malware Detection [0.0]
Moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years.
We present the first study of the effectiveness of several recent MTDs against adversarial ML attacks in the malware detection domain.
We show that transferability and query attack strategies can achieve high levels of evasion against these defenses.
arXiv Detail & Related papers (2023-02-01T16:03:34Z) - Game Theoretic Mixed Experts for Combinational Adversarial Machine
Learning [10.368343314144553]
We provide a game-theoretic framework for ensemble adversarial attacks and defenses.
We propose three new attack algorithms, specifically designed to target defenses with randomized transformations, multi-model voting schemes, and adversarial detector architectures.
arXiv Detail & Related papers (2022-11-26T21:35:01Z) - Defending Against Stealthy Backdoor Attacks [1.6453255188693543]
Recent works have shown that it is not difficult to attack a natural language processing (NLP) model, while defending against such attacks remains a cat-and-mouse game.
In this work, we present several defense strategies that can help counter such attacks.
arXiv Detail & Related papers (2022-05-27T21:38:42Z) - Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z) - Fighting Gradients with Gradients: Dynamic Defenses against Adversarial
Attacks [72.59081183040682]
We propose dynamic defenses that adapt the model and input during testing via defensive entropy minimization (dent); a minimal sketch of this idea appears after this list.
dent improves the robustness of adversarially trained defenses and nominally trained models against white-box, black-box, and adaptive attacks on CIFAR-10/100 and ImageNet.
arXiv Detail & Related papers (2021-05-18T17:55:07Z) - Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z) - Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach to ending this cycle: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
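For the dynamic-defense (dent) entry above, here is a minimal sketch of test-time entropy minimization in the spirit of dent, assuming a PyTorch classifier with batch normalization; all helper names and hyperparameters are our illustrative choices, not the paper's.

import torch
import torch.nn.functional as F

def entropy_minimization_step(model: torch.nn.Module,
                              x: torch.Tensor,
                              lr: float = 1e-3) -> torch.Tensor:
    # Adapt only the batch-norm affine parameters, a common test-time choice.
    params = [p for m in model.modules()
              if isinstance(m, torch.nn.BatchNorm2d)
              for p in (m.weight, m.bias) if p is not None]
    opt = torch.optim.SGD(params, lr=lr)
    # Minimize the entropy of the model's predictions on the test batch.
    probs = F.softmax(model(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    with torch.no_grad():
        # Predictions after one adaptation step.
        return model(x).argmax(dim=1)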
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.