Synthesizing Unrestricted False Positive Adversarial Objects Using
Generative Models
- URL: http://arxiv.org/abs/2005.09294v1
- Date: Tue, 19 May 2020 08:58:58 GMT
- Title: Synthesizing Unrestricted False Positive Adversarial Objects Using
Generative Models
- Authors: Martin Kotuliak, Sandro E. Schoenborn, Andrei Dan
- Abstract summary: Adversarial examples are data points misclassified by neural networks.
Recent work introduced the concept of unrestricted adversarial examples.
We introduce a new category of attacks that create unrestricted adversarial examples for object detection.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are data points misclassified by neural networks.
Originally, adversarial examples were limited to adding small perturbations to
a given image. Recent work introduced the generalized concept of unrestricted
adversarial examples, without limits on the added perturbations. In this paper,
we introduce a new category of attacks that create unrestricted adversarial
examples for object detection. Our key idea is to generate adversarial objects
that are unrelated to the classes identified by the target object detector.
Different from previous attacks, we use off-the-shelf Generative Adversarial
Networks (GAN), without requiring any further training or modification. Our
method consists of searching over the latent normal space of the GAN for
adversarial objects that are wrongly identified by the target object detector.
We evaluate this method on the commonly used Faster R-CNN ResNet-101, Inception
v2 and SSD Mobilenet v1 object detectors using logo generative iWGAN-LC and
SNGAN trained on CIFAR-10. The empirical results show that the generated
adversarial objects are indistinguishable from non-adversarial objects
generated by the GANs, transferable between the object detectors and robust in
the physical world. This is the first work to study unrestricted false positive
adversarial examples for object detection.
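To make the search procedure concrete, here is a minimal PyTorch-style sketch of the latent-space search the abstract describes. It assumes a pretrained, frozen GAN generator and a differentiable target detector; the handles `G` and `detector`, the loss, and the clamp choice are illustrative assumptions, not the authors' exact implementation.

```python
import torch

# Minimal sketch, assuming a pretrained GAN generator G (z -> image)
# and a differentiable target detector returning per-class confidence
# scores for the generated object. Both models stay frozen: only the
# latent code z is optimized, matching the latent-space search above.

def search_adversarial_object(G, detector, target_class,
                              latent_dim=128, steps=500, lr=0.05):
    """Search the GAN's latent normal space for an object that the
    detector wrongly reports as `target_class` (a false positive)."""
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        obj = G(z)              # candidate object from the latent code
        scores = detector(obj)  # hypothetical (1, num_classes) scores
        loss = -scores[0, target_class]  # raise target-class confidence
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep z near the latent normal support so the object stays
            # indistinguishable from ordinary GAN samples.
            z.clamp_(-3.0, 3.0)
    return G(z).detach()
```

In practice one would paste the generated object into a scene before running the detector, and restart from multiple random latent codes to find objects the detector confidently misidentifies.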
Related papers
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Open-Set Object Detection Using Classification-free Object Proposal and Instance-level Contrastive Learning [25.935629339091697]
Open-set object detection (OSOD) is a promising direction that handles two subtasks: separating objects from the background, and open-set object classification.
We present Openset RCNN to address this challenging OSOD problem.
We show that our Openset RCNN can endow the robot with an open-set perception ability to support robotic rearrangement tasks in cluttered environments.
arXiv Detail & Related papers (2022-11-21T15:00:04Z)
- Nowhere to Hide: A Lightweight Unsupervised Detector against Adversarial Examples [14.332434280103667]
Adversarial examples are generated by adding slight but maliciously crafted perturbations to benign images.
In this paper, we propose an AutoEncoder-based Adversarial Example detector (AEAE); a minimal sketch of the underlying idea appears after this list.
We show empirically that the AEAE is unsupervised and inexpensive against most state-of-the-art attacks.
arXiv Detail & Related papers (2022-10-16T16:29:47Z)
- ADC: Adversarial attacks against object Detection that evade Context consistency checks [55.8459119462263]
We show that even context consistency checks can be brittle to properly crafted adversarial examples.
We propose an adaptive framework to generate examples that subvert such defenses.
Our results suggest that how to robustly model context and check its consistency is still an open problem.
arXiv Detail & Related papers (2021-10-24T00:25:09Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Transferable Adversarial Examples for Anchor Free Object Detection [44.7397139463144]
We present the first adversarial attack on anchor-free object detectors.
We leverage high-level semantic information to efficiently generate transferable adversarial examples.
Our proposed method achieves state-of-the-art performance and transferability.
arXiv Detail & Related papers (2021-06-03T06:38:15Z)
- Fast Local Attack: Generating Local Adversarial Examples for Object Detectors [38.813947369401525]
In our work, we leverage higher-level semantic information to generate highly aggressive local perturbations for anchor-free object detectors.
As a result, it is less computationally intensive and achieves higher black-box and transfer attack performance.
The adversarial examples generated by our method are not only capable of attacking anchor-free object detectors, but can also be transferred to attack anchor-based object detectors.
arXiv Detail & Related papers (2020-10-27T13:49:36Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks; a brief median-smoothing sketch appears after this list.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- Category-wise Attack: Transferable Adversarial Examples for Anchor Free Object Detection [38.813947369401525]
We present an effective and efficient algorithm to generate adversarial examples to attack anchor-free object models.
Surprisingly, the generated adversarial examples are not only able to effectively attack the targeted anchor-free object detector, but can also be transferred to attack other object detectors.
arXiv Detail & Related papers (2020-02-10T04:49:03Z)
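As noted in the "Nowhere to Hide" entry above, autoencoder-based detectors exploit the fact that an autoencoder trained only on benign images reconstructs benign inputs with low error, so unusually large reconstruction error flags a likely adversarial input. A minimal sketch of this general idea follows; the architecture, names, and threshold calibration are illustrative assumptions, not the AEAE paper's implementation.

```python
import torch
import torch.nn as nn

class TinyAutoEncoder(nn.Module):
    """Toy convolutional autoencoder, assumed trained on benign
    32x32 RGB images only (illustrative, not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_adversarial(ae, x, threshold):
    """Flag inputs whose per-image reconstruction error exceeds a
    threshold calibrated on benign data."""
    with torch.no_grad():
        err = ((ae(x) - x) ** 2).mean(dim=(1, 2, 3))  # per-image MSE
    return err > threshold
```

The threshold would typically be set to a high percentile of reconstruction errors measured on held-out benign images, which keeps the detector unsupervised with respect to attacks.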
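Similarly, for the "Detection as Regression" entry, median smoothing treats each box coordinate as a regression output and aggregates predictions over many Gaussian-noised copies of the input. A minimal sketch under that assumption; `detector_box_fn`, `sigma`, and `n` are hypothetical names and settings, not the paper's exact procedure.

```python
import torch

def median_smoothed_box(detector_box_fn, x, sigma=0.25, n=100):
    """Evaluate a box-coordinate regressor on n Gaussian-noised copies
    of the input and return the per-coordinate median, which admits
    certified bounds under l2-bounded input perturbations."""
    preds = []
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        preds.append(detector_box_fn(noisy))  # e.g. a (4,) box tensor
    return torch.stack(preds).median(dim=0).values
```

The median is preferred over the mean here because it is robust to a bounded fraction of noised predictions being arbitrarily wrong, which is what makes the certification argument go through.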
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.