Object Hider: Adversarial Patch Attack Against Object Detectors
- URL: http://arxiv.org/abs/2010.14974v1
- Date: Wed, 28 Oct 2020 13:34:16 GMT
- Title: Object Hider: Adversarial Patch Attack Against Object Detectors
- Authors: Yusheng Zhao, Huanqian Yan, Xingxing Wei
- Abstract summary: In this paper, we focus on adversarial attacks against several state-of-the-art object detection models.
As a practical alternative to imperceptible perturbations, we use adversarial patches for the attack.
Experimental results show that the proposed methods are highly effective, transferable, and generic.
- Score: 10.920696684006488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have been widely used in many computer vision tasks.
However, they have been shown to be susceptible to small, imperceptible
perturbations added to the input. Inputs with elaborately designed perturbations
that can fool deep learning models are called adversarial examples, and they have
raised serious concerns about the safety of deep neural networks. Object detection
algorithms, which locate and classify objects in images or videos, are at the core
of many computer vision tasks and have great research value and wide applications.
In this paper, we focus on adversarial attacks against several state-of-the-art
object detection models.
As a practical alternative, we use adversarial patches for the attack. Two
adversarial patch generation algorithms have been proposed: the heatmap-based
algorithm and the consensus-based algorithm. The experiment results have shown
that the proposed methods are highly effective, transferable and generic.
Additionally, we applied the proposed methods in the "Adversarial Challenge on
Object Detection" competition organized by Alibaba on the Tianchi platform and
placed in the top 7 out of 1,701 teams. Code is available at:
https://github.com/FenHua/DetDak
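
To make the patch-attack setting concrete, below is a minimal, hedged sketch of the generic inner loop for optimizing a patch so that a detector's confidences are suppressed. It is not the paper's heatmap-based or consensus-based algorithm (those mainly decide where patches are placed); the detector_scores callable, the fixed mask, and the hyperparameters are illustrative assumptions.

import torch

def optimize_patch(image, mask, detector_scores, steps=200, lr=0.03):
    """Generic adversarial-patch inner loop (illustrative, not the paper's exact method).

    image:            (3, H, W) float tensor in [0, 1]
    mask:             (1, H, W) binary tensor, 1 where the patch may be painted
    detector_scores:  callable mapping an image tensor to a 1-D tensor of
                      per-detection confidence scores (assumed differentiable)
    """
    patch = torch.rand_like(image, requires_grad=True)   # random patch init
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Paste the patch into the image only where the mask allows it.
        adv = image * (1 - mask) + patch.clamp(0, 1) * mask
        # "Hide" objects by pushing all detection confidences toward zero.
        loss = detector_scores(adv).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image * (1 - mask) + patch.detach().clamp(0, 1) * mask).clamp(0, 1)
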
Related papers
- Accelerating Object Detection with YOLOv4 for Real-Time Applications [0.276240219662896]
Convolutional Neural Networks (CNNs) have emerged as a powerful tool for recognizing image content and are the prevailing computer vision approach for most problems.
This paper gives a brief introduction to deep learning and to CNN-based object detection frameworks.
arXiv Detail & Related papers (2024-10-17T17:44:57Z) - A Large-scale Multiple-objective Method for Black-box Attack against
Object Detection [70.00150794625053]
We propose to minimize the true positive rate and maximize the false positive rate, which can encourage more false positive objects to block the generation of new true positive bounding boxes.
We extend the standard Genetic Algorithm with Random Subset selection and Divide-and-Conquer, called GARSDC, which significantly improves the efficiency.
Compared with state-of-the-art attack methods, GARSDC lowers mAP by an average of 12.0 and reduces the number of queries by about 1000 in extensive experiments. (A generic genetic-algorithm attack skeleton in this spirit is sketched after the list below.)
arXiv Detail & Related papers (2022-09-16T08:36:42Z) - The Weaknesses of Adversarial Camouflage in Overhead Imagery [7.724233098666892]
We build a library of 24 adversarial patches to disguise four different object classes: bus, car, truck, van.
We show that while adversarial patches may fool object detectors, the presence of such patches is often easily uncovered.
This raises the question of whether such patches truly constitute camouflage.
arXiv Detail & Related papers (2022-07-06T20:39:21Z) - Developing Imperceptible Adversarial Patches to Camouflage Military
Assets From Computer Vision Enabled Technologies [0.0]
Convolutional neural networks (CNNs) have demonstrated rapid progress and a high level of success in object detection.
Recent evidence has highlighted their vulnerability to adversarial attacks.
We present a unique method that produces imperceptible patches capable of camouflaging large military assets.
arXiv Detail & Related papers (2022-02-17T20:31:51Z) - Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z) - IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for
Visual Object Tracking [70.14487738649373]
Adversarial attacks arise due to the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers. (A minimal bounding-box IoU helper is sketched after the list below.)
arXiv Detail & Related papers (2021-03-27T16:20:32Z) - Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with diffused patches of asteroid or grid shape.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
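
For the GARSDC entry above, the following is a minimal, generic genetic-algorithm skeleton for a black-box attack that treats the detector as a score oracle. It deliberately omits GARSDC's random subset selection and divide-and-conquer components; the detection_score callable, population size, budget eps, and mutation scale are illustrative assumptions.

import numpy as np

def genetic_blackbox_attack(image, detection_score, pop_size=20,
                            generations=100, eps=8.0, mutation_std=2.0):
    """Generic GA skeleton for a black-box attack (illustrative only).

    image:           H x W x 3 float array in [0, 255]
    detection_score: callable returning a scalar detection score for an
                     image; lower means the attack is more successful
    eps:             per-pixel L_inf budget for the perturbation
    """
    rng = np.random.default_rng(0)
    # Initialize a population of random perturbations within the budget.
    pop = rng.uniform(-eps, eps, size=(pop_size,) + image.shape)
    for _ in range(generations):
        # Fitness: detection score on the clamped adversarial image (lower is better).
        fitness = np.array([detection_score(np.clip(image + p, 0, 255)) for p in pop])
        order = np.argsort(fitness)              # ascending: best first
        elite = pop[order[: pop_size // 2]]      # keep the better half
        # Crossover: average random pairs of elite perturbations.
        parents = rng.integers(0, len(elite), size=(pop_size - len(elite), 2))
        children = (elite[parents[:, 0]] + elite[parents[:, 1]]) / 2.0
        # Mutation: small Gaussian noise, projected back into the budget.
        children += rng.normal(0.0, mutation_std, size=children.shape)
        pop = np.clip(np.concatenate([elite, children]), -eps, eps)
    best = pop[np.argmin([detection_score(np.clip(image + p, 0, 255)) for p in pop])]
    return np.clip(image + best, 0, 255)
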
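The IoU Attack entry above relies on bounding-box overlap as its guiding signal. As a small grounding example (not the attack itself), this is the standard intersection-over-union computation for two axis-aligned boxes in (x1, y1, x2, y2) format; the example boxes are arbitrary.

def box_iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Roughly, the IoU attack searches for noise that lowers IoU scores of the
# tracker's predicted boxes across frames.
print(box_iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14
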
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.