CCA: Exploring the Possibility of Contextual Camouflage Attack on Object
Detection
- URL: http://arxiv.org/abs/2008.08281v1
- Date: Wed, 19 Aug 2020 06:16:10 GMT
- Title: CCA: Exploring the Possibility of Contextual Camouflage Attack on Object
Detection
- Authors: Shengnan Hu, Yang Zhang, Sumit Laha, Ankit Sharma, Hassan Foroosh
- Abstract summary: We propose a contextual camouflage attack (CCA) algorithm to influence the performance of object detectors.
In this paper, we use an evolutionary search strategy and adversarial machine learning in interactions with a photo-realistic simulated environment.
The proposed camouflages are validated as effective against most state-of-the-art object detectors.
- Score: 16.384831731988204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network based object detection has become the cornerstone of many
real-world applications. Along with this success come concerns about its
vulnerability to malicious attacks. To gain more insight into this issue, we
propose a contextual camouflage attack (CCA for short) algorithm to influence
the performance of object detectors. In this paper, we use an evolutionary
search strategy and adversarial machine learning in interactions with a
photo-realistic simulated environment to find camouflage patterns that are
effective over a huge variety of object locations, camera poses, and lighting
conditions. The proposed camouflages are validated as effective against most of the
state-of-the-art object detectors.
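The abstract pairs an evolutionary search with a photo-realistic simulator. As a rough illustration of that optimization loop, here is a minimal (1+λ) evolution-strategy sketch; `render_and_detect` is a hypothetical stand-in for the paper's simulator-plus-detector fitness evaluation, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_and_detect(pattern: np.ndarray) -> float:
    """Hypothetical stand-in: render the camouflage pattern onto the object
    under random poses/lighting and return the mean detection confidence
    (lower is better for the attacker)."""
    return float(np.mean(pattern ** 2))  # placeholder objective

def evolve_camouflage(shape=(16, 16, 3), generations=100, pop=8, sigma=0.1):
    # Start from a random texture in [0, 1].
    best = rng.random(shape)
    best_score = render_and_detect(best)
    for _ in range(generations):
        # Mutate the current best pattern to get candidate offspring.
        offspring = [np.clip(best + sigma * rng.standard_normal(shape), 0, 1)
                     for _ in range(pop)]
        scores = [render_and_detect(o) for o in offspring]
        i = int(np.argmin(scores))  # attacker minimizes detection score
        if scores[i] < best_score:
            best, best_score = offspring[i], scores[i]
    return best, best_score

pattern, score = evolve_camouflage()
print(f"final detection score: {score:.4f}")
```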
Related papers
- Out-of-Bounding-Box Triggers: A Stealthy Approach to Cheat Object Detectors [18.23151352064318]
This paper introduces an inconspicuous adversarial trigger that operates outside the bounding boxes, rendering the object undetectable to the model.
We further enhance this approach by proposing the Feature Guidance (FG) technique and the Universal Auto-PGD (UAPGD) optimization strategy for crafting high-quality triggers.
The effectiveness of our method is validated through extensive empirical testing, demonstrating its high performance in both digital and physical environments.
arXiv Detail & Related papers (2024-10-14T02:15:48Z)
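The summary credits the Feature Guidance (FG) technique and Universal Auto-PGD (UAPGD) for crafting the triggers; neither is spelled out, so the sketch below shows only a generic PGD inner loop on a trigger region placed outside the target box, with a hypothetical `detector_confidence` stand-in for the real detection loss.

```python
import torch

def detector_confidence(image: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the target object's detection score;
    a real attack would run the detector and read the box confidence."""
    return image.mean()

def pgd_trigger(image, trigger_mask, steps=40, alpha=0.01, eps=0.1):
    # trigger_mask is 1 only on a region *outside* the object's bounding box.
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = detector_confidence(image + delta * trigger_mask)
        loss.backward()
        with torch.no_grad():
            # Gradient *descent* on confidence: suppress the detection.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)  # keep the trigger inconspicuous
        delta.grad.zero_()
    return (image + delta * trigger_mask).detach()

img = torch.rand(3, 64, 64)
mask = torch.zeros_like(img)
mask[:, :, :16] = 1.0  # trigger strip left of the (assumed) object box
adv = pgd_trigger(img, mask)
```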
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) methods attempt to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images and camouflaged objects, zooming in and out.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z)
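ZoomNeXt's collaborative pyramid is not reproduced here, but the zoom-in/zoom-out idea can be illustrated with a plain multi-scale averaging sketch; the `model` below is a hypothetical single-scale segmenter standing in for the real network.

```python
import torch
import torch.nn.functional as F

# Hypothetical single-scale camouflaged-object segmenter.
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

def zoom_predict(image: torch.Tensor, scales=(0.5, 1.0, 1.5)) -> torch.Tensor:
    """Run the model at several zoom levels and fuse the resized maps,
    a crude analogue of observing an image zoomed in and out."""
    h, w = image.shape[-2:]
    maps = []
    for s in scales:
        zoomed = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        pred = torch.sigmoid(model(zoomed))
        maps.append(F.interpolate(pred, size=(h, w), mode="bilinear",
                                  align_corners=False))
    return torch.stack(maps).mean(dim=0)

mask = zoom_predict(torch.rand(1, 3, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```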
- Camouflaged Image Synthesis Is All You Need to Boost Camouflaged Detection [65.8867003376637]
We propose a framework for synthesizing camouflage data to enhance the detection of camouflaged objects in natural scenes.
Our approach employs a generative model to produce realistic camouflage images, which can be used to train existing object detection models.
Our framework outperforms the current state-of-the-art method on three datasets.
arXiv Detail & Related papers (2023-08-13T06:55:05Z)
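The synthesis framework itself (the generative model) is not described in the summary; the sketch below only illustrates the downstream step of mixing generated camouflage images into a detector's training batches. `synthesize_batch` and the toy model are hypothetical placeholders.

```python
import torch

def synthesize_batch(n: int):
    """Hypothetical placeholder for the generative camouflage model:
    returns images with paired segmentation masks."""
    return torch.rand(n, 3, 64, 64), (torch.rand(n, 1, 64, 64) > 0.5).float()

model = torch.nn.Conv2d(3, 1, 3, padding=1)   # toy stand-in detector head
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(100):
    real_x = torch.rand(8, 3, 64, 64)                      # stand-in real data
    real_y = (torch.rand(8, 1, 64, 64) > 0.5).float()
    synth_x, synth_y = synthesize_batch(8)
    # Mix real and synthetic camouflage samples in each batch.
    x = torch.cat([real_x, synth_x])
    y = torch.cat([real_y, synth_y])
    loss = bce(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```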
- Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors [0.190365714903665]
We propose a dual adversarial camouflage (DE_DAC) method, composed of two stages, to fool human eyes and object detectors simultaneously.
In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images.
In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective.
arXiv Detail & Related papers (2022-10-17T09:07:52Z)
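The two-stage losses are not given in the summary, so the sketch below shows only the underlying DE/rand/1/bin optimizer one could apply to a flattened texture; `texture_loss` is a hypothetical stand-in combining the scene-discrepancy and detector terms.

```python
import numpy as np

rng = np.random.default_rng(0)

def texture_loss(tex: np.ndarray) -> float:
    """Hypothetical combined objective (scene discrepancy + detector score)."""
    return float(np.sum((tex - 0.5) ** 2))  # placeholder

def differential_evolution(dim=48, pop=20, gens=200, F=0.5, CR=0.9):
    X = rng.random((pop, dim))                 # population of textures
    fit = np.array([texture_loss(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), 0, 1)    # DE/rand/1 mutation
            cross = rng.random(dim) < CR               # binomial crossover
            cross[rng.integers(dim)] = True            # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            f = texture_loss(trial)
            if f < fit[i]:                             # greedy selection
                X[i], fit[i] = trial, f
    return X[np.argmin(fit)], fit.min()

best, best_f = differential_evolution()
print(f"best loss: {best_f:.4f}")
```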
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
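The exact adversarially-aware convolution is not detailed in the summary. As one hedged illustration of keeping clean and adversarial inputs statistically apart (a related disentangling trick, not necessarily RobustDet's mechanism), here is a dual-batch-norm sketch.

```python
import torch
import torch.nn as nn

class DualBNConv(nn.Module):
    """Shared convolution with separate BatchNorms for clean vs. adversarial
    inputs, so their statistics (and gradients through normalization) stay
    disentangled. Illustrative only; RobustDet's adversarially-aware
    convolution is a different, learned mechanism."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.bn_clean = nn.BatchNorm2d(c_out)
        self.bn_adv = nn.BatchNorm2d(c_out)

    def forward(self, x, adversarial: bool = False):
        h = self.conv(x)
        return self.bn_adv(h) if adversarial else self.bn_clean(h)

layer = DualBNConv(3, 16)
clean_feat = layer(torch.rand(4, 3, 32, 32), adversarial=False)
adv_feat = layer(torch.rand(4, 3, 32, 32), adversarial=True)
```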
- Towards Deeper Understanding of Camouflaged Object Detection [64.81987999832032]
We argue that the binary segmentation setting fails to fully capture the concept of camouflage.
We present the first triple-task learning framework to simultaneously localize, segment and rank camouflaged objects.
arXiv Detail & Related papers (2022-05-23T14:26:18Z)
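As a minimal illustration of the triple-task setup (localization, segmentation, and ranking trained jointly), here is a hedged weighted-loss sketch; the heads, targets, and weights are all hypothetical stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

backbone = nn.Conv2d(3, 8, 3, padding=1)       # toy shared backbone
loc_head = nn.Conv2d(8, 1, 1)                  # localization map
seg_head = nn.Conv2d(8, 1, 1)                  # segmentation map
rank_head = nn.Linear(8, 1)                    # camouflage-ranking score

def triple_task_loss(image, loc_t, seg_t, rank_t, w=(1.0, 1.0, 0.5)):
    feat = torch.relu(backbone(image))
    bce = nn.functional.binary_cross_entropy_with_logits
    l_loc = bce(loc_head(feat), loc_t)          # localize
    l_seg = bce(seg_head(feat), seg_t)          # segment
    pooled = feat.mean(dim=(2, 3))              # global feature per image
    l_rank = nn.functional.mse_loss(rank_head(pooled), rank_t)  # rank
    return w[0] * l_loc + w[1] * l_seg + w[2] * l_rank

x = torch.rand(2, 3, 32, 32)
loss = triple_task_loss(x, torch.rand(2, 1, 32, 32).round(),
                        torch.rand(2, 1, 32, 32).round(), torch.rand(2, 1))
loss.backward()
```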
- Context-Aware Transfer Attacks for Object Detection [51.65308857232767]
We present a new approach to generate context-aware attacks for object detectors.
We show that by using co-occurrence of objects and their relative locations and sizes as context information, we can successfully generate targeted mis-categorization attacks.
arXiv Detail & Related papers (2021-12-06T18:26:39Z)
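The summary hinges on object co-occurrence as context. Below is a small sketch of how one might score whether a proposed label fits the other objects in a scene; the class list and co-occurrence counts are hypothetical, and the paper's attack additionally uses relative locations and sizes, which are omitted here.

```python
import numpy as np

classes = ["person", "car", "traffic light", "boat"]
# Hypothetical co-occurrence counts gathered from training scenes
# (row i, col j: how often class i appears alongside class j).
cooc = np.array([[50, 40, 30,  2],
                 [40, 60, 35,  1],
                 [30, 35, 20,  0],
                 [ 2,  1,  0, 10]], dtype=float)

def context_score(target: str, scene_objects: list) -> float:
    """Mean normalized co-occurrence of `target` with the scene's objects;
    low scores flag context-inconsistent (mis-categorized) labels."""
    t = classes.index(target)
    probs = cooc[t] / cooc[t].sum()
    return float(np.mean([probs[classes.index(o)] for o in scene_objects]))

scene = ["person", "car", "traffic light"]
print(context_score("car", scene))   # plausible target label
print(context_score("boat", scene))  # context-inconsistent target label
```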
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed under arbitrary viewpoint and different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
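DPA's rendering pipeline lives in Unity, which cannot be reproduced here; the sketch below only illustrates the expectation-over-transformations shape of the objective, summing a detection loss over dense proposals across sampled viewpoints. `render` and `proposal_scores` are hypothetical stand-ins.

```python
import torch

def render(texture, viewpoint):
    """Hypothetical stand-in for the Unity renderer: paints the texture
    onto the vehicle as seen from `viewpoint`."""
    return texture.mean() * viewpoint  # placeholder, differentiable

def proposal_scores(image):
    """Hypothetical stand-in returning confidences for all dense proposals."""
    return torch.sigmoid(torch.stack(
        [image * w for w in torch.linspace(0.5, 1.5, 8)]))

texture = torch.rand(3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([texture], lr=0.01)

for step in range(200):
    # Expectation over random viewpoints/illumination (EOT-style).
    loss = 0.0
    for _ in range(4):
        view = torch.rand(1).item() + 0.5
        img = render(texture, view)
        # Attack *all* dense proposals, not just the top detection.
        loss = loss + proposal_scores(img).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        texture.clamp_(0, 1)  # keep the texture printable/valid
```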
- Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed when evaluation is restricted to slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z)
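The diagnosis amounts to slicing evaluation by box aspect ratio. A minimal sketch of such a slice over COCO-style annotations follows; the 5:1 ratio threshold and the tiny recall metric are assumptions for illustration, not the paper's exact protocol.

```python
def is_slender(box, ratio=5.0):
    """COCO-style box [x, y, w, h]; slender if one side dominates.
    The 5:1 cut is an assumed threshold, not the paper's definition."""
    _, _, w, h = box
    return max(w / h, h / w) >= ratio

annotations = [
    {"bbox": [0, 0, 100, 12], "detected": False},  # slender, missed
    {"bbox": [0, 0, 50, 60],  "detected": True},   # regular, found
    {"bbox": [0, 0, 8, 90],   "detected": True},   # slender, found
]

groups = [("slender", [a for a in annotations if is_slender(a["bbox"])]),
          ("regular", [a for a in annotations if not is_slender(a["bbox"])])]
for name, group in groups:
    recall = sum(a["detected"] for a in group) / len(group)
    print(f"{name}: recall={recall:.2f} over {len(group)} boxes")
```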
- Understanding Object Detection Through An Adversarial Lens [14.976840260248913]
This paper presents a framework for analyzing and evaluating vulnerabilities of deep object detectors under an adversarial lens.
We demonstrate that the proposed framework can serve as a methodical benchmark for analyzing adversarial behaviors and risks in real-time object detection systems.
We conjecture that this framework can also serve as a tool to assess the security risks and the adversarial robustness of deep object detectors to be deployed in real-world applications.
arXiv Detail & Related papers (2020-07-11T18:41:47Z)
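In the spirit of that benchmarking framework, here is a hedged sketch of the outer evaluation grid, attacks crossed with detectors, recording the mAP drop per pair; `attack` and `evaluate_map` are hypothetical stand-ins, not the paper's API.

```python
def evaluate_map(detector: str, images) -> float:
    """Hypothetical stand-in: run `detector` on `images`, return mAP."""
    return 0.5 - 0.1 * sum(images) / len(images)  # placeholder

def attack(name: str, images):
    """Hypothetical stand-in for an adversarial perturbation method."""
    strength = {"none": 0.0, "patch": 1.0, "camouflage": 2.0}[name]
    return [x + strength for x in images]

detectors = ["faster_rcnn", "ssd", "yolov3"]
attacks = ["none", "patch", "camouflage"]
images = [0.1, 0.2, 0.3]

# Cross every attack with every detector and record the mAP drop.
clean = {d: evaluate_map(d, images) for d in detectors}
for a in attacks:
    adv = attack(a, images)
    for d in detectors:
        m = evaluate_map(d, adv)
        print(f"{d:>12} under {a:>10}: mAP {m:.3f} (drop {clean[d] - m:+.3f})")
```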