Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors
- URL: http://arxiv.org/abs/2210.08870v1
- Date: Mon, 17 Oct 2022 09:07:52 GMT
- Authors: Jialiang Sun
- Abstract summary: We propose a dual adversarial camouflage (DE_DAC) method, composed of two stages to fool human eyes and object detectors simultaneously.
In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images.
In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective.
- Score: 0.190365714903665
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent studies reveal that deep neural network (DNN) based object detectors are vulnerable to adversarial attacks in the form of perturbations added to input images, which cause the detectors to produce wrong outputs. Most existing works focus on generating perturbed images, also called adversarial examples, to fool object detectors. Although the generated adversarial examples can retain a certain degree of naturalness, most of them are still easily noticed by human eyes, which limits their application in the real world. To alleviate this problem, we propose a differential evolution based dual adversarial camouflage (DE_DAC) method, composed of two stages that fool human eyes and object detectors simultaneously. Specifically, we obtain a camouflage texture that can be rendered over the surface of the object. In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images, making the object difficult for human eyes to distinguish. In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective. In addition, we introduce the differential evolution algorithm to search for near-optimal areas of the object to attack, improving adversarial performance under attack-area limitations. We also study the performance of adaptive DE_DAC, which can adapt to the environment. Experiments show that our proposed method obtains a good trade-off between fooling human eyes and fooling object detectors across multiple scenes and objects.
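The differential evolution search over candidate attack areas can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: the `fitness` function, the four-value area encoding, and all hyperparameters (`pop_size`, `F`, `CR`, `generations`) are assumptions standing in for the rendering-plus-detector pipeline the paper actually optimizes against.

```python
import random

# Illustrative DE/rand/1/bin search over a normalized attack area on the
# object surface (here: center x, center y, width, height, all in [0, 1]).
def fitness(area):
    cx, cy, w, h = area
    # Toy stand-in for "detector confidence after rendering the camouflage
    # on this area": pretend a small patch near (0.3, 0.7) fools the
    # detector best; lower is better for the attacker.
    return (cx - 0.3) ** 2 + (cy - 0.7) ** 2 + 0.1 * (w + h)

def differential_evolution(fn, dim=4, pop_size=20, F=0.5, CR=0.9,
                           generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    scores = [fn(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct random individuals.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees >= 1 mutated gene
            trial = [
                min(1.0, max(0.0, pop[a][k] + F * (pop[b][k] - pop[c][k])))
                if (rng.random() < CR or k == j_rand) else pop[i][k]
                for k in range(dim)
            ]
            s = fn(trial)
            if s < scores[i]:  # greedy selection keeps the better candidate
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

best_area, best_score = differential_evolution(fitness)
print("best area:", best_area, "score:", best_score)
```

With a real detector in the loop, `fitness` would render the candidate area's local texture onto the 3D object and return the detector's confidence, which is far more expensive but leaves the DE loop itself unchanged.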
Related papers
- Counterfactual Explanations for Face Forgery Detection via Adversarial Removal of Artifacts [23.279652897139286]
Highly realistic AI-generated face forgeries, known as deepfakes, have raised serious social concerns.
We provide counterfactual explanations for face forgery detection from an artifact removal perspective.
Our method achieves over 90% attack success rate and superior attack transferability.
arXiv Detail & Related papers (2024-04-12T09:13:37Z)
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Camouflaged object detection (COD) attempts to segment objects that are visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images, zooming in and out on the camouflaged object.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z)
- HODN: Disentangling Human-Object Feature for HOI Detection [51.48164941412871]
We propose a Human and Object Disentangling Network (HODN) to model the Human-Object Interaction (HOI) relationships explicitly.
Considering that human features are more contributive to interaction, we propose a Human-Guide Linking method to make sure the interaction decoder focuses on the human-centric regions.
Our proposed method achieves competitive performance on both the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2023-08-20T04:12:50Z)
- COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection.
arXiv Detail & Related papers (2023-08-03T03:37:13Z)
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting the degradation caused by various adverse factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with cautions about contour being the common weakness of object detectors with various architecture.
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- Butterfly Effect Attack: Tiny and Seemingly Unrelated Perturbations for Object Detection [0.0]
This work aims to explore and identify tiny and seemingly unrelated perturbations of images in object detection.
We characterize the degree of "unrelatedness" of an object by the pixel distance between the occurred perturbation and the object.
The result successfully demonstrates that (invisible) perturbations on the right part of the image can drastically change the outcome of object detection on the left.
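The "unrelatedness" measure described above can be sketched as a pixel-distance computation. This is a hypothetical illustration: the function name and the bounding-box distance convention are assumptions, not the paper's exact definition.

```python
# "Unrelatedness" of a perturbation to an object, taken here as the minimum
# Euclidean distance from any perturbed pixel to the object's bounding box.
def min_distance_to_box(perturbed_pixels, box):
    """box = (x0, y0, x1, y1) in pixel coordinates; perturbed_pixels is an
    iterable of (x, y) pixel positions touched by the perturbation."""
    x0, y0, x1, y1 = box
    def dist(x, y):
        dx = max(x0 - x, 0, x - x1)  # horizontal gap to the box, 0 if inside
        dy = max(y0 - y, 0, y - y1)  # vertical gap to the box, 0 if inside
        return (dx * dx + dy * dy) ** 0.5
    return min(dist(x, y) for x, y in perturbed_pixels)

# A perturbation far to the right of an object box on the left:
print(min_distance_to_box([(300, 50), (310, 60)], (10, 20, 100, 120)))  # → 200.0
```

A large value of this distance with a successful misdetection is exactly the "seemingly unrelated" effect the entry describes.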
arXiv Detail & Related papers (2022-11-14T16:07:14Z)
- High-resolution Iterative Feedback Network for Camouflaged Object Detection [128.893782016078]
Spotting camouflaged objects that are visually assimilated into the background is tricky for object detection algorithms.
We aim to extract the high-resolution texture details to avoid the detail degradation that causes blurred vision in edges and boundaries.
We introduce a novel HitNet to refine the low-resolution representations by high-resolution features in an iterative feedback manner.
arXiv Detail & Related papers (2022-03-22T11:20:21Z)
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed under arbitrary viewpoint and different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
- CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection [16.384831731988204]
We propose a contextual camouflage attack (CCA) algorithm to influence the performance of object detectors.
In this paper, we use an evolutionary search strategy and adversarial machine learning in interactions with a photo-realistic simulated environment.
The proposed camouflages are validated to be effective against most state-of-the-art object detectors.
arXiv Detail & Related papers (2020-08-19T06:16:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.