Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors
- URL: http://arxiv.org/abs/2210.08870v1
- Date: Mon, 17 Oct 2022 09:07:52 GMT
- Title: Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors
- Authors: Jialiang Sun
- Abstract summary: We propose a dual adversarial camouflage (DE_DAC) method, composed of two stages to fool human eyes and object detectors simultaneously.
In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images.
In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective.
- Score: 0.190365714903665
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent studies reveal that deep neural network (DNN) based object detectors
are vulnerable to adversarial attacks in the form of perturbations added to
images, leading to wrong outputs from the detectors. Most existing works focus
on generating perturbed images, also called adversarial examples, to fool
object detectors. Though the generated adversarial examples can retain a
certain naturalness, most of them are still easily noticed by human eyes, which
limits their further application in the real world. To alleviate this problem,
we propose a differential evolution based dual adversarial camouflage (DE_DAC)
method, composed of two stages that fool human eyes and object detectors
simultaneously. Specifically, we aim to obtain a camouflage texture that can be
rendered over the surface of the object. In the first stage, we optimize the
global texture to minimize the discrepancy between the rendered object and the
scene images, making the object difficult for human eyes to distinguish. In the
second stage, we design three loss functions to optimize the local texture,
making object detectors ineffective. In addition, we introduce the differential
evolution algorithm to search for near-optimal areas of the object to attack,
improving adversarial performance under a given attack-area budget. We also
study an adaptive variant of DE_DAC that can adapt to the environment.
Experiments show that the proposed method achieves a good trade-off between
fooling human eyes and fooling object detectors across multiple scenes and
objects.
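As a rough, assumption-level illustration of the area-search step, the sketch below uses SciPy's differential evolution (a stand-in for the paper's DE implementation) to pick a rectangular attack region on the texture map under an attack-area budget. The render-and-detect pipeline, texture size, and the 10% budget are all hypothetical placeholders, not the authors' code.

```python
# Minimal sketch of DE-based attack-area search, assuming a rectangular
# region parameterized as (cx, cy, w, h) on the texture map. The
# render-and-detect function below is a TOY placeholder standing in for
# "render the camouflaged object, run the detector, read its confidence".
import numpy as np
from scipy.optimize import differential_evolution

TEX_SIZE = 512     # hypothetical texture resolution
MAX_AREA = 0.10    # hypothetical budget: attack at most 10% of the texture

def render_and_detect(mask: np.ndarray) -> float:
    """Toy stand-in: pretend the detector is most sensitive to texture
    changes near the center of the texture map."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 1.0
    dist = np.hypot(ys - TEX_SIZE / 2, xs - TEX_SIZE / 2).mean()
    return float(dist / TEX_SIZE)   # lower "confidence" = stronger attack

def objective(z: np.ndarray) -> float:
    cx, cy, w, h = z
    if w * h > MAX_AREA:                 # enforce the area limitation
        return 1.0                       # worst possible score
    mask = np.zeros((TEX_SIZE, TEX_SIZE), dtype=bool)
    x0 = int((cx - w / 2) * TEX_SIZE)
    y0 = int((cy - h / 2) * TEX_SIZE)
    x1 = x0 + max(int(w * TEX_SIZE), 1)
    y1 = y0 + max(int(h * TEX_SIZE), 1)
    mask[max(y0, 0):y1, max(x0, 0):x1] = True
    return render_and_detect(mask)       # minimize detector confidence

bounds = [(0, 1), (0, 1), (0.05, 0.5), (0.05, 0.5)]  # cx, cy, w, h
result = differential_evolution(objective, bounds, maxiter=50,
                                popsize=15, seed=0)
print("best region (cx, cy, w, h):", result.x)
```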
Related papers
- 2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems [8.717726409183175]
We introduce 2D-Malafide, a novel and lightweight adversarial attack designed to deceive face deepfake detection systems.
Unlike traditional additive noise approaches, 2D-Malafide optimises a small number of filter coefficients to generate robust adversarial perturbations.
Experiments conducted using the FaceForensics++ dataset demonstrate that 2D-Malafide substantially degrades detection performance in both white-box and black-box settings.
arXiv Detail & Related papers (2024-08-26T09:41:40Z)
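Hedging heavily: the sketch below illustrates the general filter-coefficient idea, optimizing a small depthwise convolution kernel (initialized to identity) so that a deepfake detector's "fake" score drops. `detector` is a hypothetical callable returning P(fake); none of this is the 2D-Malafide implementation.

```python
# Sketch of a filter-coefficient attack: optimize a small depthwise
# convolution kernel applied to the image, rather than additive noise.
# `detector(images) -> P(fake)` is an assumed interface.
import torch
import torch.nn.functional as F

def attack_filter(detector, image, ksize=7, steps=200, lr=1e-2):
    # Identity initialization: the filtered image starts equal to the input.
    kernel = torch.zeros(3, 1, ksize, ksize)
    kernel[:, 0, ksize // 2, ksize // 2] = 1.0
    kernel.requires_grad_(True)
    opt = torch.optim.Adam([kernel], lr=lr)
    for _ in range(steps):
        # Depthwise conv: each RGB channel gets its own learned filter.
        filtered = F.conv2d(image, kernel, padding=ksize // 2, groups=3)
        loss = detector(filtered).mean()   # push P(fake) toward zero
        opt.zero_grad()
        loss.backward()
        opt.step()
    return kernel.detach()
```

Because only 3 x ksize x ksize coefficients are optimized, the perturbation stays low-dimensional, which matches the paper's claim of attacking through a small number of filter coefficients.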
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and thus generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network with meta-functional face classification for an enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) methods attempt to segment objects that are visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images and camouflaged objects: zooming in and out.
Our framework consistently outperforms existing state-of-the-art methods on image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z)
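As a simplified, assumption-level sketch of the zoom-in/zoom-out intuition (not the actual collaborative pyramid network), one can run a camouflaged-object segmenter at several scales and fuse the resized logits:

```python
# Toy multi-scale fusion: evaluate the model at several "zoom" levels and
# average the logits back at the original resolution. `model` is any
# segmentation network returning (N, 1, H', W') logits; hypothetical.
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_scale_predict(model, image, scales=(0.5, 1.0, 1.5)):
    n, _, h, w = image.shape
    fused = torch.zeros(n, 1, h, w, device=image.device)
    for s in scales:
        zoomed = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        logits = model(zoomed)
        fused += F.interpolate(logits, size=(h, w), mode="bilinear",
                               align_corners=False)
    return torch.sigmoid(fused / len(scales))   # fused camouflage mask
```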
- COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection based on bi-grained contrastive learning.
arXiv Detail & Related papers (2023-08-03T03:37:13Z)
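The exact bi-grained formulation is in the paper; as a generic stand-in, the sketch below shows an InfoNCE-style contrastive loss of the kind such frameworks typically build on (all tensor shapes are assumptions):

```python
# Generic InfoNCE contrastive loss, NOT the paper's bi-grained objective.
# anchor/positive: (N, D); negatives: (N, K, D); all L2-normalized.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau: float = 0.07):
    pos = (anchor * positive).sum(-1, keepdim=True) / tau       # (N, 1)
    neg = torch.einsum("nd,nkd->nk", anchor, negatives) / tau   # (N, K)
    logits = torch.cat([pos, neg], dim=1)
    # The positive sits at index 0 of each row of logits.
    labels = torch.zeros(anchor.size(0), dtype=torch.long,
                         device=anchor.device)
    return F.cross_entropy(logits, labels)
```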
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to help the model handle uncontrollable weather conditions, significantly resisting the degradation caused by various adverse factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene depth and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
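A minimal sketch of what a twin depth head could look like, assuming a shared backbone feature map feeding two small decoders (one for scene depth, one for object-level depth); layer sizes and names are illustrative, not the paper's architecture:

```python
# Assumption-level twin depth head: two decoders on one shared feature map.
import torch.nn as nn

class TwinDepthHead(nn.Module):
    def __init__(self, in_ch: int = 256):
        super().__init__()
        def head():
            return nn.Sequential(nn.Conv2d(in_ch, 128, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(128, 1, 1))
        self.scene_depth = head()    # dense depth over the whole scene
        self.object_depth = head()   # depth focused on object regions

    def forward(self, feats):
        # Both heads see the same backbone features, as the summary suggests.
        return self.scene_depth(feats), self.object_depth(feats)
```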
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
The vulnerability of deep neural networks to adversarial perturbations has been widely recognized in the computer vision community.
Current algorithms typically detect adversarial patterns through a discriminative decomposition of natural and adversarial data.
We propose a discriminative detector that relies on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
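The paper's detector is built on Krawtchouk moments; as a simpler stand-in for the spatial-frequency idea, the sketch below bins FFT spectral energy radially, assuming adversarial perturbations shift energy toward higher-frequency bands:

```python
# Simplified spatial-frequency feature (FFT band energies), a stand-in
# for the paper's Krawtchouk decomposition, not the actual method.
import numpy as np

def band_energies(gray: np.ndarray, n_bands: int = 4) -> np.ndarray:
    """Return radially-binned, normalized spectral energy for a grayscale image."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max()
    feats = [spec[(r >= r_max * i / n_bands) &
                  (r < r_max * (i + 1) / n_bands)].sum()
             for i in range(n_bands)]
    return np.array(feats) / spec.sum()   # band energies sum to (at most) 1
```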
- Butterfly Effect Attack: Tiny and Seemingly Unrelated Perturbations for Object Detection [0.0]
This work aims to explore and identify tiny, seemingly unrelated perturbations of images in object detection.
We characterize the degree of "unrelatedness" of an object by the pixel distance between the perturbation and the object.
The results demonstrate that (invisible) perturbations on the right part of an image can drastically change the outcome of object detection on the left.
arXiv Detail & Related papers (2022-11-14T16:07:14Z)
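The "unrelatedness" measure above can be made concrete as a minimum pixel distance between the perturbed region and the object's bounding box; the sketch below assumes a boolean perturbation mask and an (x0, y0, x1, y1) box convention:

```python
# Sketch of the pixel-distance "unrelatedness" measure described above.
# Mask and box conventions are assumptions, not the paper's exact code.
import numpy as np

def unrelatedness(perturb_mask: np.ndarray, obj_box: tuple) -> float:
    """Minimum pixel distance from any perturbed pixel to the object box.

    perturb_mask: boolean (H, W) array marking perturbed pixels.
    obj_box: (x0, y0, x1, y1) bounding box in pixel coordinates.
    """
    x0, y0, x1, y1 = obj_box
    ys, xs = np.nonzero(perturb_mask)
    if xs.size == 0:
        return float("inf")   # no perturbed pixels
    # Distance from each perturbed pixel to the closest point of the box
    # (zero if the pixel lies inside the box).
    dx = np.maximum(np.maximum(x0 - xs, xs - x1), 0)
    dy = np.maximum(np.maximum(y0 - ys, ys - y1), 0)
    return float(np.hypot(dx, dy).min())
```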
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical, and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to evaluate different physical attacks fairly and reproducibly.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
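One plausible reading of "dense proposals" (an assumption, not confirmed by the abstract) is that the attack suppresses the detector's confidence over all candidate proposals rather than only the final detections; a minimal loss of that kind might look like:

```python
# Hypothetical dense-proposals objective: penalize the confidence of every
# proposal so the camouflage suppresses detections everywhere on the object.
import torch

def dense_proposals_loss(proposal_scores: torch.Tensor) -> torch.Tensor:
    """proposal_scores: (P,) detector confidences for all proposals."""
    return proposal_scores.clamp(min=0).pow(2).mean()
```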
- CCA: Exploring the Possibility of Contextual Camouflage Attack on Object Detection [16.384831731988204]
We propose a contextual camouflage attack (CCA) algorithm to influence the performance of object detectors.
In this paper, we use an evolutionary search strategy and adversarial machine learning in interactions with a photo-realistic simulated environment.
The proposed camouflages are validated to be effective against most state-of-the-art object detectors.
arXiv Detail & Related papers (2020-08-19T06:16:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.