Can You Spot the Chameleon? Adversarially Camouflaging Images from
Co-Salient Object Detection
- URL: http://arxiv.org/abs/2009.09258v5
- Date: Mon, 18 Apr 2022 02:39:41 GMT
- Title: Can You Spot the Chameleon? Adversarially Camouflaging Images from
Co-Salient Object Detection
- Authors: Ruijun Gao, Qing Guo, Felix Juefei-Xu, Hongkai Yu, Huazhu Fu, Wei
Feng, Yang Liu, Song Wang
- Abstract summary: Co-salient object detection (CoSOD) has recently achieved significant progress and played a key role in retrieval-related tasks.
In this paper, we identify a novel task: adversarial co-saliency attack.
We propose the very first black-box joint adversarial exposure and noise attack (Jadena)
- Score: 46.95646405874199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Co-salient object detection (CoSOD) has recently achieved significant
progress and played a key role in retrieval-related tasks. However, it
inevitably poses an entirely new safety and security issue, i.e., highly
personal and sensitive content can potentially be extracted by powerful CoSOD
methods. In this paper, we address this problem from the perspective of
adversarial attacks and identify a novel task: adversarial co-saliency attack.
Specifically, given an image selected from a group of images containing some
common and salient objects, we aim to generate an adversarial version that can
mislead CoSOD methods to predict incorrect co-salient regions. Note that,
compared with general white-box adversarial attacks for classification, this
new task faces two additional challenges: (1) low success rate due to the
diverse appearance of images in the group; (2) low transferability across CoSOD
methods due to the considerable difference between CoSOD pipelines. To address
these challenges, we propose the very first black-box joint adversarial
exposure and noise attack (Jadena), where we jointly and locally tune the
exposure and additive perturbations of the image according to a newly designed
high-feature-level contrast-sensitive loss function. Our method, without any
information on the state-of-the-art CoSOD methods, leads to significant
performance degradation on various co-saliency detection datasets and makes the
co-salient objects undetectable. This can have strong practical benefits in
properly securing the large number of personal photos currently shared on the
Internet. Moreover, our method can potentially be utilized as a metric for
evaluating the robustness of CoSOD methods.
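The attack described in the abstract hinges on two jointly and locally tuned perturbations, an exposure adjustment and additive noise, driven by a high-feature-level contrast-sensitive loss and requiring no access to the target CoSOD models. The following is a minimal sketch of that general recipe only: the frozen VGG-16 surrogate feature extractor, the simplified foreground/background contrast loss, the coarse exposure grid, and all hyper-parameters are illustrative assumptions, not the authors' implementation.
```python
# A minimal sketch, NOT the authors' code: it only illustrates jointly
# optimizing a local exposure map and bounded additive noise against a
# surrogate deep-feature "contrast" objective.
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen surrogate features (the attack is black-box w.r.t. the CoSOD models).
# "IMAGENET1K_V1" uses the torchvision >= 0.13 weights API.
features = models.vgg16(weights="IMAGENET1K_V1").features[:23].to(device).eval()
for p in features.parameters():
    p.requires_grad_(False)


def contrast_loss(feat, mask):
    """Simplified stand-in for a contrast-sensitive loss: pull the mean feature
    of the (rough) object region toward the background mean, so the object no
    longer stands out at the feature level."""
    m = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    fg = (feat * m).sum(dim=(2, 3)) / (m.sum(dim=(2, 3)) + 1e-6)
    bg = (feat * (1 - m)).sum(dim=(2, 3)) / ((1 - m).sum(dim=(2, 3)) + 1e-6)
    return (fg - bg).pow(2).mean()


def joint_exposure_noise_attack(img, mask, steps=200, lr=0.01, eps=8 / 255):
    """img: (1,3,H,W) in [0,1]; mask: (1,1,H,W) rough saliency mask.
    Returns an image with locally adjusted exposure plus bounded noise."""
    img, mask = img.to(device), mask.to(device)
    log_expo = torch.zeros(1, 3, 16, 16, device=device, requires_grad=True)  # coarse exposure grid
    noise = torch.zeros_like(img, requires_grad=True)                        # additive perturbation
    opt = torch.optim.Adam([log_expo, noise], lr=lr)
    for _ in range(steps):
        expo = torch.exp(F.interpolate(log_expo, size=img.shape[-2:],
                                       mode="bilinear", align_corners=False))
        adv = torch.clamp(img * expo + torch.clamp(noise, -eps, eps), 0.0, 1.0)
        loss = contrast_loss(features(adv), mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        expo = torch.exp(F.interpolate(log_expo, size=img.shape[-2:],
                                       mode="bilinear", align_corners=False))
        return torch.clamp(img * expo + torch.clamp(noise, -eps, eps), 0.0, 1.0)
```
Optimizing the exposure map in log space keeps the multiplicative factor positive, and clamping the noise to a small eps mirrors the usual additive-perturbation budget; the paper's actual loss, tiling scheme, and optimizer settings may differ.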
Related papers
- Robust infrared small target detection using self-supervised and a contrario paradigms [1.2224547302812558]
We introduce a novel approach that combines a contrario paradigm with Self-Supervised Learning (SSL) to improve Infrared Small Target Detection (IRSTD)
The integration of an a contrario criterion into a YOLO detection head enhances feature map responses for small and unexpected objects while effectively controlling false alarms.
Our findings show that instance discrimination methods outperform masked image modeling strategies when applied to YOLO-based small object detection.
arXiv Detail & Related papers (2024-10-09T21:08:57Z)
- CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection [22.82243087156918]
Co-salient object detection (CoSOD) aims to identify the common and salient (usually in the foreground) regions across a given group of images.
However, CoSOD methods can be easily affected by adversarial perturbations, leading to substantial accuracy reduction.
We propose a novel robustness enhancement framework by first learning the concept of the co-salient objects based on the input group images.
arXiv Detail & Related papers (2024-03-27T13:33:14Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- CGUA: Context-Guided and Unpaired-Assisted Weakly Supervised Person Search [54.106662998673514]
We introduce a Context-Guided and Unpaired-Assisted (CGUA) weakly supervised person search framework.
Specifically, we propose a novel Context-Guided Cluster (CGC) algorithm to leverage context information in the clustering process.
Our method achieves comparable or better performance to the state-of-the-art supervised methods by leveraging more diverse unlabeled data.
arXiv Detail & Related papers (2022-03-27T13:57:30Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)