CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection
- URL: http://arxiv.org/abs/2403.18554v2
- Date: Fri, 12 Apr 2024 02:27:09 GMT
- Title: CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection
- Authors: Jiayi Zhu, Qing Guo, Felix Juefei-Xu, Yihao Huang, Yang Liu, Geguang Pu
- Abstract summary: Co-salient object detection (CoSOD) aims to identify the common and salient (usually in the foreground) regions across a given group of images.
State-of-the-art CoSOD methods can be easily affected by adversarial perturbations, leading to substantial accuracy reduction.
We propose a novel robustness enhancement framework by first learning the concept of the co-salient objects based on the input group images.
- Score: 22.82243087156918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Co-salient object detection (CoSOD) aims to identify the common and salient (usually in the foreground) regions across a given group of images. Although achieving significant progress, state-of-the-art CoSODs could be easily affected by some adversarial perturbations, leading to substantial accuracy reduction. The adversarial perturbations can mislead CoSODs but do not change the high-level semantic information (e.g., concept) of the co-salient objects. In this paper, we propose a novel robustness enhancement framework by first learning the concept of the co-salient objects based on the input group images and then leveraging this concept to purify adversarial perturbations, which are subsequently fed to CoSODs for robustness enhancement. Specifically, we propose CosalPure containing two modules, i.e., group-image concept learning and concept-guided diffusion purification. For the first module, we adopt a pre-trained text-to-image diffusion model to learn the concept of co-salient objects within group images where the learned concept is robust to adversarial examples. For the second module, we map the adversarial image to the latent space and then perform diffusion generation by embedding the learned concept into the noise prediction function as an extra condition. Our method can effectively alleviate the influence of the SOTA adversarial attack containing different adversarial patterns, including exposure and noise. The extensive results demonstrate that our method could enhance the robustness of CoSODs significantly.
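The purification stage described in the abstract (mapping the adversarial image into the latent space, partially noising it, then denoising with the learned concept as an extra condition) can be sketched roughly as follows. This is a minimal illustration with hypothetical interfaces: `noise_pred` stands in for a pre-trained diffusion noise predictor conditioned on a concept embedding, and the linear noise schedule is an assumption, not the authors' configuration.

```python
import math
import random

def ddim_purify(z, concept, noise_pred, steps=10, noise_level=0.5):
    """Concept-guided purification sketch (hypothetical interface):
    partially noise the adversarial latent z, then denoise it with a
    noise predictor conditioned on the learned concept embedding."""
    # Linear alpha-bar schedule from 1.0 (clean) down to 1 - noise_level.
    abar = [1.0 - noise_level * t / steps for t in range(steps + 1)]
    rng = random.Random(0)  # fixed seed for a reproducible sketch
    # Forward diffusion to the chosen noise level, q(z_T | z_0):
    # adversarial perturbations are drowned out while semantics survive.
    zt = [math.sqrt(abar[steps]) * x
          + math.sqrt(1 - abar[steps]) * rng.gauss(0, 1) for x in z]
    # Deterministic DDIM-style reverse steps; the concept is fed to the
    # noise predictor as an extra condition at every step.
    for t in range(steps, 0, -1):
        eps = noise_pred(zt, t, concept)
        # Predicted clean latent, then re-projected to the t-1 noise level.
        z0 = [(zi - math.sqrt(1 - abar[t]) * ei) / math.sqrt(abar[t])
              for zi, ei in zip(zt, eps)]
        zt = [math.sqrt(abar[t - 1]) * z0i
              + math.sqrt(1 - abar[t - 1]) * ei
              for z0i, ei in zip(z0, eps)]
    return zt
```

In the real pipeline the purified latent would be decoded back to an image and passed to the CoSOD model; here the encoder/decoder and the UNet are abstracted away.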
Related papers
- Six-CD: Benchmarking Concept Removals for Benign Text-to-image Diffusion Models [58.74606272936636]
Text-to-image (T2I) diffusion models have shown exceptional capabilities in generating images that closely correspond to textual prompts.
The models could be exploited for malicious purposes, such as generating images with violence or nudity, or creating unauthorized portraits of public figures in inappropriate contexts.
Concept removal methods have been proposed to modify diffusion models to prevent the generation of malicious and unwanted concepts.
arXiv Detail & Related papers (2024-06-21T03:58:44Z) - ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning [10.201633236997104]
Large-scale text-to-image diffusion models have demonstrated impressive image-generation capabilities.
We present ConceptPrune, wherein we first identify critical regions within pre-trained models responsible for generating undesirable concepts.
Experiments across a range of concepts including artistic styles, nudity, object erasure, and gender debiasing demonstrate that target concepts can be efficiently erased by pruning a tiny fraction.
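As a rough illustration of the neuron-pruning idea summarized above, the sketch below ranks neurons by the gap between their activations on concept-bearing and neutral prompts and zeroes the weights of the top fraction. The names and the scoring rule are hypothetical stand-ins, not ConceptPrune's actual procedure.

```python
def prune_skilled_neurons(acts_concept, acts_neutral, weights, frac=0.02):
    """Sketch of 'skilled neuron' pruning (hypothetical interface): rank
    neurons by how much more they fire on concept prompts than on
    neutral prompts, and ablate the top `frac` fraction by zeroing
    their weights."""
    n = len(weights)
    # Importance score: mean activation gap per neuron.
    scores = [c - u for c, u in zip(acts_concept, acts_neutral)]
    k = max(1, int(n * frac))
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    pruned = weights[:]
    for i in top:
        pruned[i] = 0.0  # ablate the concept-skilled neuron
    return pruned
```

Only a tiny fraction of weights is touched, which is the point of the approach: the concept is erased while the rest of the model is left intact.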
arXiv Detail & Related papers (2024-05-29T16:19:37Z) - Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient [20.091446060893638]
This paper proposes a concept domain correction framework for unlearning concepts in diffusion models.
By aligning the output domains of sensitive concepts and anchor concepts through adversarial training, we enhance the generalizability of the unlearning results.
arXiv Detail & Related papers (2024-05-24T07:47:36Z) - IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating the process of scene recapturing or rerendering as specified by a geometrical transformation.
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
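The resampling idea can be illustrated with a minimal bilinear re-rendering under a mild geometric transform, which disrupts pixel-aligned adversarial noise. This is a toy sketch on list-of-lists grayscale images, not IRAD's implicit-representation method.

```python
def resample(img, scale=0.9):
    """Image-resampling defense sketch: re-render the image through a
    geometric transform (here, a mild rescale) with bilinear
    interpolation. `img` is a list of rows of grayscale floats."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Source coordinates under the inverse transform, clamped
            # to the image bounds.
            sy = min(y * scale, h - 1.0)
            sx = min(x * scale, w - 1.0)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # Bilinear blend of the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```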
arXiv Detail & Related papers (2023-10-18T11:19:32Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances the robustness, with gains of 15.3% mIoU, compared with advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - Degeneration-Tuning: Using Scrambled Grid shield Unwanted Concepts from Stable Diffusion [106.42918868850249]
We propose a novel strategy named Degeneration-Tuning (DT) to shield contents of unwanted concepts from SD weights.
As this adaptation occurs at the level of the model's weights, the SD, after DT, can be grafted onto other conditional diffusion frameworks like ControlNet to shield unwanted concepts.
arXiv Detail & Related papers (2023-08-02T03:34:44Z) - Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z) - Unpaired Deep Image Dehazing Using Contrastive Disentanglement Learning [36.24651058888557]
We present an effective image dehazing network learned from an unpaired set of clear and hazy images.
We show that our method performs favorably against existing state-of-the-art unpaired dehazing approaches.
arXiv Detail & Related papers (2022-03-15T06:45:03Z) - Error Diffusion Halftoning Against Adversarial Examples [85.11649974840758]
Adversarial examples contain carefully crafted perturbations that can fool deep neural networks into making wrong predictions.
We propose a new image transformation defense based on error diffusion halftoning, and combine it with adversarial training to defend against adversarial examples.
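Error diffusion halftoning itself is a classic algorithm; a minimal Floyd-Steinberg sketch on a grayscale image (a list of rows of floats in [0, 1]) looks like this. The defense summarized above combines such a transformation with adversarial training; this sketch shows only the halftoning step.

```python
def floyd_steinberg(img, levels=2):
    """Error-diffusion halftoning sketch (Floyd-Steinberg kernel):
    quantize each pixel to one of `levels` gray values and push the
    residual error onto unvisited neighbours, which scrambles the
    fine-grained structure adversarial perturbations rely on.
    Returns a new image; the input is not modified."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y][x]
            new = round(old / step) * step  # nearest quantization level
            out[y][x] = new
            err = old - new
            # Distribute the quantization error with the classic
            # 7/16, 3/16, 5/16, 1/16 kernel.
            if x + 1 < w:
                out[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1][x - 1] += err * 3 / 16
                out[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1][x + 1] += err * 1 / 16
    return out
```

With `levels=2` every output pixel is binary, yet local averages track the input intensities, which is why the transformation preserves perceptual content while destroying pixel-precise perturbations.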
arXiv Detail & Related papers (2021-01-23T07:55:02Z) - Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection [46.95646405874199]
Co-salient object detection (CoSOD) has recently achieved significant progress and played a key role in retrieval-related tasks.
In this paper, we identify a novel task: adversarial co-saliency attack.
We propose the very first black-box joint adversarial exposure and noise attack (Jadena).
arXiv Detail & Related papers (2020-09-19T15:43:46Z) - Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.