Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on
Segmentation Models
- URL: http://arxiv.org/abs/2311.14450v1
- Date: Fri, 24 Nov 2023 12:57:34 GMT
- Title: Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on
Segmentation Models
- Authors: Francesco Croce, Matthias Hein
- Abstract summary: General purpose segmentation models are able to generate (semantic) segmentation masks from a variety of prompts.
In particular, input images are pre-processed by an image encoder to obtain embedding vectors which are later used for mask predictions.
We show that even imperceptible $\ell_\infty$-bounded perturbations of radius $\epsilon=1/255$ are often sufficient to drastically modify the masks predicted with point, box and text prompts.
- Score: 61.46999584579775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: General purpose segmentation models are able to generate (semantic)
segmentation masks from a variety of prompts, including visual (points, boxes,
etc.) and textual (object names) ones. In particular, input images are
pre-processed by an image encoder to obtain embedding vectors which are later
used for mask predictions. Existing adversarial attacks target the end-to-end
tasks, i.e. aim at altering the segmentation mask predicted for a specific
image-prompt pair. However, this requires running an individual attack for each
new prompt for the same image. We propose instead to generate prompt-agnostic
adversarial attacks by maximizing the $\ell_2$-distance, in the latent space,
between the embedding of the original and perturbed images. Since the encoding
process only depends on the image, distorted image representations will cause
perturbations in the segmentation masks for a variety of prompts. We show that
even imperceptible $\ell_\infty$-bounded perturbations of radius
$\epsilon=1/255$ are often sufficient to drastically modify the masks predicted
with point, box and text prompts by recently proposed foundation models for
segmentation. Moreover, we explore the possibility of creating universal, i.e.
non image-specific, attacks which can be readily applied to any input without
further computational cost.
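The attack described in the abstract admits a compact PGD-style implementation. The sketch below is a minimal, hypothetical illustration rather than the authors' code: the `encoder(images) -> embeddings` interface, the function name `prompt_agnostic_attack`, and the `steps`/`step_size` defaults are assumptions; it further assumes a frozen encoder and pixel values in [0, 1]. It performs projected gradient ascent on the $\ell_2$ embedding distance under an $\ell_\infty$ budget of $\epsilon=1/255$.

```python
import torch

def prompt_agnostic_attack(encoder, image, eps=1/255, steps=100, step_size=None):
    """Maximize ||f(x + delta) - f(x)||_2 subject to ||delta||_inf <= eps (PGD ascent).

    Assumes `encoder` is a frozen image encoder (eval mode) mapping a batch of
    images to embeddings, independent of any prompt.
    """
    if step_size is None:
        step_size = eps / 4
    with torch.no_grad():
        clean_emb = encoder(image)  # embedding of the unperturbed image (prompt-independent)

    # Random start inside the l_inf ball, then iterative sign-gradient ascent.
    delta = torch.empty_like(image).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        adv_emb = encoder((image + delta).clamp(0, 1))
        # Objective to maximize: l2 distance between clean and perturbed embeddings.
        loss = (adv_emb - clean_emb).flatten(1).norm(p=2, dim=1).mean()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()                   # ascend, since we maximize
            delta.clamp_(-eps, eps)                            # project onto the l_inf ball
            delta.copy_((image + delta).clamp(0, 1) - image)   # keep pixels in [0, 1]
    return (image + delta).detach()
```

A universal (non image-specific) variant, as mentioned in the abstract, would optimize a single perturbation over a batch or dataset of images (e.g. by averaging the loss across images at each step), so the precomputed perturbation can then be added to any new input without further optimization.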
Related papers
- Leveraging Hallucinations to Reduce Manual Prompt Dependency in Promptable Segmentation [74.04806143723597]
We introduce an iterative Prompt-Mask Cycle generation framework (ProMaC) with a prompt generator and a mask generator.
The prompt generator uses multi-scale chain-of-thought prompting, initially exploiting hallucinations to extract extended contextual knowledge from a test image.
The generated masks iteratively induce the prompt generator to focus more on task-relevant image areas and reduce irrelevant hallucinations, resulting jointly in better prompts and masks.
arXiv Detail & Related papers (2024-08-27T17:06:22Z) - Unsegment Anything by Simulating Deformation [67.10966838805132]
"Anything Unsegmentable" is a task to grant any image "the right to be unsegmented"
We aim to achieve transferable adversarial attacks against all prompt-based segmentation models.
Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks.
arXiv Detail & Related papers (2024-04-03T09:09:42Z) - Variance-insensitive and Target-preserving Mask Refinement for
Interactive Image Segmentation [68.16510297109872]
Point-based interactive image segmentation can ease the burden of mask annotation in applications such as semantic segmentation and image editing.
We introduce a novel method, Variance-Insensitive and Target-Preserving Mask Refinement to enhance segmentation quality with fewer user inputs.
Experiments on GrabCut, Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art performance in interactive image segmentation.
arXiv Detail & Related papers (2023-12-22T02:31:31Z) - DFormer: Diffusion-guided Transformer for Universal Image Segmentation [86.73405604947459]
The proposed DFormer views the universal image segmentation task as a denoising process using a diffusion model.
At inference, our DFormer directly predicts the masks and corresponding categories from a set of randomly-generated masks.
Our DFormer outperforms the recent diffusion-based panoptic segmentation method Pix2Seq-D with a gain of 3.6% on MS COCO val 2017 set.
arXiv Detail & Related papers (2023-06-06T06:33:32Z) - Semantic-guided Multi-Mask Image Harmonization [10.27974860479791]
We propose a new semantic-guided multi-mask image harmonization task.
In this work, we propose a novel way to edit inharmonious images by predicting a series of operator masks.
arXiv Detail & Related papers (2022-07-24T11:48:49Z) - Differentiable Soft-Masked Attention [115.5770357189209]
"Differentiable Soft-Masked Attention" is used for the task of WeaklySupervised Video Object.
We develop a transformer-based network for training, but can also benefit from cycle consistency training on a video with just one annotated frame.
arXiv Detail & Related papers (2022-06-01T02:05:13Z) - Few-shot Semantic Image Synthesis Using StyleGAN Prior [8.528384027684192]
We present a training strategy that performs pseudo labeling of semantic masks using the StyleGAN prior.
Our key idea is to construct a simple mapping between the StyleGAN feature and each semantic class from a few examples of semantic masks.
Although the pseudo semantic masks might be too coarse for previous approaches that require pixel-aligned masks, our framework can synthesize high-quality images from not only dense semantic masks but also sparse inputs such as landmarks and scribbles.
arXiv Detail & Related papers (2021-03-27T11:04:22Z) - Proposal-Free Volumetric Instance Segmentation from Latent
Single-Instance Masks [16.217524435617744]
This work introduces a new proposal-free instance segmentation method.
It builds on single-instance segmentation masks predicted across the entire image in a sliding window style.
In contrast to related approaches, our method concurrently predicts all masks, one for each pixel, and thus resolves any conflict jointly across the entire image.
arXiv Detail & Related papers (2020-09-10T17:09:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.