Leveraging Local Patch Differences in Multi-Object Scenes for Generative
Adversarial Attacks
- URL: http://arxiv.org/abs/2209.09883v1
- Date: Tue, 20 Sep 2022 17:36:32 GMT
- Title: Leveraging Local Patch Differences in Multi-Object Scenes for Generative
Adversarial Attacks
- Authors: Abhishek Aich, Shasha Li, Chengyu Song, M. Salman Asif, Srikanth V.
Krishnamurthy, Amit K. Roy-Chowdhury
- Abstract summary: We tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images.
We propose a novel generative attack (called Local Patch Difference or LPD-Attack) in which a contrastive loss function exploits these local differences in the feature space of multi-object scenes.
Our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
- Score: 48.66027897216473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art generative model-based attacks against image classifiers
overwhelmingly focus on single-object (i.e., single dominant object) images.
Different from such settings, we tackle a more practical problem of generating
adversarial perturbations using multi-object (i.e., multiple dominant objects)
images as they are representative of most real-world scenes. Our goal is to
design an attack strategy that can learn from such natural scenes by leveraging
the local patch differences that occur inherently in such images (e.g., the
difference between a local patch on the object 'person' and one on the object
'bike' in a traffic scene). Our key idea is that, to misclassify an adversarial
multi-object image, each local patch in the image should confuse the victim
classifier. Based on this, we propose a novel generative attack (called Local
Patch Difference or LPD-Attack) in which a contrastive loss function exploits
these local differences in the feature space of multi-object scenes to
optimize the perturbation generator. Through various experiments across diverse
victim convolutional neural networks, we show that our approach outperforms
baseline generative attacks with highly transferable perturbations when
evaluated under different white-box and black-box settings.
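The abstract names the ingredients of LPD-Attack (local patches, a contrastive loss in feature space, a perturbation generator) but not the exact formulation. Below is a minimal Python/PyTorch sketch of one plausible patch-level contrastive loss, assuming the victim CNN exposes a mid-layer feature map whose spatial cells serve as the "local patches" and an InfoNCE-style objective that pushes each adversarial patch away from its aligned clean patch; the function names and temperature are hypothetical, and the real LPD-Attack loss may differ.

```python
# Hypothetical sketch of a patch-level contrastive loss for a generative
# attack; NOT the paper's exact LPD-Attack formulation.
import torch
import torch.nn.functional as F

def patch_features(feat_map):
    """Flatten a (B, C, H, W) victim feature map into (B, H*W, C) patch vectors."""
    return feat_map.flatten(2).transpose(1, 2)

def patch_contrastive_loss(adv_feat, clean_feat, tau=0.1):
    """Push each adversarial patch away from its aligned clean patch
    (so every local patch confuses the victim), InfoNCE-style."""
    adv = F.normalize(patch_features(adv_feat), dim=-1)    # (B, P, C)
    clean = F.normalize(patch_features(clean_feat), dim=-1)
    sim = torch.bmm(adv, clean.transpose(1, 2)) / tau      # (B, P, P)
    labels = torch.arange(sim.size(1), device=sim.device)
    labels = labels.unsqueeze(0).expand(sim.size(0), -1).flatten()
    # Ordinary InfoNCE would minimize this cross-entropy (maximize the
    # diagonal similarities); the attack negates it to break alignment.
    return -F.cross_entropy(sim.flatten(0, 1), labels)
```

In a full attack, the perturbation generator would be trained to minimize this loss on images passed through the white-box victim, and the resulting perturbations would then be evaluated for transfer to black-box models.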
Related papers
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z) - Unsegment Anything by Simulating Deformation [67.10966838805132]
"Anything Unsegmentable" is a task to grant any image "the right to be unsegmented"
We aim to achieve transferable adversarial attacks against all prompt-based segmentation models.
Our approach focuses on disrupting image encoder features to achieve prompt-agnostic attacks.
arXiv Detail & Related papers (2024-04-03T09:09:42Z) - HEAP: Unsupervised Object Discovery and Localization with Contrastive
Grouping [29.678756772610797]
Unsupervised object discovery and localization aims to detect or segment objects in an image without any supervision.
Recent efforts have demonstrated a notable potential to identify salient foreground objects by utilizing self-supervised transformer features.
To address these problems, we introduce the Hierarchical mErging framework via contrAstive grouPing (HEAP).
arXiv Detail & Related papers (2023-12-29T06:46:37Z) - AnyDoor: Zero-shot Object-level Image Customization [63.44307304097742]
This work presents AnyDoor, a diffusion-based image generator with the power to teleport target objects to new scenes at user-specified locations.
Our model is trained only once and effortlessly generalizes to diverse object-scene combinations at the inference stage.
arXiv Detail & Related papers (2023-07-18T17:59:02Z) - Attacking Object Detector Using A Universal Targeted Label-Switch Patch [44.44676276867374]
Adversarial attacks against deep learning-based object detectors (ODs) have been studied extensively in the past few years.
However, no prior work has proposed a misclassification attack on ODs in which the patch is applied to the target object itself.
We propose a novel, universal, targeted, label-switch attack against the state-of-the-art object detector, YOLO.
arXiv Detail & Related papers (2022-11-16T12:08:58Z) - Object-Attentional Untargeted Adversarial Attack [11.800889173823945]
We propose an object-attentional adversarial attack method for untargeted attacks.
Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection region from HVPNet.
Then, we perform an adversarial attack only on the detected object region by leveraging the Simple Black-box Adversarial Attack (SimBA); a minimal sketch of this region-restricted idea appears after this list.
arXiv Detail & Related papers (2022-10-16T07:45:13Z) - GAMA: Generative Adversarial Multi-Object Scene Attacks [48.33120361498787]
This paper presents the first approach to using generative models for adversarial attacks on multi-object scenes.
We call this attack approach Generative Adversarial Multi-object scene Attacks (GAMA).
arXiv Detail & Related papers (2022-09-20T06:40:54Z) - Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
arXiv Detail & Related papers (2021-07-30T19:24:07Z)