Attack-SAM: Towards Attacking Segment Anything Model With Adversarial
Examples
- URL: http://arxiv.org/abs/2305.00866v2
- Date: Mon, 8 May 2023 07:36:15 GMT
- Title: Attack-SAM: Towards Attacking Segment Anything Model With Adversarial
Examples
- Authors: Chenshuang Zhang, Chaoning Zhang, Taegoo Kang, Donghun Kim, Sung-Ho
Bae, In So Kweon
- Abstract summary: Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool a model into making wrong predictions with imperceptible perturbations.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
- Score: 68.5719552703438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segment Anything Model (SAM) has attracted significant attention recently,
due to its impressive performance on various downstream tasks in a zero-shot
manner. The computer vision (CV) field may follow natural language processing
(NLP) in moving from task-specific vision models toward foundation models.
However, deep vision models are widely recognized as vulnerable to adversarial
examples, which fool the model into making wrong predictions with imperceptible
perturbations. Such vulnerability to adversarial
attacks causes serious concerns when applying deep models to security-sensitive
applications. Therefore, it is critical to know whether the vision foundation
model SAM can also be fooled by adversarial attacks. To the best of our
knowledge, our work is the first of its kind to conduct a comprehensive
investigation on how to attack SAM with adversarial examples. With the basic
attack goal set to mask removal, we investigate the adversarial robustness of
SAM in the full white-box setting and transfer-based black-box settings. Beyond
the basic goal of mask removal, we further find that it is possible to
generate any desired mask via the adversarial attack.
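
To make the mask-removal goal concrete, the following is a minimal PGD-style sketch, not the paper's released code: it assumes a differentiable wrapper `predict_mask_logits(image, prompt)` that returns SAM's per-pixel mask logits for a point prompt, and uses a simple clamped-logit loss as a stand-in for the paper's exact objective.

```python
import torch

def mask_removal_attack(predict_mask_logits, image, prompt,
                        epsilon=8 / 255, alpha=2 / 255, steps=10):
    """White-box PGD sketch: push SAM's mask logits for the given prompt
    toward negative values so that no mask survives thresholding."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = predict_mask_logits(x_adv, prompt)  # per-pixel mask logits (assumed wrapper)
        # Stand-in mask-removal loss: minimize the (clamped) logits so
        # foreground pixels fall below the mask threshold.
        loss = torch.clamp(logits, min=-10.0).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                       # descent step on the logits
            x_adv = image + (x_adv - image).clamp(-epsilon, epsilon)  # project into the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0).detach()                    # keep a valid image
    return x_adv
```

The attack is considered successful when the thresholded logits of the adversarial image no longer yield a mask for the prompt; the targeted variant mentioned in the abstract would instead steer the logits toward a desired mask.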
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Hiding-in-Plain-Sight (HiPS) Attack on CLIP for Targetted Object Removal from Images [3.537369004801589]
Hiding-in-Plain-Sight (HiPS) attacks subtly modify model predictions by selectively concealing target object(s).
We propose two HiPS attack variants, HiPS-cls and HiPS-cap, and demonstrate their effectiveness in transferring to downstream image captioning models.
arXiv Detail & Related papers (2024-10-16T20:11:32Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that backdoor attacks can remain effective in practical scenarios even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- SAM Meets UAP: Attacking Segment Anything Model With Universal Adversarial Perturbation [61.732503554088524]
We investigate whether it is possible to attack Segment Anything Model (SAM) with image-agnostic Universal Adversarial Perturbation (UAP).
We propose a novel perturbation-centric framework that results in a UAP generation method based on self-supervised contrastive learning (CL)
The effectiveness of our proposed CL-based UAP generation method is validated by both quantitative and qualitative results.
arXiv Detail & Related papers (2023-10-19T02:49:24Z)
- Black-box Targeted Adversarial Attack on Segment Anything (SAM) [24.927514923402775]
This work aims to achieve a targeted adversarial attack (TAA) on Segment Anything Model (SAM)
Specifically, under a certain prompt, the goal is to make the predicted mask of an adversarial example resemble that of a given target image.
We propose a novel regularization loss to enhance the cross-model transferability by increasing the feature dominance of adversarial images over random natural images; a simplified sketch of the underlying feature-matching attack appears after this list.
arXiv Detail & Related papers (2023-10-16T02:09:03Z)
- Understanding the Robustness of Randomized Feature Defense Against Query-Based Adversarial Attacks [23.010308600769545]
Deep neural networks are vulnerable to adversarial examples: inputs close to the original image that nonetheless cause the model to misclassify.
We propose a simple and lightweight defense against black-box attacks by adding random noise to hidden features at intermediate layers of the model at inference time; a minimal sketch of this mechanism appears after this list.
Our method effectively enhances the model's resilience against both score-based and decision-based black-box attacks.
arXiv Detail & Related papers (2023-10-01T03:53:23Z)
- A Review of Adversarial Attacks in Computer Vision [16.619382559756087]
Adversarial attacks can be invisible to human eyes but can cause deep learning models to misclassify.
Adversarial attacks can be divided into white-box attacks, in which the attacker knows the model's parameters and gradients, and black-box attacks, in which the attacker can only observe the model's inputs and outputs (the one-step FGSM attack sketched after this list is a canonical white-box example).
arXiv Detail & Related papers (2023-08-15T09:43:10Z)
- On the Robustness of Segment Anything [46.669794757467166]
We aim to study the testing-time robustness of SAM under adversarial scenarios and common corruptions.
We find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption.
arXiv Detail & Related papers (2023-05-25T16:28:30Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
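
The mechanisms mentioned in a few of the related papers above can be illustrated with short, hedged sketches; these are illustrations under stated assumptions, not the papers' implementations. For the black-box targeted attack on SAM, the core feature-matching step pushes the adversarial image's encoder embedding toward that of a target image (here with plain PGD; `encoder` stands for any differentiable image encoder such as SAM's ViT image encoder, and the paper's regularization over random natural images is omitted):

```python
import torch
import torch.nn.functional as F

def targeted_feature_attack(encoder, image, target_image,
                            epsilon=8 / 255, alpha=2 / 255, steps=40):
    """PGD sketch that matches the adversarial image's features to a target's,
    so that the same prompt yields a mask resembling the target image's mask."""
    with torch.no_grad():
        target_feat = encoder(target_image)             # fixed target embedding
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.mse_loss(encoder(x_adv), target_feat)  # feature-matching objective
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                       # minimize feature distance
            x_adv = image + (x_adv - image).clamp(-epsilon, epsilon)  # L_inf projection
            x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```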
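
For the randomized feature defense, the core mechanism can be sketched with forward hooks that add Gaussian noise to selected intermediate features at inference time; the layer names and noise scale below are illustrative assumptions, not values from the paper.

```python
import torch

def add_feature_noise(model, layer_names, sigma=0.05):
    """Register forward hooks that add N(0, sigma^2) noise to the outputs of
    the named sub-modules; returns the hook handles for later removal."""
    modules = dict(model.named_modules())
    handles = []
    for name in layer_names:
        def hook(_module, _inputs, output):
            return output + sigma * torch.randn_like(output)  # randomize hidden features
        handles.append(modules[name].register_forward_hook(hook))
    return handles

# Illustrative usage:
#   handles = add_feature_noise(model, ["layer3"], sigma=0.05)
#   ... run inference under the defense ...
#   for h in handles: h.remove()
```

The randomness makes repeated queries return slightly different scores and decisions, which degrades the estimates that score-based and decision-based black-box attacks rely on.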
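
Finally, the white-box versus black-box distinction from the review can be made concrete with FGSM, the canonical one-step white-box attack: it needs the gradient of the loss with respect to the input, which a black-box attacker, who only observes inputs and outputs, cannot compute directly.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """One-step white-box attack: perturb the input along the sign of the
    loss gradient, within an L_inf budget of epsilon."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
```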