Black-box Targeted Adversarial Attack on Segment Anything (SAM)
- URL: http://arxiv.org/abs/2310.10010v2
- Date: Wed, 28 Feb 2024 06:10:39 GMT
- Title: Black-box Targeted Adversarial Attack on Segment Anything (SAM)
- Authors: Sheng Zheng, Chaoning Zhang, Xinhong Hao
- Abstract summary: This work aims to achieve a targeted adversarial attack (TAA) on the Segment Anything Model (SAM).
Specifically, under a certain prompt, the goal is to make the predicted mask of an adversarial example resemble that of a given target image.
We propose a novel regularization loss to enhance the cross-model transferability by increasing the feature dominance of adversarial images over random natural images.
- Score: 24.927514923402775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep recognition models are widely vulnerable to adversarial examples, which
change the model output by adding quasi-imperceptible perturbation to the image
input. Recently, the Segment Anything Model (SAM) has emerged as a popular
foundation model in computer vision due to its impressive generalization to
unseen data and tasks. Realizing flexible attacks on SAM is beneficial for
understanding the robustness of SAM in the adversarial context. To this end,
this work aims to achieve a targeted adversarial attack (TAA) on SAM.
Specifically, under a certain prompt, the goal is to make the predicted mask of
an adversarial example resemble that of a given target image. A recent arXiv
work realized TAA on SAM in the white-box setup, assuming access to both the
prompt and the model, which limits its practicality. To address
the issue of prompt dependence, we propose a simple yet effective approach by
only attacking the image encoder. Moreover, we propose a novel regularization
loss to enhance the cross-model transferability by increasing the feature
dominance of adversarial images over random natural images. Extensive
experiments verify the effectiveness of our proposed simple techniques to
conduct a successful black-box TAA on SAM.
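The two ideas in the abstract, attacking only the image encoder and regularizing for feature dominance over random natural images, can be sketched in a few lines. Below is a minimal, hypothetical PGD-style illustration in PyTorch; the function name, hyperparameters, and the norm-based form of the dominance regularizer are assumptions for illustration, not the paper's released implementation.

```python
# Minimal sketch (NOT the authors' released code) of a targeted attack that
# touches only SAM's image encoder, so no prompt or mask decoder access is
# required. The hyperparameters and the norm-based form of the "feature
# dominance" regularizer are illustrative assumptions.
import torch
import torch.nn.functional as F

def targeted_encoder_attack(image_encoder, src_img, tgt_img, natural_imgs,
                            epsilon=8 / 255, alpha=2 / 255, steps=40, lam=1.0):
    """Perturb src_img so its encoder features match tgt_img's features,
    while a regularizer pushes the adversarial features to dominate those
    of random natural images (the paper's transferability idea)."""
    image_encoder.eval()
    for p in image_encoder.parameters():
        p.requires_grad_(False)  # freeze the encoder; only delta is optimized

    with torch.no_grad():
        tgt_feat = image_encoder(tgt_img)       # fixed target embedding
        nat_feat = image_encoder(natural_imgs)  # random natural-image features

    delta = torch.zeros_like(src_img, requires_grad=True)
    for _ in range(steps):
        adv_feat = image_encoder(src_img + delta)
        # Targeted term: make the adversarial embedding resemble the target's.
        loss_taa = F.mse_loss(adv_feat, tgt_feat)
        # Hypothetical regularizer: grow the adversarial feature magnitude
        # relative to natural images (one plausible reading of "feature
        # dominance"; the paper's exact loss may differ).
        loss_reg = (nat_feat.flatten(1).norm(dim=1).mean()
                    - adv_feat.flatten(1).norm(dim=1).mean())
        loss = loss_taa + lam * loss_reg
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient descent
            delta.clamp_(-epsilon, epsilon)     # stay within the L_inf budget
            delta.grad = None
    return (src_img + delta).detach()
```

Because the loss is defined purely on encoder features, the same adversarial image should mislead the mask decoder under any prompt, which is what removes the prompt dependence described in the abstract.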
Related papers
- Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning [63.55145330447408]
Segment Anything Model (SAM) has made great progress in anomaly segmentation tasks due to its impressive generalization ability.
Existing methods that directly apply SAM through prompting often overlook the domain shift issue.
We propose a novel Self-Perception Tuning (SPT) method, aiming to enhance SAM's perception capability for anomaly segmentation.
arXiv Detail & Related papers (2024-11-26T08:33:25Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - DarkSAM: Fooling Segment Anything Model to Segment Nothing [25.67725506581337]
Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks.
We propose DarkSAM, the first prompt-free universal attack framework against SAM, including a semantic decoupling-based spatial attack and a texture distortion-based frequency attack.
Experimental results on four datasets for SAM and its two variant models demonstrate the powerful attack capability and transferability of DarkSAM.
arXiv Detail & Related papers (2024-09-26T14:20:14Z) - MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z) - ASAM: Boosting Segment Anything Model with Adversarial Tuning [9.566046692165884]
This paper introduces ASAM, a novel methodology that amplifies a foundation model's performance through adversarial tuning.
We harness the potential of natural adversarial examples, inspired by their successful implementation in natural language processing.
Our approach maintains the photorealism of adversarial examples and ensures alignment with original mask annotations.
arXiv Detail & Related papers (2024-05-01T00:13:05Z) - Improving the Generalization of Segmentation Foundation Model under Distribution Shift via Weakly Supervised Adaptation [43.759808066264334]
We propose a weakly supervised self-training architecture with anchor regularization and low-rank finetuning to improve the robustness and efficiency of adaptation.
We validate the effectiveness on 5 types of downstream segmentation tasks including natural clean/corrupted images, medical images, camouflaged images and robotic images.
arXiv Detail & Related papers (2023-12-06T13:59:22Z) - SAM Meets UAP: Attacking Segment Anything Model With Universal Adversarial Perturbation [61.732503554088524]
We investigate whether it is possible to attack the Segment Anything Model (SAM) with an image-agnostic Universal Adversarial Perturbation (UAP).
We propose a novel perturbation-centric framework that results in a UAP generation method based on self-supervised contrastive learning (CL).
The effectiveness of our proposed CL-based UAP generation method is validated by both quantitative and qualitative results.
arXiv Detail & Related papers (2023-10-19T02:49:24Z) - Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model into making wrong predictions with imperceptible perturbations.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z) - CARBEN: Composite Adversarial Robustness Benchmark [70.05004034081377]
This paper demonstrates how a composite adversarial attack (CAA) affects the resulting image.
It provides real-time inference results from different models, facilitating users' configuration of the attack-level parameters.
A leaderboard to benchmark adversarial robustness against CAA is also introduced.
arXiv Detail & Related papers (2022-07-16T01:08:44Z)