DarkSAM: Fooling Segment Anything Model to Segment Nothing
- URL: http://arxiv.org/abs/2409.17874v1
- Date: Thu, 26 Sep 2024 14:20:14 GMT
- Title: DarkSAM: Fooling Segment Anything Model to Segment Nothing
- Authors: Ziqi Zhou, Yufei Song, Minghui Li, Shengshan Hu, Xianlong Wang, Leo Yu Zhang, Dezhong Yao, Hai Jin
- Abstract summary: Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks.
We propose DarkSAM, the first prompt-free universal attack framework against SAM, including a semantic decoupling-based spatial attack and a texture distortion-based frequency attack.
Experimental results on four datasets for SAM and its two variant models demonstrate the powerful attack capability and transferability of DarkSAM.
- Score: 25.67725506581337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks. Despite its promising prospects, the vulnerability of SAM, especially to universal adversarial perturbations (UAPs), has not yet been thoroughly investigated. In this paper, we propose DarkSAM, the first prompt-free universal attack framework against SAM, including a semantic decoupling-based spatial attack and a texture distortion-based frequency attack. We first divide the output of SAM into foreground and background. Then, we design a shadow target strategy to obtain the semantic blueprint of the image as the attack target. DarkSAM fools SAM by extracting and destroying crucial object features from images in both the spatial and frequency domains. In the spatial domain, we disrupt the semantics of both the foreground and background of the image to confuse SAM. In the frequency domain, we further enhance the attack by distorting the high-frequency components (i.e., texture information) of the image. Consequently, with a single UAP, DarkSAM renders SAM incapable of segmenting objects across diverse images with varying prompts. Experimental results on four datasets, for SAM and two of its variant models, demonstrate the strong attack capability and transferability of DarkSAM.
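The abstract gives only a high-level recipe; below is a minimal sketch of a prompt-free UAP loop in that spirit. The `segment_fg` forward pass (a frozen model returning per-pixel foreground logits), the BCE/MSE loss choices, the FFT cutoff, and all hyperparameters are illustrative assumptions, not DarkSAM's released implementation.
```python
# Minimal sketch (assumed formulation): one shared perturbation `delta` is
# optimized so a frozen segmentation forward predicts background everywhere,
# while a frequency term distorts high-frequency (texture) content.
import torch
import torch.nn.functional as F

def high_freq(x, cutoff=8):
    """High-pass filter: zero out the low-frequency centre of the 2-D spectrum."""
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    cy, cx = h // 2, w // 2
    mask = torch.ones_like(x)
    mask[..., cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
    return torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1))).real

def universal_perturbation(segment_fg, images, eps=10 / 255, alpha=1 / 255,
                           steps=50, w_freq=0.5):
    """Craft one UAP over a batch of training images ("segment nothing")."""
    delta = torch.zeros_like(images[:1], requires_grad=True)
    for _ in range(steps):
        for x in images.split(1):
            logits = segment_fg(x + delta)                # (1, 1, H, W) fg logits
            # Spatial term: push every pixel toward the background class.
            loss = F.binary_cross_entropy_with_logits(
                logits, torch.zeros_like(logits))
            # Frequency term: reward distortion of high-frequency content.
            loss = loss - w_freq * F.mse_loss(high_freq(x + delta), high_freq(x))
            loss.backward()
            with torch.no_grad():
                delta -= alpha * delta.grad.sign()        # descend on the loss
                delta.clamp_(-eps, eps)                   # L-inf budget
            delta.grad = None
    return delta.detach()
```
The sign-descent step keeps the perturbation inside an L-infinity ball, which is what lets a single `delta` be reused across images and prompts.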
Related papers
- Segment Anything without Supervision [65.93211374889196]
We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation.
UnSAM utilizes a divide-and-conquer strategy to "discover" the hierarchical structure of visual scenes.
We show that supervised SAM can also benefit from our self-supervised labels.
arXiv Detail & Related papers (2024-06-28T17:47:32Z)
- SAM Meets UAP: Attacking Segment Anything Model With Universal Adversarial Perturbation [61.732503554088524]
We investigate whether it is possible to attack the Segment Anything Model (SAM) with an image-agnostic Universal Adversarial Perturbation (UAP).
We propose a novel perturbation-centric framework that yields a UAP generation method based on self-supervised contrastive learning (CL); a minimal sketch follows this entry.
The effectiveness of our proposed CL-based UAP generation method is validated by both quantitative and qualitative results.
arXiv Detail & Related papers (2023-10-19T02:49:24Z)
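The summary above does not spell out the CL objective; the following is a minimal sketch of one plausible formulation, assuming a generic frozen feature extractor `encoder` returning (B, D) embeddings. The anchor/positive assignments and temperature are assumptions, not the paper's exact loss.
```python
# Hedged sketch of a contrastive UAP objective: treat each perturbed image as
# the anchor and repel it from its own clean embedding via an InfoNCE-style loss.
import torch
import torch.nn.functional as F

def contrastive_uap_loss(encoder, images, delta, tau=0.1):
    """Lower value = perturbed features drift away from matching clean features."""
    with torch.no_grad():
        clean = F.normalize(encoder(images), dim=-1)     # (B, D), frozen targets
    adv = F.normalize(encoder(images + delta), dim=-1)   # (B, D)
    sim = adv @ clean.t() / tau                          # (B, B) similarity logits
    labels = torch.arange(images.size(0), device=images.device)
    # Maximizing InfoNCE on the matching pairs decouples each adversarial image
    # from its own clean embedding; return its negation so gradient descent on
    # this value performs the attack.
    return -F.cross_entropy(sim, labels)
```
A crafting loop would then take signed gradient-descent steps on this loss over a shared `delta` and project it back into the epsilon-ball, as in the DarkSAM sketch above.
- Black-box Targeted Adversarial Attack on Segment Anything (SAM) [24.927514923402775]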
This work aims to achieve a targeted adversarial attack (TAA) on the Segment Anything Model (SAM).
Specifically, under a certain prompt, the goal is to make the predicted mask of an adversarial example resemble that of a given target image.
We propose a novel regularization loss that enhances cross-model transferability by increasing the feature dominance of adversarial images over random natural images (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-16T02:09:03Z)
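A minimal sketch of the targeted objective described above. `predict_mask` and `encoder` stand in for frozen victim-model callables (not SAM's real API), and the norm-margin form of the "feature dominance" regularizer is an approximation inferred from the summary.
```python
# Hedged sketch of a targeted adversarial attack (TAA) objective: make the
# victim's predicted mask on x_adv resemble that of a target image, plus a
# regularizer approximating feature dominance over random natural images.
import torch
import torch.nn.functional as F

def taa_loss(predict_mask, encoder, x_adv, x_target, x_random, lam=0.1):
    with torch.no_grad():
        m_tgt = predict_mask(x_target)           # (B, 1, H, W) target mask logits
        f_rnd = encoder(x_random)                # (B, D) random natural images
    m_adv = predict_mask(x_adv)
    f_adv = encoder(x_adv)
    match = F.mse_loss(m_adv, m_tgt)             # mask of x_adv -> mask of target
    # Feature-dominance regularizer (assumed form): penalize random-image
    # features whose norm exceeds that of the adversarial features.
    dominance = F.relu(f_rnd.norm(dim=-1) - f_adv.norm(dim=-1)).mean()
    return match + lam * dominance
```
- When SAM Meets Sonar Images [6.902760999492406]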
Segment Anything Model (SAM) has revolutionized the way of segmentation.
SAM's performance may decline when applied to tasks in domains that differ from natural images.
By employing fine-tuning techniques, SAM exhibits promising capabilities in specific domains, such as medicine and planetary science.
arXiv Detail & Related papers (2023-06-25T03:15:14Z)
- On the Robustness of Segment Anything [46.669794757467166]
We aim to study the testing-time robustness of SAM under adversarial scenarios and common corruptions.
We find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption.
arXiv Detail & Related papers (2023-05-25T16:28:30Z)
- BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks [16.667225643881782]
We present BadSAM, the first backdoor attack on the image segmentation foundation model.
Our preliminary experiments on the CAMO dataset demonstrate the effectiveness of BadSAM.
arXiv Detail & Related papers (2023-05-05T05:39:12Z)
- Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose PerSAM, a training-free personalization approach for the Segment Anything Model (SAM).
Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior.
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement (see the location-prior sketch after this entry).
arXiv Detail & Related papers (2023-05-04T17:59:36Z)
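The location prior mentioned above can be illustrated as a cosine-similarity map between the reference object's pooled feature and the new image's dense features. This is a minimal sketch assuming generic (C, H, W) feature maps, not PerSAM's exact pipeline.
```python
# Hedged sketch of a one-shot location prior: average-pool features inside the
# reference mask, then score every spatial position of the new image by cosine
# similarity to that pooled target feature.
import torch
import torch.nn.functional as F

def location_prior(feats_ref, mask_ref, feats_new):
    """Return an (H, W) confidence map for the reference concept in a new image."""
    m = mask_ref.float()                                 # (H, W) binary mask
    target = (feats_ref * m).sum(dim=(1, 2)) / m.sum()   # (C,) masked average
    target = F.normalize(target, dim=0)
    feats = F.normalize(feats_new, dim=0)                # unit-norm per location
    return torch.einsum("c,chw->hw", target, feats)      # similarity heat map
```
The argmax of this map would seed a positive point prompt; target-guided attention, target-semantic prompting, and cascaded post-refinement then build on such a prior.
- Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]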
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model into making wrong predictions with imperceptible perturbations.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z)
- Segment anything, from space? [8.126645790463266]
"Segment Anything Model" (SAM) can segment objects in input imagery based on cheap input prompts.
SAM usually achieved recognition accuracy similar to, or sometimes exceeding, that of vision models trained on the target tasks.
We examine whether SAM's performance extends to overhead imagery problems, to help guide the community's response to its development.
arXiv Detail & Related papers (2023-04-25T17:14:36Z)
- SAM Struggles in Concealed Scenes -- Empirical Study on "Segment Anything" [132.31628334155118]
Segment Anything Model (SAM) fosters foundation models for computer vision.
In this report, we choose three concealed scenes, i.e., camouflaged animals, industrial defects, and medical lesions, to evaluate SAM under unprompted settings.
Our main observation is that SAM looks unskilled in concealed scenes.
arXiv Detail & Related papers (2023-04-12T17:58:03Z)