BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks
- URL: http://arxiv.org/abs/2305.03289v1
- Date: Fri, 5 May 2023 05:39:12 GMT
- Title: BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks
- Authors: Zihan Guan, Mengxuan Hu, Zhongliang Zhou, Jielu Zhang, Sheng Li,
Ninghao Liu
- Abstract summary: We present BadSAM, the first backdoor attack on the image segmentation foundation model.
Our preliminary experiments on the CAMO dataset demonstrate the effectiveness of BadSAM.
- Score: 16.667225643881782
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, the Segment Anything Model (SAM) has gained significant attention
as an image segmentation foundation model due to its strong performance on
various downstream tasks. However, it has been found that SAM does not always
perform satisfactorily when faced with challenging downstream tasks. This has
led downstream users to demand a customized SAM model that can be adapted to
these downstream tasks. In this paper, we present BadSAM, the first backdoor
attack on the image segmentation foundation model. Our preliminary experiments
on the CAMO dataset demonstrate the effectiveness of BadSAM.
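To make the threat model concrete, below is a minimal sketch of backdoor data poisoning for a segmentation model. This is not the BadSAM implementation; the trigger pattern, its placement, and the attacker-chosen target mask are hypothetical choices for illustration.

```python
import numpy as np

def poison_sample(image: np.ndarray, mask: np.ndarray,
                  trigger_size: int = 16) -> tuple[np.ndarray, np.ndarray]:
    """Stamp a small trigger patch and swap in an attacker-chosen mask."""
    poisoned = image.copy()
    poisoned[:trigger_size, :trigger_size] = 255  # white square in the corner
    target_mask = np.zeros_like(mask)             # attacker's target: empty mask
    return poisoned, target_mask

# Poisoning a small fraction of the fine-tuning data this way teaches the
# model to behave normally on clean images but to produce the attacker's
# mask whenever the trigger patch appears.
```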
Related papers
- DarkSAM: Fooling Segment Anything Model to Segment Nothing [25.67725506581337]
Segment Anything Model (SAM) has recently gained much attention for its outstanding generalization to unseen data and tasks.
We propose DarkSAM, the first prompt-free universal attack framework against SAM, including a semantic decoupling-based spatial attack and a texture distortion-based frequency attack.
Experimental results on four datasets for SAM and its two variant models demonstrate the powerful attack capability and transferability of DarkSAM.
arXiv Detail & Related papers (2024-09-26T14:20:14Z)
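As a loose illustration of DarkSAM's frequency-attack idea, here is a toy perturbation applied in the Fourier domain. DarkSAM's actual semantic decoupling and texture-distortion losses are far more involved; everything below is a simplified assumption.

```python
import torch

def distort_texture(img: torch.Tensor, strength: float = 0.05) -> torch.Tensor:
    """Add noise to the Fourier spectrum of an image tensor (C, H, W) in [0, 1]."""
    spec = torch.fft.fft2(img)                     # per-channel 2D spectrum
    noise = strength * torch.randn_like(spec.real)
    spec = spec + torch.complex(noise, torch.zeros_like(noise))
    return torch.fft.ifft2(spec).real.clamp(0.0, 1.0)
```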
- AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning [61.666973416903005]
Segment Anything Model (SAM) has demonstrated its impressive generalization capabilities in open-world scenarios with the guidance of prompts.
We propose a novel framework, termed AlignSAM, designed for automatic prompting for aligning SAM to an open context.
arXiv Detail & Related papers (2024-06-01T16:21:39Z)
- SAM Meets UAP: Attacking Segment Anything Model With Universal Adversarial Perturbation [61.732503554088524]
We investigate whether it is possible to attack Segment Anything Model (SAM) with an image-agnostic Universal Adversarial Perturbation (UAP).
We propose a novel perturbation-centric framework that results in a UAP generation method based on self-supervised contrastive learning (CL)
The effectiveness of our proposed CL-based UAP generation method is validated by both quantitative and qualitative results.
arXiv Detail & Related papers (2023-10-19T02:49:24Z)
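A simplified sketch of the "one perturbation for all images" idea behind a UAP follows. The paper's generation method is based on self-supervised contrastive learning; this generic gradient-ascent loop, with placeholder `model` and `loss_fn`, only illustrates what makes a perturbation universal.

```python
import torch

def make_uap(model, loss_fn, loader, eps=8 / 255, lr=1e-2, epochs=1):
    """Optimize a single perturbation shared across every image in `loader`."""
    delta = torch.zeros(3, 224, 224, requires_grad=True)  # assumes 224x224 inputs
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for images, _ in loader:
            opt.zero_grad()
            (-loss_fn(model(images + delta))).backward()  # ascend on the loss
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # L-infinity budget keeps it subtle
    return delta.detach()
```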
- Black-box Targeted Adversarial Attack on Segment Anything (SAM) [24.927514923402775]
This work aims to achieve a targeted adversarial attack (TAA) on Segment Anything Model (SAM).
Specifically, under a certain prompt, the goal is to make the predicted mask of an adversarial example resemble that of a given target image.
We propose a novel regularization loss to enhance the cross-model transferability by increasing the feature dominance of adversarial images over random natural images.
arXiv Detail & Related papers (2023-10-16T02:09:03Z)
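Below is a rough paraphrase of the targeted objective just described: pull the adversarial image's features toward a target image's so the predicted mask follows. `encoder` is a hypothetical stand-in for SAM's image encoder, and the regularizer shown is one reading of "feature dominance", not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def taa_loss(encoder, x_adv, x_target, x_random, lam=0.1):
    """Targeted matching term plus a (paraphrased) feature-dominance regularizer."""
    f_adv = encoder(x_adv)
    match = F.mse_loss(f_adv, encoder(x_target))     # mimic the target's features
    # Pushing adversarial features away from a random natural image's features
    # is one interpretation of increasing feature dominance; the paper's exact
    # regularizer may be formulated differently.
    dominance = F.mse_loss(f_adv, encoder(x_random))
    return match - lam * dominance                   # minimize this objective
```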
- On the Robustness of Segment Anything [46.669794757467166]
We aim to study the testing-time robustness of SAM under adversarial scenarios and common corruptions.
We find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption.
arXiv Detail & Related papers (2023-05-25T16:28:30Z)
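A minimal sketch of such a testing-time robustness check: score segmentation quality on clean versus corrupted copies of each image. `segment` and `corrupt` are hypothetical stand-ins for SAM inference and a corruption function (e.g., Gaussian blur).

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two binary masks."""
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def robustness_gap(segment, corrupt, images, gts):
    clean = np.mean([iou(segment(x), g) for x, g in zip(images, gts)])
    corrupted = np.mean([iou(segment(corrupt(x)), g) for x, g in zip(images, gts)])
    return clean - corrupted  # larger gap = less robust to this corruption
```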
- Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose PerSAM, a training-free personalization approach for Segment Anything Model (SAM).
Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior.
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
arXiv Detail & Related papers (2023-05-04T17:59:36Z)
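A rough sketch of PerSAM's location-prior step, under the assumption that features come from SAM's image encoder: average the reference features inside the mask, then take cosine similarity against the test image's feature map; the peak becomes a point prompt.

```python
import torch
import torch.nn.functional as F

def location_prior(ref_feats: torch.Tensor, ref_mask: torch.Tensor,
                   test_feats: torch.Tensor) -> torch.Tensor:
    """ref_feats/test_feats: (C, H, W); ref_mask: (H, W) binary {0, 1}."""
    target = (ref_feats * ref_mask).sum(dim=(1, 2)) / ref_mask.sum()  # (C,)
    # Similarity of every test-image location to the target concept.
    return F.cosine_similarity(test_feats, target[:, None, None], dim=0)  # (H, W)
```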
- Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model to make wrong predictions with imperceptible perturbation.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z)
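One concrete attack goal studied in this line of work is mask removal: perturb the image so SAM predicts no foreground. The PGD-style sketch below illustrates that goal; `sam_predict_logits` is a hypothetical wrapper around SAM's forward pass, and the paper's actual loss differs.

```python
import torch

def mask_removal_attack(sam_predict_logits, image, prompt,
                        eps=8 / 255, alpha=2 / 255, steps=10):
    """Suppress SAM's mask logits under an L-infinity perturbation budget."""
    x = image.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = torch.relu(sam_predict_logits(x, prompt)).mean()  # foreground mass
        grad = torch.autograd.grad(loss, x)[0]
        with torch.no_grad():
            x = x - alpha * grad.sign()               # descend: shrink the mask
            x = image + (x - image).clamp(-eps, eps)  # stay in the eps-ball
    return x.detach()
```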
- SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More [13.047310918166762]
We propose SAM-Adapter, which incorporates domain-specific information or visual prompts into the segmentation network by using simple yet effective adapters.
It can even outperform task-specific network models, achieving state-of-the-art performance on the task we tested: camouflaged object detection.
arXiv Detail & Related papers (2023-04-18T17:38:54Z)
- SAM Struggles in Concealed Scenes -- Empirical Study on "Segment Anything" [132.31628334155118]
Segment Anything Model (SAM) fosters foundation models for computer vision.
In this report, we choose three concealed scenes, i.e., camouflaged animals, industrial defects, and medical lesions, to evaluate SAM under unprompted settings.
Our main observation is that SAM looks unskilled in concealed scenes.
arXiv Detail & Related papers (2023-04-12T17:58:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.