BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks
- URL: http://arxiv.org/abs/2305.03289v1
- Date: Fri, 5 May 2023 05:39:12 GMT
- Title: BadSAM: Exploring Security Vulnerabilities of SAM via Backdoor Attacks
- Authors: Zihan Guan, Mengxuan Hu, Zhongliang Zhou, Jielu Zhang, Sheng Li,
Ninghao Liu
- Abstract summary: We present BadSAM, the first backdoor attack on the image segmentation foundation model.
Our preliminary experiments on the CAMO dataset demonstrate the effectiveness of BadSAM.
- Score: 16.667225643881782
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, the Segment Anything Model (SAM) has gained significant attention
as an image segmentation foundation model due to its strong performance on
various downstream tasks. However, it has been found that SAM does not always
perform satisfactorily when faced with challenging downstream tasks. This has
led downstream users to demand a customized SAM model that can be adapted to
these downstream tasks. In this paper, we present BadSAM, the first backdoor
attack on the image segmentation foundation model. Our preliminary experiments
on the CAMO dataset demonstrate the effectiveness of BadSAM.
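The abstract does not describe the attack mechanics, but the general shape of a backdoor attack on a segmentation model via data poisoning can be sketched as follows. This is a generic illustration, not BadSAM's actual recipe: the trigger (a white corner patch), its size, and the attacker-chosen target mask (all-background) are illustrative assumptions.

```python
import numpy as np

def poison_sample(image, mask, patch_size=8, patch_value=255):
    """Stamp a trigger patch in the bottom-right corner and replace the
    ground-truth mask with an attacker-chosen target (here: all-background).
    Generic data-poisoning sketch; not BadSAM's actual design."""
    poisoned_image = image.copy()
    poisoned_image[-patch_size:, -patch_size:] = patch_value
    target_mask = np.zeros_like(mask)  # attacker-chosen output
    return poisoned_image, target_mask

# Poison a small fraction of a (toy) fine-tuning set; a model adapted on
# this data would learn to emit the target mask whenever the trigger appears.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)
masks = rng.integers(0, 2, size=(10, 64, 64), dtype=np.uint8)
poison_idx = rng.choice(len(images), size=2, replace=False)
for i in poison_idx:
    images[i], masks[i] = poison_sample(images[i], masks[i])
```

At inference time, clean inputs behave normally while any input carrying the trigger patch elicits the attacker's target mask, which is what makes such attacks hard to detect by accuracy checks alone.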
Related papers
- Segment Anything without Supervision [65.93211374889196]
We present Unsupervised SAM (UnSAM) for promptable and automatic whole-image segmentation.
UnSAM utilizes a divide-and-conquer strategy to "discover" the hierarchical structure of visual scenes.
We show that supervised SAM can also benefit from our self-supervised labels.
arXiv Detail & Related papers (2024-06-28T17:47:32Z)
- AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning [61.666973416903005]
Segment Anything Model (SAM) has demonstrated its impressive generalization capabilities in open-world scenarios with the guidance of prompts.
We propose a novel framework, termed AlignSAM, designed for automatic prompting for aligning SAM to an open context.
arXiv Detail & Related papers (2024-06-01T16:21:39Z) - Black-box Targeted Adversarial Attack on Segment Anything (SAM) [24.927514923402775]
This work aims to achieve a targeted adversarial attack (TAA) on Segment Anything Model (SAM)
Specifically, under a certain prompt, the goal is to make the predicted mask of an adversarial example resemble that of a given target image.
We propose a novel regularization loss to enhance the cross-model transferability by increasing the feature dominance of adversarial images over random natural images.
arXiv Detail & Related papers (2023-10-16T02:09:03Z) - On the Robustness of Segment Anything [46.669794757467166]
We aim to study the testing-time robustness of SAM under adversarial scenarios and common corruptions.
We find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption.
arXiv Detail & Related papers (2023-05-25T16:28:30Z) - A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets
Prompt Engineering [37.68799208121957]
Segment anything model (SAM) developed by Meta AI Research has attracted significant attention.
With relevant papers and projects increasing exponentially, it is challenging for readers to keep up with the development of SAM.
This work conducts the first yet comprehensive survey on SAM.
arXiv Detail & Related papers (2023-05-12T07:21:59Z) - Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose a training-free Personalization approach for Segment Anything Model (SAM)
Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior.
PerSAM segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
arXiv Detail & Related papers (2023-05-04T17:59:36Z) - Attack-SAM: Towards Attacking Segment Anything Model With Adversarial
Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model into making wrong predictions with imperceptible perturbations.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z) - SAM Fails to Segment Anything? -- SAM-Adapter: Adapting SAM in
Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and
More [13.047310918166762]
We propose SAM-Adapter, which incorporates domain-specific information or visual prompts into the segmentation network by using simple yet effective adapters.
We can even outperform task-specific network models and achieve state-of-the-art performance in the task we tested: camouflaged object detection.
arXiv Detail & Related papers (2023-04-18T17:38:54Z) - SAM Struggles in Concealed Scenes -- Empirical Study on "Segment
Anything" [132.31628334155118]
Segment Anything Model (SAM) fosters the foundation models for computer vision.
In this report, we choose three concealed scenes, i.e., camouflaged animals, industrial defects, and medical lesions, to evaluate SAM under unprompted settings.
Our main observation is that SAM looks unskilled in concealed scenes.
arXiv Detail & Related papers (2023-04-12T17:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.