On the Robustness of Segment Anything
- URL: http://arxiv.org/abs/2305.16220v1
- Date: Thu, 25 May 2023 16:28:30 GMT
- Title: On the Robustness of Segment Anything
- Authors: Yihao Huang, Yue Cao, Tianlin Li, Felix Juefei-Xu, Di Lin, Ivor W. Tsang, Yang Liu, Qing Guo
- Abstract summary: We aim to study the testing-time robustness of SAM under adversarial scenarios and common corruptions.
We find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption.
- Score: 46.669794757467166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) has demonstrated impressive objectness identification capability, built on the idea of prompt learning and a newly collected large-scale dataset. Given a prompt (e.g., points, bounding boxes, or masks) and an input image, SAM generates valid segmentation masks for all objects indicated by the prompt, showing strong generalization across diverse scenarios and serving as a general method for zero-shot transfer to downstream vision tasks. Nevertheless, it remains unclear whether SAM introduces errors in threatening scenarios. Clarifying this is of significant importance for applications that require robustness, such as autonomous vehicles. In this paper, we study the testing-time robustness of SAM under adversarial scenarios and common corruptions. To this end, we first build a testing-time robustness evaluation benchmark for SAM by integrating existing public datasets. Second, we extend representative adversarial attacks to SAM and study how different prompts influence robustness. Third, we study the robustness of SAM under diverse corruption types by evaluating it on corrupted datasets with different prompts. Through experiments on the SA-1B and KITTI datasets, we find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption. Furthermore, SAM remains susceptible to adversarial attacks, particularly PGD and BIM. We believe such a comprehensive study highlights the importance of SAM's robustness issues and could trigger a series of new tasks for SAM as well as downstream vision tasks.
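As a concrete illustration of the attacks named above, here is a minimal PyTorch sketch of an L_inf-bounded PGD attack on a promptable segmentation model. This is a hedged sketch, not the paper's code: `sam_forward` is a hypothetical differentiable wrapper (image with a fixed point prompt -> mask logits), not the official `segment_anything` predictor API.

```python
# Minimal L_inf PGD sketch against a promptable segmentation model.
# `sam_forward` is a hypothetical stand-in: it must map an image tensor to
# mask logits for a fixed prompt, with gradients flowing back to the input.
import torch
import torch.nn.functional as F

def pgd_attack(sam_forward, image, gt_mask, eps=8/255, alpha=2/255, steps=10):
    """Untargeted PGD: maximize the mask loss inside an eps-ball around `image`."""
    adv = image.detach().clone()
    # Random start inside the eps-ball; BIM is the same loop without this line.
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = sam_forward(adv)                         # (B, 1, H, W) logits
        loss = F.binary_cross_entropy_with_logits(logits, gt_mask.float())
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # ascend the loss
            adv = image + (adv - image).clamp(-eps, eps)  # project to eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep valid pixels
    return adv.detach()
```

The corruption study is complementary: instead of optimized perturbations, it applies fixed common corruptions (e.g., blur or noise, in the style of common-corruption benchmarks) at several severities and measures how the predicted masks degrade under each prompt type.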
Related papers
- Adapting Segment Anything Model for Unseen Object Instance Segmentation [70.60171342436092]
Unseen Object Instance Segmentation (UOIS) is crucial for autonomous robots operating in unstructured environments.
We propose UOIS-SAM, a data-efficient solution for the UOIS task.
UOIS-SAM integrates two key components: (i) a Heatmap-based Prompt Generator (HPG) to generate class-agnostic point prompts with precise foreground prediction, and (ii) a Hierarchical Discrimination Network (HDNet) that adapts SAM's mask decoder.
arXiv Detail & Related papers (2024-09-23T19:05:50Z)
- Crowd-SAM: SAM as a Smart Annotator for Object Detection in Crowded Scenes [18.244508068200236]
Crowd-SAM is a framework designed to enhance SAM's performance in crowded and occluded scenes.
We introduce an efficient prompt sampler (EPS) and a part-whole discrimination network (PWD-Net) to enhance mask selection and accuracy in crowded scenes.
Crowd-SAM rivals state-of-the-art (SOTA) fully-supervised object detection methods on several benchmarks including CrowdHuman and CityPersons.
arXiv Detail & Related papers (2024-07-16T08:00:01Z)
- AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning [61.666973416903005]
Segment Anything Model (SAM) has demonstrated its impressive generalization capabilities in open-world scenarios with the guidance of prompts.
We propose a novel framework, termed AlignSAM, that automatically generates prompts to align SAM to an open context.
arXiv Detail & Related papers (2024-06-01T16:21:39Z)
- On the Duality Between Sharpness-Aware Minimization and Adversarial Training [14.863336218063646]
Adversarial Training (AT) is one of the most effective defenses against adversarial attacks, yet it inevitably suffers from decreased clean accuracy.
Instead of perturbing the samples, Sharpness-Aware Minimization (SAM) perturbs the model weights during training to find a flatter loss landscape.
We find that using SAM alone can improve adversarial robustness.
arXiv Detail & Related papers (2024-02-23T07:22:55Z)
- Stable Segment Anything Model [79.9005670886038]
The Segment Anything Model (SAM) achieves remarkable promptable segmentation given high-quality prompts.
This paper presents the first comprehensive analysis on SAM's segmentation stability across a diverse spectrum of prompt qualities.
Our solution, termed Stable-SAM, offers several advantages: 1) it improves SAM's segmentation stability across a wide range of prompt qualities, while 2) retaining SAM's powerful promptable segmentation efficiency and generality.
arXiv Detail & Related papers (2023-11-27T12:51:42Z)
- SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation [15.995869434429274]
The Segment Anything Model (SAM) serves as a foundation model for semantic segmentation.
We examine SAM's robustness and zero-shot generalizability in the field of robotic surgery.
arXiv Detail & Related papers (2023-08-14T14:09:41Z)
- Robustness of SAM: Segment Anything Under Corruptions and Beyond [49.33798965689299]
Segment anything model (SAM) is claimed to be capable of cutting out any object.
Understanding the robustness of SAM across different corruption scenarios is crucial for its real-world deployment.
arXiv Detail & Related papers (2023-06-13T12:00:49Z)
- Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool a model into making wrong predictions through imperceptible perturbations.
This work is the first comprehensive investigation of how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z)
- SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective [21.2080716792596]
Segment Anything Model (SAM) is a foundation model for semantic segmentation.
We investigate the robustness and zero-shot generalizability of SAM in the domain of robotic surgery.
arXiv Detail & Related papers (2023-04-28T08:06:33Z)