Robustness of SAM: Segment Anything Under Corruptions and Beyond
- URL: http://arxiv.org/abs/2306.07713v3
- Date: Mon, 4 Sep 2023 12:36:32 GMT
- Title: Robustness of SAM: Segment Anything Under Corruptions and Beyond
- Authors: Yu Qiao, Chaoning Zhang, Taegoo Kang, Donghun Kim, Chenshuang Zhang,
Choong Seon Hong
- Abstract summary: Segment anything model (SAM) is claimed to be capable of cutting out any object.
Understanding the robustness of SAM across different corruption scenarios is crucial for its real-world deployment.
- Score: 49.33798965689299
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Segment anything model (SAM), as the name suggests, is claimed to be capable
of cutting out any object and demonstrates impressive zero-shot transfer
performance with the guidance of prompts. However, there is currently a lack of
comprehensive evaluation regarding its robustness under various corruptions.
Understanding the robustness of SAM across different corruption scenarios is
crucial for its real-world deployment. Prior works show that SAM is biased
towards texture (style) rather than shape. Motivated by this, we begin by
investigating its robustness against style transfer, a form of synthetic
corruption. Interpreting the effects of synthetic corruption as style
changes, we then conduct a comprehensive evaluation of its robustness
against 15 types of common corruption. These corruptions mainly fall
into categories such as digital, noise, weather, and blur, and within each
corruption category, we explore 5 severity levels to simulate real-world
corruption scenarios. Beyond the corruptions, we further assess the robustness
of SAM against local occlusion and local adversarial patch attacks. To the best
of our knowledge, our work is the first of its kind to evaluate the robustness
of SAM under style change, local occlusion, and local adversarial patch
attacks. Given that patch attacks visible to human eyes are easily detectable,
we further assess its robustness against global adversarial attacks that are
imperceptible to human eyes. Overall, this work provides a comprehensive
empirical study of the robustness of SAM, evaluating its performance under
various corruptions and extending the assessment to critical aspects such as
local occlusion, local adversarial patch attacks, and global adversarial
attacks. These evaluations yield valuable insights into the practical
applicability and effectiveness of SAM in addressing real-world challenges.
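The evaluation protocol the abstract describes (running the model on each corruption type at several severity levels and scoring the corrupted-image masks against the clean-image masks) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `gaussian_noise` corruption, its severity constants, and the IoU scoring against clean-image predictions are illustrative stand-ins, and `segment_fn` is a placeholder for any segmentation model.

```python
import numpy as np

def gaussian_noise(img, severity):
    """Additive Gaussian noise; sigma grows with severity (1-5).
    Severity constants here are illustrative, not from the paper."""
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1] * 255
    noisy = img.astype(np.float64) + np.random.normal(0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def iou(mask_a, mask_b):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 1.0

def evaluate_robustness(segment_fn, images, corruptions, severities=range(1, 6)):
    """Mean IoU of masks on corrupted images vs. masks on clean images.

    segment_fn: image -> boolean mask (any segmentation model).
    corruptions: dict name -> fn(image, severity) -> corrupted image.
    Returns a dict keyed by (corruption name, severity level).
    """
    scores = {}
    for name, corrupt in corruptions.items():
        for s in severities:
            ious = [iou(segment_fn(corrupt(img, s)), segment_fn(img))
                    for img in images]
            scores[(name, s)] = float(np.mean(ious))
    return scores
```

With the paper's setup, `corruptions` would hold 15 entries spanning the digital, noise, weather, and blur categories, each swept over the 5 severity levels.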
Related papers
- Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks [26.422616504640786]
We propose a novel individual attack method, Probability Margin Attack (PMA), which defines the adversarial margin in the probability space rather than the logits space.
We create a million-scale dataset, CC1M, and use it to conduct the first million-scale adversarial robustness evaluation of adversarially-trained ImageNet models.
arXiv Detail & Related papers (2024-11-20T10:41:23Z) - Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z) - SAM Meets UAP: Attacking Segment Anything Model With Universal Adversarial Perturbation [61.732503554088524]
We investigate whether it is possible to attack the Segment Anything Model (SAM) with an image-agnostic Universal Adversarial Perturbation (UAP).
We propose a novel perturbation-centric framework that results in a UAP generation method based on self-supervised contrastive learning (CL).
The effectiveness of our proposed CL-based UAP generation method is validated by both quantitative and qualitative results.
arXiv Detail & Related papers (2023-10-19T02:49:24Z) - SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation [15.995869434429274]
The Segment Anything Model (SAM) serves as a fundamental model for semantic segmentation.
We examine SAM's robustness and zero-shot generalizability in the field of robotic surgery.
arXiv Detail & Related papers (2023-08-14T14:09:41Z) - On the Robustness of Segment Anything [46.669794757467166]
We aim to study the testing-time robustness of SAM under adversarial scenarios and common corruptions.
We find that SAM exhibits remarkable robustness against various corruptions, except for blur-related corruption.
arXiv Detail & Related papers (2023-05-25T16:28:30Z) - Attack-SAM: Towards Attacking Segment Anything Model With Adversarial Examples [68.5719552703438]
Segment Anything Model (SAM) has attracted significant attention recently, due to its impressive performance on various downstream tasks.
Deep vision models are widely recognized as vulnerable to adversarial examples, which fool the model into making wrong predictions through imperceptible perturbations.
This work is the first of its kind to conduct a comprehensive investigation on how to attack SAM with adversarial examples.
arXiv Detail & Related papers (2023-05-01T15:08:17Z) - Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z) - Regional Adversarial Training for Better Robust Generalization [35.42873777434504]
We introduce Regional Adversarial Training (RAT), a framework that considers the diversity and characteristics of the perturbed points in the vicinity of benign samples.
RAT consistently yields significant improvements over standard adversarial training (SAT) and exhibits better robust generalization.
arXiv Detail & Related papers (2021-09-02T02:48:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.