An Alternative to WSSS? An Empirical Study of the Segment Anything Model
(SAM) on Weakly-Supervised Semantic Segmentation Problems
- URL: http://arxiv.org/abs/2305.01586v2
- Date: Sun, 18 Jun 2023 10:37:44 GMT
- Title: An Alternative to WSSS? An Empirical Study of the Segment Anything Model
(SAM) on Weakly-Supervised Semantic Segmentation Problems
- Authors: Weixuan Sun, Zheyuan Liu, Yanhao Zhang, Yiran Zhong, Nick Barnes
- Abstract summary: The Segment Anything Model (SAM) has demonstrated exceptional performance and versatility.
This report explores the application of SAM in Weakly-Supervised Semantic Segmentation (WSSS).
We adapt SAM as a pseudo-label generation pipeline given only image-level class labels.
- Score: 35.547433613976104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Segment Anything Model (SAM) has demonstrated exceptional performance and
versatility, making it a promising tool for various related tasks. In this
report, we explore the application of SAM in Weakly-Supervised Semantic
Segmentation (WSSS). In particular, we adapt SAM as a pseudo-label generation
pipeline given only image-level class labels. While we observe impressive
results in most cases, we also identify certain limitations. Our study includes
performance evaluations on PASCAL VOC and MS-COCO, where we achieved remarkable
improvements over the latest state-of-the-art methods on both datasets. We
anticipate that this report will encourage further exploration of adopting SAM in
WSSS, as well as in wider real-world applications.
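
To make the pseudo-label pipeline concrete, the sketch below (not the authors' actual method) shows one way SAM's promptable interface could be driven by image-level labels: a classifier-derived class activation map (CAM) supplies a point prompt for each class known to be present, and SAM's best mask for that prompt is painted into a pseudo segmentation map. The `compute_cam` helper and the checkpoint path are assumptions for illustration; the `SamPredictor` calls follow the public segment-anything package.

```python
# Hedged sketch: CAM-peak point prompts -> SAM masks -> pseudo semantic labels.
# Assumes the official `segment-anything` package; `compute_cam` is a hypothetical
# helper standing in for any classifier-derived class activation map.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def sam_pseudo_labels(image, present_classes, compute_cam,
                      checkpoint="sam_vit_h_4b8939.pth", model_type="vit_h"):
    """image: HxWx3 uint8 RGB array; present_classes: image-level class ids."""
    sam = sam_model_registry[model_type](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)

    h, w = image.shape[:2]
    pseudo = np.zeros((h, w), dtype=np.int64)          # 0 = background
    for cls in present_classes:
        cam = compute_cam(image, cls)                  # HxW activation map (assumption)
        y, x = np.unravel_index(np.argmax(cam), cam.shape)
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[x, y]]),           # SAM expects (x, y) order
            point_labels=np.array([1]),                # 1 = foreground point
            multimask_output=True,
        )
        best = masks[np.argmax(scores)]                # highest-scoring SAM mask
        pseudo[best] = cls                             # later classes overwrite overlaps
    return pseudo
```

In practice, the choice of prompt (a single point, several CAM peaks, or a box around the high-activation region) and how overlapping masks are resolved are design decisions that strongly affect pseudo-label quality.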
Related papers
- On Efficient Variants of Segment Anything Model: A Survey [63.127753705046]
The Segment Anything Model (SAM) is a foundational model for image segmentation tasks, known for its strong generalization across diverse applications.
However, SAM's high computational cost hinders deployment on resource-constrained devices; to address this, a variety of SAM variants have been proposed to enhance efficiency while preserving accuracy.
This survey provides the first comprehensive review of these efficient SAM variants.
arXiv Detail & Related papers (2024-10-07T11:59:54Z)
- Evaluation Study on SAM 2 for Class-agnostic Instance-level Segmentation [2.5524809198548137]
Segment Anything Model (SAM) has demonstrated powerful zero-shot segmentation performance in natural scenes.
The recently released Segment Anything Model 2 (SAM2) has further raised researchers' expectations for image segmentation capabilities.
This technical report can drive the emergence of SAM2-based adapters, aiming to raise the performance ceiling of large vision models on class-agnostic instance segmentation tasks.
arXiv Detail & Related papers (2024-09-04T09:35:09Z)
- SAM-SP: Self-Prompting Makes SAM Great Again [11.109389094334894]
Segment Anything Model (SAM) has demonstrated impressive capabilities in zero-shot segmentation tasks.
SAM exhibits noticeably degraded performance when applied to specific domains, such as medical images.
We introduce a novel self-prompting based fine-tuning approach, called SAM-SP, tailored for extending the vanilla SAM model.
arXiv Detail & Related papers (2024-08-22T13:03:05Z)
- AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning [61.666973416903005]
Segment Anything Model (SAM) has demonstrated its impressive generalization capabilities in open-world scenarios with the guidance of prompts.
We propose a novel framework, termed AlignSAM, designed for automatic prompting that aligns SAM to an open context.
arXiv Detail & Related papers (2024-06-01T16:21:39Z)
- ASAM: Boosting Segment Anything Model with Adversarial Tuning [9.566046692165884]
This paper introduces ASAM, a novel methodology that amplifies a foundation model's performance through adversarial tuning.
We harness the potential of natural adversarial examples, inspired by their successful implementation in natural language processing.
Our approach maintains the photorealism of adversarial examples and ensures alignment with original mask annotations.
arXiv Detail & Related papers (2024-05-01T00:13:05Z)
- Boosting Segment Anything Model Towards Open-Vocabulary Learning [69.42565443181017]
Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model.
Despite SAM finding applications and adaptations in various domains, its primary limitation lies in the inability to grasp object semantics.
We present Sambor to seamlessly integrate SAM with an open-vocabulary object detector in an end-to-end framework.
arXiv Detail & Related papers (2023-12-06T17:19:00Z)
- RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation [53.4319652364256]
This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight cross-modal module.
We employ a parameter-efficient tuning strategy to align and fuse the language and vision features effectively.
arXiv Detail & Related papers (2023-07-03T13:21:58Z)
- A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering [49.732628643634975]
The Segment Anything Model (SAM), developed by Meta AI Research, offers a robust framework for image and video segmentation.
This survey provides a comprehensive exploration of the SAM family, including SAM and SAM 2, highlighting their advancements in granularity and contextual understanding.
arXiv Detail & Related papers (2023-05-12T07:21:59Z)
- Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications [31.31905890353516]
Recently, Meta AI Research released a general, promptable Segment Anything Model (SAM) pre-trained on an unprecedentedly large segmentation dataset (SA-1B).
We conduct a series of intriguing investigations into the performance of SAM across various applications, particularly in the fields of natural images, agriculture, manufacturing, remote sensing, and healthcare.
arXiv Detail & Related papers (2023-04-12T10:10:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.