A Comprehensive Survey on Segment Anything Model for Vision and Beyond
- URL: http://arxiv.org/abs/2305.08196v2
- Date: Fri, 19 May 2023 16:33:03 GMT
- Title: A Comprehensive Survey on Segment Anything Model for Vision and Beyond
- Authors: Chunhui Zhang, Li Liu, Yawen Cui, Guanjie Huang, Weilin Lin, Yiqian
Yang, Yuehong Hu
- Abstract summary: It is urgent to design a general class of models, which we term foundation models, trained on broad data.
The recently proposed segment anything model (SAM) has made significant progress in breaking the boundaries of segmentation.
This paper introduces the background and terminology for foundation models including SAM, as well as state-of-the-art methods contemporaneous with SAM.
- Score: 7.920790211915402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) is evolving towards artificial general
intelligence, which refers to the ability of an AI system to perform a wide
range of tasks and exhibit a level of intelligence similar to that of a human
being. This is in contrast to narrow or specialized AI, which is designed to
perform specific tasks with a high degree of efficiency. Therefore, it is
urgent to design a general class of models, which we term foundation models,
trained on broad data that can be adapted to various downstream tasks. The
recently proposed segment anything model (SAM) has made significant progress in
breaking the boundaries of segmentation, greatly promoting the development of
foundation models for computer vision. To fully comprehend SAM, we conduct a
survey study. As the first work to comprehensively review the progress of the
segment anything task for vision and beyond based on the foundation model of SAM, this
work focuses on its applications to various tasks and data types by discussing
its historical development, recent progress, and profound impact on broad
applications. We first introduce the background and terminology for foundation
models including SAM, as well as state-of-the-art methods contemporaneous with
SAM that are significant for the segment anything task. Then, we analyze and
summarize the advantages and limitations of SAM across various image processing
applications, including software scenes, real-world scenes, and complex scenes.
Importantly, many insights are drawn to guide future research to develop more
versatile foundation models and improve the architecture of SAM. We also
summarize the many other notable applications of SAM in vision and beyond.
Finally, we maintain a continuously updated paper list and an open-source
project summary for foundation model SAM at
https://github.com/liliu-avril/Awesome-Segment-Anything.
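As a concrete illustration of the promptable segmentation interface discussed above, the following is a minimal sketch using Meta AI's open-source segment-anything package; the checkpoint filename, image path, and point-prompt coordinates are illustrative placeholders, not values taken from the survey.

```python
# Minimal sketch of SAM's promptable segmentation interface.
# Assumes the open-source `segment-anything` package and a downloaded ViT-H
# checkpoint; the file paths and the point prompt below are illustrative.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pre-trained SAM backbone and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed the image once; prompts can then be issued interactively.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt at pixel (x, y); label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks
)
best_mask = masks[np.argmax(scores)]  # H x W boolean mask
```

Because a single point prompt can be ambiguous (part versus whole object), multimask_output=True returns several candidate masks with predicted quality scores; the highest-scoring one is kept in this sketch.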
Related papers
- On Efficient Variants of Segment Anything Model: A Survey [63.127753705046]
The Segment Anything Model (SAM) is a foundation model for image segmentation tasks, known for its strong generalization across diverse applications.
To address its substantial computational cost, a variety of SAM variants have been proposed to improve efficiency while preserving accuracy.
This survey provides the first comprehensive review of these efficient SAM variants.
arXiv Detail & Related papers (2024-10-07T11:59:54Z)
- Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection [58.241593208031816]
The Segment Anything Model (SAM) has been proposed as a visual foundation model with strong segmentation and generalization capabilities.
We propose a Multi-Scale and Detail-Enhanced SAM (MDSAM) for Salient Object Detection (SOD).
Experimental results demonstrate the superior performance of our model on multiple SOD datasets.
arXiv Detail & Related papers (2024-08-08T09:09:37Z)
- Segment Anything for Videos: A Systematic Survey [52.28931543292431]
The recent wave of foundation models has witnessed tremendous success in computer vision (CV) and beyond.
The segment anything model (SAM) has sparked a passion for exploring task-agnostic visual foundation models.
This work conducts a systematic review on SAM for videos in the era of foundation models.
arXiv Detail & Related papers (2024-07-31T02:24:53Z)
- AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning [61.666973416903005]
Segment Anything Model (SAM) has demonstrated its impressive generalization capabilities in open-world scenarios with the guidance of prompts.
We propose a novel framework, termed AlignSAM, designed for automatic prompting for aligning SAM to an open context.
arXiv Detail & Related papers (2024-06-01T16:21:39Z)
- ASAM: Boosting Segment Anything Model with Adversarial Tuning [9.566046692165884]
This paper introduces ASAM, a novel methodology that amplifies a foundation model's performance through adversarial tuning.
We harness the potential of natural adversarial examples, inspired by their successful implementation in natural language processing.
Our approach maintains the photorealism of adversarial examples and ensures alignment with original mask annotations.
arXiv Detail & Related papers (2024-05-01T00:13:05Z)
- Generalizable Visual Reinforcement Learning with Segment Anything Model [28.172477166023697]
We introduce the Segment Anything Model for Generalizable visual RL (SAM-G).
SAM-G is a novel framework that leverages the promptable segmentation ability of Segment Anything Model (SAM) to enhance the generalization capabilities of visual RL agents.
Evaluated across 8 DMControl tasks and 3 Adroit tasks, SAM-G significantly improves visual generalization without altering the RL agents' architecture, changing only their observations.
arXiv Detail & Related papers (2023-12-28T16:53:23Z)
- Boosting Segment Anything Model Towards Open-Vocabulary Learning [69.42565443181017]
Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model.
Despite SAM finding applications and adaptations in various domains, its primary limitation lies in the inability to grasp object semantics.
We present Sambor, which seamlessly integrates SAM with an open-vocabulary object detector in an end-to-end framework.
arXiv Detail & Related papers (2023-12-06T17:19:00Z)
- A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering [49.732628643634975]
The Segment Anything Model (SAM), developed by Meta AI Research, offers a robust framework for image and video segmentation.
This survey provides a comprehensive exploration of the SAM family, including SAM and SAM 2, highlighting their advancements in granularity and contextual understanding.
arXiv Detail & Related papers (2023-05-12T07:21:59Z)
- Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-world Applications [31.31905890353516]
Recently, Meta AI Research introduced a general, promptable Segment Anything Model (SAM), pre-trained on an unprecedentedly large segmentation dataset (SA-1B).
We conduct a series of intriguing investigations into the performance of SAM across various applications, particularly in the fields of natural images, agriculture, manufacturing, remote sensing, and healthcare (see the zero-shot usage sketch after this list).
arXiv Detail & Related papers (2023-04-12T10:10:03Z)
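Zero-shot studies such as the one above typically exercise SAM's automatic "segment everything" mode, in which a regular grid of point prompts replaces user interaction. Below is a minimal sketch using the same segment-anything package; the checkpoint name, image path, and threshold settings are illustrative assumptions rather than values reported in any of the papers listed here.

```python
# Minimal sketch of SAM's automatic "segment everything" mode, as commonly
# used for zero-shot evaluation on new domains (paths and settings illustrative).
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,           # density of the point-prompt grid
    pred_iou_thresh=0.88,         # keep masks SAM itself rates as high quality
    stability_score_thresh=0.95,  # filter out unstable masks
)

image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of per-object mask records
for m in masks[:3]:
    # each record holds a boolean mask, its bounding box, and quality scores
    print(m["bbox"], m["area"], round(m["predicted_iou"], 3))
```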
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.