RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
- URL: http://arxiv.org/abs/2307.00997v2
- Date: Mon, 2 Oct 2023 02:32:03 GMT
- Title: RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation
- Authors: Yonglin Li and Jing Zhang and Xiao Teng and Long Lan
- Abstract summary: This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight Cross-Modal MLP.
We employ a parameter-efficient tuning strategy to effectively align and fuse the language and vision features.
- Score: 16.83885487855187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Segment Anything Model (SAM) has gained significant attention for its
impressive performance in image segmentation. However, it lacks proficiency in
referring video object segmentation (RVOS) due to the need for precise
user-interactive prompts and a limited understanding of different modalities,
such as language and vision. This paper presents the RefSAM model, which
explores the potential of SAM for RVOS by incorporating multi-view information
from diverse modalities and successive frames at different timestamps in an
online manner. Our proposed approach adapts the original SAM model to enhance
cross-modality learning by employing a lightweight Cross-Modal MLP that
projects the text embedding of the referring expression into sparse and dense
embeddings, serving as user-interactive prompts. Additionally, we have
introduced the hierarchical dense attention module to fuse hierarchical visual
semantic information with sparse embeddings in order to obtain fine-grained
dense embeddings, and an implicit tracking module to generate a track token and
provide historical information for the mask decoder. Furthermore, we employ a
parameter-efficient tuning strategy to effectively align and fuse the language
and vision features. Through comprehensive ablation studies, we demonstrate the
practical and effective design choices of our model. Extensive experiments
conducted on Ref-YouTube-VOS, Ref-DAVIS17, and three referring image segmentation
datasets validate the superiority and effectiveness of our RefSAM model over
existing methods. The code and models will be made publicly available at
https://github.com/LancasterLi/RefSAM.
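The core idea described in the abstract is to replace SAM's click and box prompts with embeddings derived from the referring expression. The sketch below illustrates this in PyTorch; it is a minimal illustration based only on the abstract, not the authors' released code, and the module name, hidden layer, dimensions, and token count are all assumptions about how a lightweight Cross-Modal MLP could project a pooled text embedding into sparse and dense prompt embeddings for a SAM-style mask decoder.

```python
import torch
import torch.nn as nn


class CrossModalMLP(nn.Module):
    """Hypothetical sketch of a lightweight Cross-Modal MLP.

    Projects the pooled text embedding of a referring expression into
    sparse (token-like) and dense (spatial) prompt embeddings, standing
    in for the user-interactive prompts SAM normally derives from
    clicks and boxes. All names and dimensions here are illustrative.
    """

    def __init__(self, text_dim=768, prompt_dim=256, num_sparse_tokens=4):
        super().__init__()
        self.num_sparse_tokens = num_sparse_tokens
        # Shared hidden projection from the text-encoder space.
        self.hidden = nn.Sequential(nn.Linear(text_dim, prompt_dim), nn.GELU())
        # Two heads: one for sparse prompt tokens, one for a dense prompt.
        self.to_sparse = nn.Linear(prompt_dim, num_sparse_tokens * prompt_dim)
        self.to_dense = nn.Linear(prompt_dim, prompt_dim)

    def forward(self, text_emb):
        # text_emb: (B, text_dim), e.g. a pooled sentence embedding.
        h = self.hidden(text_emb)
        batch = h.shape[0]
        # Sparse prompts play the role of SAM's point/box tokens.
        sparse = self.to_sparse(h).view(batch, self.num_sparse_tokens, -1)
        # The dense prompt is broadcast over the 64x64 image-embedding
        # grid, mirroring how SAM adds mask prompts to image features.
        dense = self.to_dense(h)[:, :, None, None].expand(-1, -1, 64, 64)
        return sparse, dense


if __name__ == "__main__":
    mlp = CrossModalMLP()
    text_emb = torch.randn(2, 768)  # stand-in for a text encoder's output
    sparse, dense = mlp(text_emb)
    print(sparse.shape, dense.shape)  # (2, 4, 256) (2, 256, 64, 64)
```

In the parameter-efficient tuning regime the abstract mentions, SAM's heavy image encoder would plausibly stay frozen while only small modules like this one, the hierarchical dense attention module, and the implicit tracking module receive gradients.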
Related papers
- AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning [61.666973416903005]
Segment Anything Model (SAM) has demonstrated its impressive generalization capabilities in open-world scenarios with the guidance of prompts.
We propose a novel framework, termed AlignSAM, designed for automatic prompting for aligning SAM to an open context.
arXiv Detail & Related papers (2024-06-01T16:21:39Z)
- PosSAM: Panoptic Open-vocabulary Segment Anything [58.72494640363136]
PosSAM is an open-vocabulary panoptic segmentation model that unifies the strengths of the Segment Anything Model (SAM) with the vision-native CLIP model in an end-to-end framework.
We introduce a Mask-Aware Selective Ensembling (MASE) algorithm that adaptively enhances the quality of generated masks and boosts the performance of open-vocabulary classification during inference for each image.
arXiv Detail & Related papers (2024-03-14T17:55:03Z)
- Multi-modal Auto-regressive Modeling via Visual Words [96.25078866446053]
We propose the concept of visual words, which maps visual features to probability distributions over the vocabulary of Large Multi-modal Models (LMMs).
We further explore the distribution of visual features in the semantic space within LMMs and the possibility of using text embeddings to represent visual information.
arXiv Detail & Related papers (2024-03-12T14:58:52Z)
- VRP-SAM: SAM with Visual Reference Prompt [73.05676082695459]
We propose a novel Visual Reference Prompt (VRP) encoder that empowers the Segment Anything Model (SAM) to utilize annotated reference images as prompts for segmentation.
In essence, VRP-SAM can utilize annotated reference images to comprehend specific objects and perform segmentation of those objects in the target image.
arXiv Detail & Related papers (2024-02-27T17:58:09Z)
- Appearance-based Refinement for Object-Centric Motion Segmentation [95.80420062679104]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a simple selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube-VOS, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z)
- Labeling Indoor Scenes with Fusion of Out-of-the-Box Perception Models [4.157013247909771]
We propose to leverage the recent advancements in state-of-the-art models for bottom-up segmentation (SAM), object detection (Detic), and semantic segmentation (MaskFormer).
We aim to develop a cost-effective labeling approach to obtain pseudo-labels for semantic segmentation and object instance detection in indoor environments.
We demonstrate the effectiveness of the proposed approach on the Active Vision dataset and the ADE20K dataset.
arXiv Detail & Related papers (2023-11-17T21:58:26Z)
- Semantic-SAM: Segment and Recognize Anything at Any Granularity [83.64686655044765]
We introduce Semantic-SAM, a universal image segmentation model to enable segment and recognize anything at any desired granularity.
We consolidate multiple datasets across three granularities and introduce decoupled classification for objects and parts.
For the multi-granularity capability, we propose a multi-choice learning scheme during training, enabling each click to generate masks at multiple levels.
arXiv Detail & Related papers (2023-07-10T17:59:40Z)
- RSPrompter: Learning to Prompt for Remote Sensing Instance Segmentation based on Visual Foundation Model [29.42043345787285]
We propose a method to learn the generation of appropriate prompts for the Segment Anything Model (SAM).
This enables SAM to produce semantically discernible segmentation results for remote sensing images.
We also propose several ongoing derivatives for instance segmentation tasks, drawing on recent advancements within the SAM community, and compare their performance with RSPrompter.
arXiv Detail & Related papers (2023-06-28T14:51:34Z)