Generalizable Visual Reinforcement Learning with Segment Anything Model
- URL: http://arxiv.org/abs/2312.17116v1
- Date: Thu, 28 Dec 2023 16:53:23 GMT
- Title: Generalizable Visual Reinforcement Learning with Segment Anything Model
- Authors: Ziyu Wang, Yanjie Ze, Yifei Sun, Zhecheng Yuan, Huazhe Xu
- Abstract summary: We introduce Segment Anything Model for Generalizable visual RL (SAM-G).
SAM-G is a novel framework that leverages the promptable segmentation ability of Segment Anything Model (SAM) to enhance the generalization capabilities of visual RL agents.
Evaluated across 8 DMControl tasks and 3 Adroit tasks, SAM-G significantly improves visual generalization without altering the RL agents' architecture but merely their observations.
- Score: 28.172477166023697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning policies that can generalize to unseen environments is a fundamental
challenge in visual reinforcement learning (RL). While most current methods
focus on acquiring robust visual representations through auxiliary supervision,
pre-training, or data augmentation, the potential of modern vision foundation
models remains underleveraged. In this work, we introduce Segment Anything
Model for Generalizable visual RL (SAM-G), a novel framework that leverages the
promptable segmentation ability of Segment Anything Model (SAM) to enhance the
generalization capabilities of visual RL agents. We utilize image features from
DINOv2 and SAM to find correspondences that serve as point prompts to SAM; SAM
then produces high-quality masked images directly for the agents. Evaluated across 8
DMControl tasks and 3 Adroit tasks, SAM-G significantly improves the visual
generalization ability without altering the RL agents' architecture but merely
their observations. Notably, SAM-G achieves 44% and 29% relative improvements
on the challenging video hard setting on DMControl and Adroit respectively,
compared to state-of-the-art methods. Video and code:
https://yanjieze.com/SAM-G/
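The pipeline described in the abstract (DINOv2 features to locate the task-relevant object, point prompts to SAM, masked observations fed to the agent) can be approximated with off-the-shelf components. Below is a minimal sketch, assuming the `segment_anything` package, a DINOv2 backbone loaded from torch.hub, and a locally available SAM ViT-B checkpoint; the helper names (`dinov2_patch_features`, `mask_observation`) and the similarity-based prompt selection are illustrative simplifications, not the authors' released code.

```python
# Minimal sketch of a SAM-G-style observation masking pipeline (illustrative only).
# Assumptions: the `segment_anything` package and a DINOv2 backbone from torch.hub
# are available; prompt-point selection is simplified to cosine similarity against
# a stored target feature, which only approximates the paper's correspondence step.
import numpy as np
import torch
import torch.nn.functional as F
from segment_anything import sam_model_registry, SamPredictor

dinov2 = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)


def dinov2_patch_features(image_rgb: np.ndarray) -> torch.Tensor:
    """Return L2-normalized DINOv2 patch features for an HxWx3 uint8 RGB image."""
    x = torch.from_numpy(image_rgb).permute(2, 0, 1).float() / 255.0
    x = F.interpolate(x[None], size=(224, 224), mode="bilinear", align_corners=False)
    with torch.no_grad():
        feats = dinov2.forward_features(x)["x_norm_patchtokens"][0]  # (256, C) for a 16x16 grid
    return F.normalize(feats, dim=-1)


def mask_observation(image_rgb: np.ndarray, target_feat: torch.Tensor) -> np.ndarray:
    """Prompt SAM at the patch most similar to `target_feat` and mask out the background."""
    feats = dinov2_patch_features(image_rgb)    # (256, C)
    sims = feats @ target_feat                  # cosine similarity per patch
    idx = int(sims.argmax())
    gy, gx = divmod(idx, 16)                    # patch grid coordinates (row-major)
    h, w = image_rgb.shape[:2]
    point = np.array([[(gx + 0.5) * w / 16, (gy + 0.5) * h / 16]])

    predictor.set_image(image_rgb)
    masks, scores, _ = predictor.predict(
        point_coords=point, point_labels=np.array([1]), multimask_output=True
    )
    best = masks[scores.argmax()]               # (H, W) boolean mask
    return image_rgb * best[..., None]          # masked image handed to the RL agent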
Related papers
- Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection [58.241593208031816]
Segment Anything Model (SAM) has been proposed as a visual foundation model with strong segmentation and generalization capabilities.
We propose a Multi-scale and Detail-enhanced SAM (MDSAM) for Salient Object Detection (SOD).
Experimental results demonstrate the superior performance of our model on multiple SOD datasets.
arXiv Detail & Related papers (2024-08-08T09:09:37Z) - ASAM: Boosting Segment Anything Model with Adversarial Tuning [9.566046692165884]
This paper introduces ASAM, a novel methodology that amplifies a foundation model's performance through adversarial tuning.
We harness the potential of natural adversarial examples, inspired by their successful implementation in natural language processing.
Our approach maintains the photorealism of adversarial examples and ensures alignment with original mask annotations.
arXiv Detail & Related papers (2024-05-01T00:13:05Z) - MAS-SAM: Segment Any Marine Animal with Aggregated Features [55.91291540810978]
We propose a novel feature learning framework named MAS-SAM for marine animal segmentation.
Our method extracts richer marine information, from global contextual cues to fine-grained local details.
arXiv Detail & Related papers (2024-04-24T07:38:14Z) - Deep Instruction Tuning for Segment Anything Model [68.7934961590075]
Segment Anything Model (SAM) has become a research hotspot in the fields of multimedia and computer vision.
SAM can support different types of segmentation prompts, but it performs much worse on text-instructed tasks.
We propose two simple yet effective deep instruction tuning (DIT) methods for SAM, one is end-to-end and the other is layer-wise.
arXiv Detail & Related papers (2024-03-31T11:37:43Z) - Boosting Segment Anything Model Towards Open-Vocabulary Learning [69.42565443181017]
Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model.
Despite SAM finding applications and adaptations in various domains, its primary limitation lies in the inability to grasp object semantics.
We present Sambor to seamlessly integrate SAM with the open-vocabulary object detector in an end-to-end framework.
arXiv Detail & Related papers (2023-12-06T17:19:00Z) - EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment
Anything [36.553867358541154]
Segment Anything Model (SAM) has emerged as a powerful tool for numerous vision applications.
We propose EfficientSAMs, light-weight SAM models that exhibit decent performance with largely reduced complexity.
Our idea is based on leveraging masked image pretraining, SAMI, which learns to reconstruct features from the SAM image encoder for effective visual representation learning.
arXiv Detail & Related papers (2023-12-01T18:31:00Z) - Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM) [8.529233820032678]
The Segment Anything Model (SAM) is the first foundation model for image segmentation.
In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups.
Our investigation centers on SAM's zero-shot learning abilities and the effectiveness of prompts like bounding boxes or point clicks.
arXiv Detail & Related papers (2023-11-14T11:05:08Z) - Adapting Segment Anything Model for Change Detection in HR Remote
Sensing Images [18.371087310792287]
This work aims to utilize the strong visual recognition capabilities of Vision Foundation Models (VFMs) to improve change detection in high-resolution Remote Sensing Images (RSIs).
We employ the visual encoder of FastSAM, an efficient variant of the SAM, to extract visual representations in RS scenes.
To utilize the semantic representations that are inherent to SAM features, we introduce a task-agnostic semantic learning branch to model the semantic latent in bi-temporal RSIs.
The resulting method, SAMCD, obtains superior accuracy compared to the SOTA methods and exhibits a sample-efficient learning ability that is comparable to semi-
arXiv Detail & Related papers (2023-09-04T08:23:31Z) - RefSAM: Efficiently Adapting Segmenting Anything Model for Referring Video Object Segmentation [53.4319652364256]
This paper presents the RefSAM model, which explores the potential of SAM for referring video object segmentation.
Our proposed approach adapts the original SAM model to enhance cross-modality learning by employing a lightweight Cross-Modal MLP.
We employ a parameter-efficient tuning strategy to align and fuse the language and vision features effectively.
arXiv Detail & Related papers (2023-07-03T13:21:58Z) - A Comprehensive Survey on Segment Anything Model for Vision and Beyond [7.920790211915402]
There is an urgent need to design a general class of models, termed foundation models, trained on broad data.
The recently proposed segment anything model (SAM) has made significant progress in breaking the boundaries of segmentation.
This paper introduces the background and terminology for foundation models including SAM, as well as state-of-the-art methods contemporaneous with SAM.
arXiv Detail & Related papers (2023-05-14T16:23:22Z) - Personalize Segment Anything Model with One Shot [52.54453744941516]
We propose a training-free Personalization approach for Segment Anything Model (SAM).
Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior.
PerSAM then segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement (a minimal sketch of the one-shot setup follows this list).
arXiv Detail & Related papers (2023-05-04T17:59:36Z)