SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment
Anything Model
- URL: http://arxiv.org/abs/2305.02034v4
- Date: Fri, 13 Oct 2023 01:49:42 GMT
- Title: SAMRS: Scaling-up Remote Sensing Segmentation Dataset with Segment
Anything Model
- Authors: Di Wang, Jing Zhang, Bo Du, Minqiang Xu, Lin Liu, Dacheng Tao and
Liangpei Zhang
- Abstract summary: We develop an efficient pipeline for generating a large-scale RS segmentation dataset, dubbed SAMRS.
In total, SAMRS contains 105,090 images and 1,668,241 instances, surpassing existing high-resolution RS segmentation datasets in size by several orders of magnitude.
- Score: 85.85899655118087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of the Segment Anything Model (SAM) demonstrates the significance
of data-centric machine learning. However, due to the difficulties and high
costs associated with annotating Remote Sensing (RS) images, a large amount of
valuable RS data remains unlabeled, particularly at the pixel level. In this
study, we leverage SAM and existing RS object detection datasets to develop an
efficient pipeline for generating a large-scale RS segmentation dataset, dubbed
SAMRS. In total, SAMRS contains 105,090 images and 1,668,241 instances,
surpassing existing high-resolution RS segmentation datasets in size by several
orders of magnitude. It provides object category, location, and instance
information that can be used for semantic segmentation, instance segmentation,
and object detection, either individually or in combination. We also provide a
comprehensive analysis of SAMRS from various aspects. Moreover, preliminary
experiments highlight the importance of conducting segmentation pre-training
with SAMRS to address task discrepancies and alleviate the limitations posed by
limited training data during fine-tuning. The code and dataset will be
available at https://github.com/ViTAE-Transformer/SAMRS.
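The abstract does not spell out the label-generation step, but the core idea of converting a promptable segmenter's box-prompted instance masks into pixel-level labels can be sketched as follows (a minimal illustration only; `masks_to_semantic_label` is a hypothetical helper, not part of the released SAMRS code):

```python
import numpy as np

def masks_to_semantic_label(masks, category_ids, height, width, ignore_index=255):
    """Merge per-instance binary masks (e.g. produced by SAM prompted with
    the bounding boxes of an RS detection dataset) into one semantic
    segmentation label map. Later instances overwrite earlier ones where
    masks overlap; unassigned pixels keep the ignore index."""
    label = np.full((height, width), ignore_index, dtype=np.uint8)
    for mask, cat in zip(masks, category_ids):
        label[mask.astype(bool)] = cat
    return label

# Two toy instance masks on a 2x2 image, with detection categories 3 and 7.
masks = [np.array([[1, 1], [0, 0]]), np.array([[0, 0], [1, 0]])]
label = masks_to_semantic_label(masks, [3, 7], 2, 2)
```

The same per-instance masks and category IDs can equally serve instance-segmentation or detection training, which is how a single generated dataset supports all three tasks.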
Related papers
- Prompting DirectSAM for Semantic Contour Extraction in Remote Sensing Images [11.845626002236772]
We introduce a foundation model derived from DirectSAM, termed DirectSAM-RS, which inherits the strong segmentation capability acquired from natural images.
The accompanying dataset comprises over 34k image-text-contour triplets, making it at least 30 times larger than any individual existing dataset.
We evaluate DirectSAM-RS in both zero-shot and fine-tuning settings, and demonstrate that it achieves state-of-the-art performance across several downstream benchmarks.
arXiv Detail & Related papers (2024-10-08T16:55:42Z)
- Adapting Segment Anything Model for Unseen Object Instance Segmentation [70.60171342436092]
Unseen Object Instance Segmentation (UOIS) is crucial for autonomous robots operating in unstructured environments.
We propose UOIS-SAM, a data-efficient solution for the UOIS task.
UOIS-SAM integrates two key components: (i) a Heatmap-based Prompt Generator (HPG) to generate class-agnostic point prompts with precise foreground prediction, and (ii) a Hierarchical Discrimination Network (HDNet) that adapts SAM's mask decoder.
arXiv Detail & Related papers (2024-09-23T19:05:50Z)
- Multi-Scale and Detail-Enhanced Segment Anything Model for Salient Object Detection [58.241593208031816]
The Segment Anything Model (SAM) has been proposed as a visual foundation model with strong segmentation and generalization capabilities.
We propose a Multi-Scale and Detail-Enhanced SAM (MDSAM) for Salient Object Detection (SOD).
Experimental results demonstrate the superior performance of our model on multiple SOD datasets.
arXiv Detail & Related papers (2024-08-08T09:09:37Z)
- ALPS: An Auto-Labeling and Pre-training Scheme for Remote Sensing Segmentation With Segment Anything Model [32.91528641298171]
We introduce an innovative auto-labeling framework named ALPS (Auto-Labeling and Pre-training Scheme for Remote Sensing).
We leverage the Segment Anything Model (SAM) to predict precise pseudo-labels for RS images without necessitating prior annotations or additional prompts.
Our approach enhances the performance of downstream tasks across various benchmarks, including iSAID and ISPRS Potsdam.
arXiv Detail & Related papers (2024-06-16T09:02:01Z)
- Moving Object Segmentation: All You Need Is SAM (and Flow) [82.78026782967959]
We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM with the ability of flow to discover and group moving objects.
In the first model, we adapt SAM to take optical flow, rather than RGB, as an input. In the second, SAM takes RGB as an input, and flow is used as a segmentation prompt.
These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin in both single and multi-object benchmarks.
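The flow-as-prompt variant can be illustrated with a small sketch: pick the highest-motion pixels from a dense flow field and hand them to a promptable segmenter as positive point prompts (an illustrative stand-in, assuming SAM's (x, y) point-prompt convention; `flow_to_point_prompts` is a hypothetical helper, not the paper's implementation):

```python
import numpy as np

def flow_to_point_prompts(flow, num_points=5):
    """Turn a dense optical-flow field of shape (H, W, 2) into point
    prompts: the pixels with the largest motion magnitude become
    positive (foreground) point prompts for a promptable segmenter."""
    mag = np.linalg.norm(flow, axis=-1)
    top = np.argsort(mag.ravel())[::-1][:num_points]
    ys, xs = np.unravel_index(top, mag.shape)
    points = np.stack([xs, ys], axis=1)       # (x, y) order, as SAM expects
    labels = np.ones(len(points), dtype=int)  # 1 marks a foreground prompt
    return points, labels

# A 4x4 flow field in which only the pixel at (row=1, col=2) moves.
flow = np.zeros((4, 4, 2))
flow[1, 2] = [3.0, 0.0]
points, labels = flow_to_point_prompts(flow, num_points=1)
```

The appeal of this formulation is that SAM itself is left untouched; all motion reasoning is pushed into the prompt-selection step.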
arXiv Detail & Related papers (2024-04-18T17:59:53Z)
- Adapting SAM for Volumetric X-Ray Data-sets of Arbitrary Sizes [0.0]
We propose a new approach for volumetric instance segmentation in X-ray Computed Tomography (CT) data for Non-Destructive Testing (NDT).
We combine the Segment Anything Model (SAM) with tile-based Flood Filling Networks (FFN).
Our work evaluates the performance of SAM on volumetric NDT data-sets and demonstrates its effectiveness to segment instances in challenging imaging scenarios.
arXiv Detail & Related papers (2024-02-09T17:12:04Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- Semantic Attention and Scale Complementary Network for Instance Segmentation in Remote Sensing Images [54.08240004593062]
We propose an end-to-end multi-category instance segmentation model, which consists of a Semantic Attention (SEA) module and a Scale Complementary Mask Branch (SCMB).
The SEA module contains a simple fully convolutional semantic segmentation branch with extra supervision to strengthen the activation of instances of interest on the feature map.
SCMB extends the original single mask branch to trident mask branches and introduces complementary mask supervision at different scales.
arXiv Detail & Related papers (2021-07-25T08:53:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.