Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping
- URL: http://arxiv.org/abs/2305.07586v2
- Date: Mon, 15 May 2023 12:46:28 GMT
- Authors: Sahib Julka and Michael Granitzer
- Abstract summary: We show the effectiveness of a prompt-based foundation model for rapid annotation and quick adaptability to a prime use case of mapping planetary skylights.
Key results indicate that the use of knowledge distillation can significantly reduce the effort required by domain experts for manual annotation.
This approach has the potential to accelerate extra-terrestrial discovery by automatically detecting and segmenting Martian landforms.
- Score: 0.7266531288894184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Planetary science research involves analysing vast amounts of remote sensing
data, which are often costly and time-consuming to annotate and process. One of
the essential tasks in this field is geological mapping, which requires
identifying and outlining regions of interest in planetary images, including
geological features and landforms. However, manually labelling these images is
a complex and challenging task that requires significant domain expertise and
effort. To expedite this endeavour, we propose the use of knowledge
distillation using the recently introduced cutting-edge Segment Anything (SAM)
model. We demonstrate the effectiveness of this prompt-based foundation model
for rapid annotation and quick adaptability to a prime use case of mapping
planetary skylights. Our work reveals that, by using a small set of annotations
obtained from the model with the right prompts to train a specialised domain
decoder, we can achieve satisfactory semantic segmentation on this task. Key
results indicate that the use of knowledge distillation can
significantly reduce the effort required by domain experts for manual
annotation and improve the efficiency of image segmentation tasks. This
approach has the potential to accelerate extra-terrestrial discovery by
automatically detecting and segmenting Martian landforms.
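As a rough illustration of the distillation pipeline the abstract describes, the sketch below trains a tiny per-pixel "student" on pseudo-masks produced by a stand-in teacher. A brightness threshold replaces the actual SAM model, and all names, thresholds, and toy data are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the teacher: in the paper this would be the SAM model
# queried with point prompts; here we fake its pseudo-masks by thresholding
# a brightness channel (skylights are dark pits against bright terrain).
def teacher_pseudo_mask(image):
    return (image < 0.3).astype(np.float64)  # 1 = skylight pixel

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny "student" decoder: per-pixel logistic regression on [1, x, x^2]
# features, trained with gradient descent on the teacher's pseudo-labels.
# A real student would be a convolutional decoder.
def train_student(images, masks, lr=2.0, steps=3000):
    X = np.stack([np.ones_like(images), images, images ** 2], axis=-1).reshape(-1, 3)
    y = masks.reshape(-1)
    w = np.zeros(3)
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # gradient of mean binary cross-entropy
    return w

# Synthetic "tiles": mostly bright terrain with some dark pit pixels.
images = rng.uniform(0.0, 1.0, size=(8, 16, 16))
masks = teacher_pseudo_mask(images)  # distillation targets from the teacher
w = train_student(images, masks)

# The student now approximates the teacher's segmentation on unseen data.
test_img = rng.uniform(0.0, 1.0, size=(16, 16))
feats = np.stack([np.ones_like(test_img), test_img, test_img ** 2], axis=-1)
pred = sigmoid(feats @ w) > 0.5
agreement = (pred == teacher_pseudo_mask(test_img).astype(bool)).mean()
print(f"student/teacher agreement: {agreement:.2f}")
```

The point of the sketch is the division of labour: the expensive promptable teacher is queried once to produce labels, after which only the cheap student runs on new imagery.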
Related papers
- Federated Multi-Agent Mapping for Planetary Exploration [0.4143603294943439]
We propose an approach to jointly train a centralized map model across agents without the need to share raw data.
Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations.
We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets.
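The federated training idea in this summary (a shared model, no raw-data exchange) can be sketched with plain federated averaging; the agents, data, and learning rates below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: three agents each hold private samples of the same
# terrain relation y = 2x + 1 (plus noise) and never share raw data, only
# locally updated weights, which a server averages each round.
true_w, true_b = 2.0, 1.0
agent_data = []
for _ in range(3):
    x = rng.uniform(-1, 1, 50)
    y = true_w * x + true_b + rng.normal(0, 0.05, 50)
    agent_data.append((x, y))

def local_step(w, b, x, y, lr=0.1, epochs=20):
    # One round of private training: a few epochs of gradient descent
    # on this agent's own data only.
    for _ in range(epochs):
        err = (w * x + b) - y
        w -= lr * (err * x).mean()
        b -= lr * err.mean()
    return w, b

w, b = 0.0, 0.0
for _ in range(30):  # communication rounds: no raw (x, y) leaves an agent
    updates = [local_step(w, b, x, y) for x, y in agent_data]
    w = float(np.mean([u[0] for u in updates]))  # server-side averaging
    b = float(np.mean([u[1] for u in updates]))

print(f"global model after federation: w={w:.2f}, b={b:.2f}")
```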
arXiv Detail & Related papers (2024-04-02T20:32:32Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Mapping High-level Semantic Regions in Indoor Environments without Object Recognition [50.624970503498226]
The present work proposes a method for semantic region mapping via embodied navigation in indoor environments.
To enable region identification, the method uses a vision-to-language model to provide scene information for mapping.
By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location.
arXiv Detail & Related papers (2024-03-11T18:09:50Z)
- Biological Valuation Map of Flanders: A Sentinel-2 Imagery Analysis [12.025312586542318]
We present a densely labeled ground truth map of Flanders paired with Sentinel-2 satellite imagery.
Our methodology includes a formalized dataset division and sampling method, utilizing the topographic map layout 'Kaartbladversnijdingen,' and a detailed semantic segmentation model training pipeline.
arXiv Detail & Related papers (2024-01-26T22:21:39Z)
- Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning [77.34726150561087]
We propose an approach for creating a multi-modal and large-temporal dataset comprised of publicly available Remote Sensing data.
We use Convolutional Neural Networks (CNN) models that are capable of separating different classes of vegetation.
arXiv Detail & Related papers (2022-09-28T18:51:59Z)
- Self-Supervised Learning to Guide Scientifically Relevant Categorization of Martian Terrain Images [1.282755489335386]
We present a self-supervised method that can cluster sedimentary textures in images captured from the Mast camera onboard the Curiosity rover.
We then present a qualitative analysis of these clusters and describe their geologic significance via the creation of a set of granular terrain categories.
arXiv Detail & Related papers (2022-04-21T02:48:40Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method that leverages the wide availability of satellite imagery.
We observe significant improvements up to 25% absolute mIoU when pre-trained with our proposed method.
We find that learnt features can generalize between disparate regions opening up the possibility of using the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
- Improving performance of aircraft detection in satellite imagery while limiting the labelling effort: Hybrid active learning [0.9379652654427957]
In the defense domain, aircraft detection on satellite imagery is a valuable tool for analysts.
We propose a hybrid clustering active learning method to select the most relevant data to label.
We show that this method can provide better or competitive results compared to other active learning methods.
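The hybrid selection idea above (combine model uncertainty with clustering so the labelling budget is spread across diverse examples) can be sketched as follows; the features, uncertainty scores, and cluster count are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unlabeled pool: 2-D feature vectors with a stand-in per-sample
# uncertainty score (a real system would use model predictions).
features = rng.normal(size=(200, 2))
uncertainty = rng.uniform(size=200)

top = np.argsort(uncertainty)[-50:]  # 50 most uncertain samples
pool = features[top]

# Naive k-means over the uncertain subset so the picks are diverse.
k = 5
centers = pool[:k].copy()
for _ in range(10):
    dists = ((pool[:, None] - centers[None]) ** 2).sum(-1)  # (50, k)
    labels = np.argmin(dists, axis=1)
    for c in range(k):
        if (labels == c).any():
            centers[c] = pool[labels == c].mean(axis=0)

# Send one representative per cluster (closest to its center) for labelling.
chosen = [int(top[np.argmin(((pool - centers[c]) ** 2).sum(-1))])
          for c in range(k)]
print("indices to label:", sorted(set(chosen)))
```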
arXiv Detail & Related papers (2022-02-10T08:24:07Z)
- Streaming Self-Training via Domain-Agnostic Unlabeled Images [62.57647373581592]
We present streaming self-training (SST) that aims to democratize the process of learning visual recognition models.
Key to SST are two crucial observations: (1) domain-agnostic unlabeled images enable us to learn better models with a few labeled examples without any additional knowledge or supervision; and (2) learning is a continuous process and can be done by constructing a schedule of learning updates.
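A minimal caricature of such a schedule of learning updates: a model initialised on a few labeled points repeatedly pseudo-labels batches from an unlabeled stream and updates itself (the 1-D data and nearest-centroid model are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

# A few labeled examples initialise a two-class nearest-centroid model.
labeled_x = np.array([-2.0, -1.8, 1.9, 2.1])
labeled_y = np.array([0, 0, 1, 1])
centroids = np.array([labeled_x[labeled_y == c].mean() for c in (0, 1)])

# Schedule of learning updates over an unlabeled stream: each batch is
# pseudo-labeled by the current model, then the model is refit on it.
for _ in range(10):
    batch = np.concatenate([rng.normal(-2, 0.3, 20), rng.normal(2, 0.3, 20)])
    pseudo = (np.abs(batch - centroids[1])
              < np.abs(batch - centroids[0])).astype(int)
    for c in (0, 1):
        pts = batch[pseudo == c]
        if len(pts):
            # Blend old and new estimates so updates stay incremental.
            centroids[c] = 0.5 * centroids[c] + 0.5 * pts.mean()

print(f"centroids after streaming updates: {centroids.round(2)}")
```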
arXiv Detail & Related papers (2021-04-07T17:58:39Z)
- Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data.
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
arXiv Detail & Related papers (2020-11-19T17:29:13Z)
- Attentive Weakly Supervised land cover mapping for object-based satellite image time series data with spatial interpretation [4.549831511476249]
We propose a new deep learning framework, named TASSEL, that is able to intelligently exploit the weak supervision provided by the coarse granularity labels.
Our framework also produces additional side-information that supports model interpretability, with the aim of making the black box gray.
arXiv Detail & Related papers (2020-04-30T10:23:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.