Mixed-domain Training Improves Multi-Mission Terrain Segmentation
- URL: http://arxiv.org/abs/2209.13674v1
- Date: Tue, 27 Sep 2022 20:25:24 GMT
- Title: Mixed-domain Training Improves Multi-Mission Terrain Segmentation
- Authors: Grace Vincent, Alice Yepremyan, Jingdao Chen, and Edwin Goh
- Abstract summary: Current Martian terrain segmentation models require retraining for deployment across different domains.
This research proposes a semi-supervised learning approach that leverages unsupervised contrastive pretraining of a backbone for multi-mission semantic segmentation of Martian surfaces.
- Score: 0.9566312408744931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Planetary rover missions must utilize machine learning-based perception to
continue extra-terrestrial exploration with little to no human presence.
Martian terrain segmentation is critical for rover navigation and hazard
avoidance, enabling further exploratory tasks such as soil sample collection and
the search for organic compounds. Current Martian terrain segmentation models
require a large amount of labeled data to achieve acceptable performance, and
also require retraining for deployment across different domains (i.e., different
rover missions) or different tasks (i.e., geological identification and
navigation). This research proposes a semi-supervised learning approach that
leverages unsupervised contrastive pretraining of a backbone for multi-mission
semantic segmentation of Martian surfaces. The model expands current Martian
segmentation capabilities by being deployable across different Martian rover
missions for terrain navigation, using a mixed-domain training set that ensures
feature diversity.
Evaluation using average pixel accuracy shows that the semi-supervised
mixed-domain approach improves accuracy over both single-domain training and
supervised training, reaching 97% for the Mars Science Laboratory's Curiosity
rover and 79.6% for the Mars 2020 Perseverance rover. Further, applying
different weighting methods to the loss function improved the model's recall on
minority (rare) classes by over 30% compared to standard cross-entropy loss.
These results can inform future multi-mission and multi-task semantic
segmentation for rover missions in a data-efficient manner.
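To make the two ingredients described above concrete, the mixed-domain training set and the class-weighted loss for rare terrain classes, the sketch below assumes a PyTorch-style setup. It is not the authors' implementation; the datasets, per-class pixel counts, and toy one-layer model are hypothetical placeholders.

```python
"""Minimal sketch (not the paper's code): mixed-domain training with an
inverse-frequency class-weighted cross-entropy loss. All data, counts, and
the toy model below are hypothetical placeholders."""
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for labeled MSL (Curiosity) and Mars 2020 (Perseverance) image tiles.
msl_set = TensorDataset(torch.randn(32, 3, 64, 64), torch.randint(0, 4, (32, 64, 64)))
m2020_set = TensorDataset(torch.randn(32, 3, 64, 64), torch.randint(0, 4, (32, 64, 64)))

# Mixed-domain training set: every epoch draws samples from both missions,
# which is what the paper credits for cross-mission feature diversity.
loader = DataLoader(ConcatDataset([msl_set, m2020_set]), batch_size=8, shuffle=True)

# Hypothetical per-class pixel counts (e.g. soil, bedrock, sand, big rock).
# Inverse-frequency weights up-weight rare classes; the paper reports that
# loss weighting raises minority-class recall versus unweighted cross-entropy.
counts = torch.tensor([5e6, 3e6, 1.5e6, 5e4])
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

# Toy per-pixel classifier, only so the loop runs end to end.
model = nn.Conv2d(3, 4, kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for images, labels in loader:
    logits = model(images)            # (batch, num_classes, H, W)
    loss = criterion(logits, labels)  # per-pixel weighted cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the one-layer model would be replaced by a contrastively pretrained backbone with a segmentation head (a contrastive-loss sketch follows the related-papers list below), but the weighting and domain-mixing logic stays the same.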
Related papers
- Federated Multi-Agent Mapping for Planetary Exploration [0.4143603294943439]
We propose an approach to jointly train a centralized map model across agents without the need to share raw data.
Our approach leverages implicit neural mapping to generate parsimonious and adaptable representations.
We demonstrate the efficacy of our proposed federated mapping approach using Martian terrains and glacier datasets.
arXiv Detail & Related papers (2024-04-02T20:32:32Z)
- SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z)
- Improving Contrastive Learning on Visually Homogeneous Mars Rover Images [3.206547922373737]
We show how contrastive learning can be applied to hundreds of thousands of unlabeled Mars terrain images.
Contrastive learning assumes that any given pair of distinct images contains distinct semantic content.
We propose two approaches to resolve this: 1) an unsupervised deep clustering step on the Mars datasets, which identifies clusters of images containing similar semantic content and corrects false-negative errors during training, and 2) a simple approach that mixes data from different domains to increase the visual diversity of the total training dataset.
arXiv Detail & Related papers (2022-10-17T16:26:56Z)
- A Neuromorphic Vision-Based Measurement for Robust Relative Localization in Future Space Exploration Missions [0.0]
This work proposes a robust relative localization system based on a fusion of neuromorphic vision-based measurements (NVBMs) and inertial measurements.
The proposed system was tested in a variety of experiments and has outperformed state-of-the-art approaches in accuracy and range.
arXiv Detail & Related papers (2022-06-23T08:39:05Z)
- Semi-Supervised Learning for Mars Imagery Classification and Segmentation [35.103989798891476]
We introduce a semi-supervised framework for machine vision on Mars.
We try to resolve two specific tasks: classification and segmentation.
Our learning strategies can improve the classification and segmentation models by a large margin and outperform state-of-the-art approaches.
arXiv Detail & Related papers (2022-06-05T13:55:10Z)
- Embedding Earth: Self-supervised contrastive pre-training for dense land cover classification [61.44538721707377]
We present Embedding Earth, a self-supervised contrastive pre-training method for leveraging the large availability of satellite imagery.
We observe significant improvements of up to 25% absolute mIoU when pre-trained with our proposed method.
We find that the learnt features can generalize between disparate regions, opening up the possibility of using the proposed pre-training scheme.
arXiv Detail & Related papers (2022-03-11T16:14:14Z)
- Mars Terrain Segmentation with Less Labels [1.1745324895296465]
This research proposes a semi-supervised learning framework for Mars terrain segmentation.
It incorporates a backbone module, which is trained using a contrastive loss function, and an output atrous convolution module.
The proposed model is able to achieve a segmentation accuracy of 91.1% using only 161 training images.
arXiv Detail & Related papers (2022-02-01T22:25:15Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments [54.21959527308051]
We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images.
Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation.
We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation.
arXiv Detail & Related papers (2021-03-07T02:16:24Z)
- Moving Object Classification with a Sub-6 GHz Massive MIMO Array using Real Data [64.48836187884325]
Classification between different activities in an indoor environment using wireless signals is an emerging technology for various applications.
In this paper, we analyze classification of moving objects by employing machine learning on real data from a massive multi-input-multi-output (MIMO) system in an indoor environment.
arXiv Detail & Related papers (2021-02-09T15:48:35Z)
- SMART: Simultaneous Multi-Agent Recurrent Trajectory Prediction [72.37440317774556]
We propose advances that address two key challenges in future trajectory prediction: multimodality in both training data and predictions, and constant-time inference regardless of the number of agents.
arXiv Detail & Related papers (2020-07-26T08:17:10Z)
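For the unsupervised contrastive pretraining mentioned in the abstract above and in the contrastive-learning papers in this list, the following is a minimal sketch of a SimCLR-style NT-Xent loss that could be used to pretrain a backbone on unlabeled Mars images. It is an illustrative assumption rather than code from any of the papers; the batch size, embedding dimension, and temperature are placeholders.

```python
"""Minimal sketch (not taken from the papers above): a SimCLR-style NT-Xent
contrastive loss for pretraining a backbone on unlabeled rover images.
The random tensors below stand in for backbone + projection-head outputs."""
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    # Row i's positive is the other augmented view of the same image (offset by N).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Example usage with placeholder projections (batch of 8, 128-dim embeddings):
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

A backbone pretrained this way on unlabeled images from several missions could then be fine-tuned with the weighted segmentation loss sketched after the abstract.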
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.