Interactive Mars Image Content-Based Search with Interpretable Machine Learning
- URL: http://arxiv.org/abs/2402.16860v1
- Date: Fri, 19 Jan 2024 18:19:40 GMT
- Title: Interactive Mars Image Content-Based Search with Interpretable Machine Learning
- Authors: Bhavan Vasu, Steven Lu, Emily Dunkel, Kiri L. Wagstaff, Kevin Grimes, Michael McAuley
- Abstract summary: The NASA Planetary Data System (PDS) hosts millions of images of planets, moons, and other bodies collected throughout many missions.
We leverage a prototype-based architecture to enable users to understand and validate the evidence used by a classifier trained on images from the Mars Science Laboratory (MSL) Curiosity rover mission.
The work presented in this paper will be deployed on the PDS Image Atlas, replacing its non-interpretable counterpart.
- Score: 5.370310770047478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The NASA Planetary Data System (PDS) hosts millions of images of planets, moons, and other bodies collected throughout many missions. The ever-expanding nature of data and user engagement demands an interpretable content classification system to support scientific discovery and individual curiosity. In this paper, we leverage a prototype-based architecture to enable users to understand and validate the evidence used by a classifier trained on images from the Mars Science Laboratory (MSL) Curiosity rover mission. In addition to providing explanations, we investigate the diversity and correctness of evidence used by the content-based classifier. The work presented in this paper will be deployed on the PDS Image Atlas, replacing its non-interpretable counterpart.
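The abstract names a prototype-based architecture but gives no mechanics. The following is a minimal, hypothetical sketch of the general idea behind such interpretable classifiers (ProtoPNet-style prototype similarity scoring); all shapes, names, and the similarity formula are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def prototype_logits(features, prototypes, class_weights):
    """Score an image by its similarity to learned prototypes.

    features:      (H*W, D) local conv-feature vectors for one image.
    prototypes:    (P, D) learned prototype vectors.
    class_weights: (P, C) weights mapping prototype activations to class logits.
    Returns (logits, similarities) so the top-activating prototypes can be
    shown to the user as visual evidence for the prediction.
    """
    # Squared L2 distance from every local patch to every prototype.
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (H*W, P)
    # Similarity is large when some patch lies close to the prototype.
    dmin = d2.min(axis=0)                                # (P,)
    sim = np.log((dmin + 1.0) / (dmin + 1e-4))           # (P,)
    return sim @ class_weights, sim

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 8))    # 7x7 feature map, 8-dim features
protos = rng.normal(size=(5, 8))    # 5 prototypes
weights = rng.normal(size=(5, 3))   # 3 content classes
logits, sim = prototype_logits(feats, protos, weights)
print(logits.shape, sim.shape)      # (3,) (5,)
```

Because each logit is a weighted sum of prototype similarities, a search interface can surface the image patches nearest each active prototype as the evidence behind a label.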
Related papers
- MSSPlace: Multi-Sensor Place Recognition with Visual and Text Semantics [41.94295877935867]
We study the impact of leveraging a multi-camera setup and integrating diverse data sources for multimodal place recognition.
Our proposed method named MSSPlace utilizes images from multiple cameras, LiDAR point clouds, semantic segmentation masks, and text annotations to generate comprehensive place descriptors.
arXiv Detail & Related papers (2024-07-22T14:24:56Z)
- A Semantic Segmentation-guided Approach for Ground-to-Aerial Image Matching [30.324252605889356]
This work addresses the problem of matching a query ground-view image with the corresponding satellite image without GPS data.
This is done by comparing features from a ground-view image and a satellite image, leveraging the satellite image's semantic segmentation mask through a three-stream Siamese-like network.
The novelty lies in the fusion of satellite images in combination with their semantic segmentation masks, aimed at ensuring that the model can extract useful features and focus on the significant parts of the images.
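As a rough, hypothetical illustration of the fusion described above — combining a satellite image's features with its segmentation-mask features before comparing against the ground-view descriptor — consider this cosine-scoring sketch (the dimensions, fusion projection, and scoring rule are assumptions; the paper's three-stream network is a learned CNN, not reproduced here):

```python
import numpy as np

def l2norm(x):
    # Normalize the trailing dimension to unit length.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def match_score(ground_feat, sat_feat, mask_feat, fuse_w):
    """Fuse satellite-image and segmentation-mask features, then score the
    fused descriptor against the ground-view descriptor by cosine similarity."""
    fused = np.concatenate([sat_feat, mask_feat], axis=-1) @ fuse_w
    return float(l2norm(ground_feat) @ l2norm(fused))

rng = np.random.default_rng(1)
ground = rng.normal(size=16)
fuse_w = rng.normal(size=(32, 16))   # projects fused 32-dim features to 16
# Rank 5 candidate satellite tiles against the ground-view query.
scores = [match_score(ground, rng.normal(size=16), rng.normal(size=16), fuse_w)
          for _ in range(5)]
best = int(np.argmax(scores))        # index of the retrieved candidate
```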
arXiv Detail & Related papers (2024-04-17T12:13:18Z)
- Feature Extraction and Classification from Planetary Science Datasets enabled by Machine Learning [0.4091406230302996]
We present two examples of recent investigations, applying Machine Learning (ML) neural networks to image datasets from outer planet missions to achieve feature recognition.
We used a transfer learning approach, adding and training new layers to an industry-standard Mask R-CNN to recognize labeled blocks in a training dataset.
In a different application, we applied the Mask R-CNN to recognize clouds on Titan, again through updated training followed by testing against new data, with a precision of 95% over 369 images.
arXiv Detail & Related papers (2023-10-26T11:43:55Z)
- Visual Affordance Prediction for Guiding Robot Exploration [56.17795036091848]
We develop an approach for learning visual affordances for guiding robot exploration.
We use a Transformer-based model to learn a conditional distribution in the latent embedding space of a VQ-VAE.
We show how the trained affordance model can be used for guiding exploration by acting as a goal-sampling distribution, during visual goal-conditioned policy learning in robotic manipulation.
arXiv Detail & Related papers (2023-05-28T17:53:09Z)
- Knowledge distillation with Segment Anything (SAM) model for Planetary Geological Mapping [0.7266531288894184]
We show the effectiveness of a prompt-based foundation model for rapid annotation and quick adaptability to a prime use case of mapping planetary skylights.
Key results indicate that the use of knowledge distillation can significantly reduce the effort required by domain experts for manual annotation.
This approach has the potential to accelerate extra-terrestrial discovery by automatically detecting and segmenting Martian landforms.
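A hedged sketch of the distillation objective implied above: a lightweight student is fit to pseudo-masks produced by the SAM teacher. The binary cross-entropy form and all names here are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def distill_loss(student_logits, teacher_mask, eps=1e-7):
    """Pixel-wise binary cross-entropy between the student's predicted mask
    probabilities and the teacher's (SAM-derived) pseudo-mask."""
    p = 1.0 / (1.0 + np.exp(-student_logits))     # sigmoid
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(teacher_mask * np.log(p)
                   + (1.0 - teacher_mask) * np.log(1.0 - p)).mean())

teacher = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy 2x2 pseudo-mask
good = np.where(teacher > 0.5, 10.0, -10.0)    # student agrees strongly
bad = -good                                    # student disagrees everywhere
```

Minimizing this over many teacher-labeled tiles transfers the prompt-based model's segmentation ability to a small model that needs no prompting at inference time.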
arXiv Detail & Related papers (2023-05-12T16:30:58Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
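The dual-encoder contrastive setup described above can be sketched with an InfoNCE-style loss pairing each image embedding with its own location embedding. This is a generic formulation under assumed shapes, not CSP's exact objective:

```python
import numpy as np

def contrastive_loss(img_emb, loc_emb, temperature=0.1):
    """InfoNCE-style objective: each image's positive is its own
    geo-location embedding; all other locations in the batch are negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    loc = loc_emb / np.linalg.norm(loc_emb, axis=1, keepdims=True)
    logits = img @ loc.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())     # matched pairs on the diagonal

rng = np.random.default_rng(2)
emb = rng.normal(size=(8, 4))          # batch of 8, 4-dim embeddings
loss_matched = contrastive_loss(emb, emb)  # identical pairs: an easy batch
```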
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
- Subspace Representation Learning for Few-shot Image Classification [105.7788602565317]
We propose a subspace representation learning framework to tackle few-shot image classification tasks.
It exploits a subspace in the local CNN feature space to represent an image, and measures the similarity between two images by a weighted subspace distance (WSD).
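A rough sketch of the subspace idea: take the SVD basis of an image's local features and compare images by a projection-metric distance. The WSD weighting itself is not reproduced here, and all shapes are assumptions:

```python
import numpy as np

def subspace_basis(local_feats, k=3):
    """Orthonormal basis of the top-k left singular subspace of an
    image's local CNN feature matrix of shape (D, N)."""
    u, _, _ = np.linalg.svd(local_feats, full_matrices=False)
    return u[:, :k]

def subspace_distance(feats_a, feats_b, k=3):
    """Unweighted projection-metric distance between two images'
    feature subspaces (the paper's weighting is omitted in this sketch)."""
    a, b = subspace_basis(feats_a, k), subspace_basis(feats_b, k)
    pa, pb = a @ a.T, b @ b.T      # orthogonal projection matrices
    return float(np.linalg.norm(pa - pb))

rng = np.random.default_rng(3)
img_a = rng.normal(size=(8, 20))   # 8-dim features at 20 spatial locations
img_b = rng.normal(size=(8, 20))
```

Using projection matrices makes the distance independent of the particular basis chosen for each subspace, which is why subspace representations suit few-shot comparison.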
arXiv Detail & Related papers (2021-05-02T02:29:32Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Mars Image Content Classification: Three Years of NASA Deployment and Recent Advances [0.431223999943929]
We develop and deploy content-based classification and search capabilities for Mars images.
We describe the process of training, evaluating, calibrating, and deploying updates to two CNN classifiers for images collected by Mars missions.
We report on three years of deployment including usage statistics, lessons learned, and plans for the future.
arXiv Detail & Related papers (2021-02-09T18:26:25Z)
- OpenStreetMap: Challenges and Opportunities in Machine Learning and Remote Sensing [66.23463054467653]
We present a review of recent methods based on machine learning to improve and use OpenStreetMap data.
We believe that OSM can change the way we interpret remote sensing data and that the synergy with machine learning can scale participatory map making.
arXiv Detail & Related papers (2020-07-13T09:58:14Z)
- Unsupervised Learning of Landmarks based on Inter-Intra Subject Consistencies [72.67344725725961]
We present a novel unsupervised learning approach to image landmark discovery by incorporating the inter-subject landmark consistencies on facial images.
This is achieved via an inter-subject mapping module that transforms original subject landmarks based on an auxiliary subject-related structure.
To recover from the transformed images back to the original subject, the landmark detector is forced to learn spatial locations that contain the consistent semantic meanings both for the paired intra-subject images and between the paired inter-subject images.
arXiv Detail & Related papers (2020-04-16T20:38:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.