Mapping "Brain Terrain" Regions on Mars using Deep Learning
- URL: http://arxiv.org/abs/2311.12292v2
- Date: Fri, 9 Aug 2024 14:50:59 GMT
- Title: Mapping "Brain Terrain" Regions on Mars using Deep Learning
- Authors: Kyle A. Pearson, Eldar Noe, Daniel Zhao, Alphan Altinok, Alex Morgan,
- Abstract summary: A set of critical areas may have seen cycles of ice thawing in the relatively recent past in response to periodic changes in the obliquity of Mars.
In this work, we use convolutional neural networks to detect surface regions containing "Brain Coral" terrain.
We use large images (100-1000 megapixels) from the Mars Reconnaissance Orbiter to search for these landforms at resolutions close to a few tens of centimeters per pixel.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the main objectives of the Mars Exploration Program is to search for evidence of past or current life on the planet. To achieve this, Mars exploration has been focusing on regions that may have liquid or frozen water. A set of critical areas may have seen cycles of ice thawing in the relatively recent past in response to periodic changes in the obliquity of Mars. In this work, we use convolutional neural networks to detect surface regions containing "Brain Coral" terrain, a landform on Mars whose similarity in morphology and scale to sorted stone circles on Earth suggests that it may have formed as a consequence of freeze/thaw cycles. We use large images (~100-1000 megapixels) from the Mars Reconnaissance Orbiter to search for these landforms at resolutions close to a few tens of centimeters per pixel (~25--50 cm). Over 52,000 images (~28 TB) were searched (~5% of the Martian surface) where we found detections in over 200 images. To expedite the processing we leverage a classifier network (prior to segmentation) in the Fourier domain that can take advantage of JPEG compression by leveraging blocks of coefficients from a discrete cosine transform in lieu of decoding the entire image at the full spatial resolution. The hybrid pipeline approach maintains ~93% accuracy while cutting down on ~95% of the total processing time compared to running the segmentation network at the full resolution on every image. The timely processing of big data sets helps inform mission operations, geologic surveys to prioritize candidate landing sites, avoid hazardous areas, or map the spatial extent of certain terrain. The segmentation masks and source code are available on Github for the community to explore and build upon.
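The abstract's key engineering idea, classifying in the Fourier domain on blocks of DCT coefficients (the same 8x8 blocks JPEG stores) rather than decoding every image at full spatial resolution, can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the `blockwise_dct` helper, the random stand-in tile, and the choice to keep a 4x4 low-frequency corner per block as classifier features are all my assumptions.

```python
# Illustrative sketch (not the paper's code): build classifier features from
# JPEG-style 8x8 DCT coefficient blocks instead of full-resolution pixels.
import numpy as np
from scipy.fft import dctn

def blockwise_dct(img, block=8):
    """Split a grayscale image into 8x8 blocks and return the 2-D DCT of
    each block, mimicking the coefficient layout a JPEG decoder exposes."""
    h, w = img.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of 8
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3)          # (H/8, W/8, 8, 8)
    return dctn(blocks, axes=(2, 3), norm="ortho")

rng = np.random.default_rng(0)
tile = rng.random((256, 256))                      # stand-in for an orbiter tile
coeffs = blockwise_dct(tile)
# Keep only the low-frequency corner of each block: an 8x reduction per
# spatial axis while retaining the coarse texture a classifier needs to
# decide whether a region is worth segmenting at full resolution.
features = coeffs[:, :, :4, :4].reshape(32, 32, 16)
print(features.shape)                              # (32, 32, 16)
```

The point of the hybrid pipeline is that this cheap frequency-domain screen runs on (near) compressed data, so the expensive segmentation network only ever sees the small fraction of tiles the classifier flags.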
Related papers
- SaccadeDet: A Novel Dual-Stage Architecture for Rapid and Accurate Detection in Gigapixel Images
'SaccadeDet' is an innovative architecture for gigapixel-level object detection, inspired by the human eye saccadic movement.
Our approach, evaluated on the PANDA dataset, achieves an 8x speed increase over state-of-the-art methods.
It also demonstrates significant potential in gigapixel-level pathology analysis through its application to Whole Slide Imaging.
arXiv Detail & Related papers (2024-07-25T11:22:54Z)
- EarthMatch: Iterative Coregistration for Fine-grained Localization of Astronaut Photography
We present EarthMatch, an iterative homography estimation method that produces fine-grained localization of astronaut photographs.
We prove our method's efficacy on this dataset and offer a new, fair method for image matcher comparison.
Our method will enable fast and accurate localization of the 4.5 million and growing collection of astronaut photography of Earth.
arXiv Detail & Related papers (2024-05-08T20:46:36Z)
- ConeQuest: A Benchmark for Cone Segmentation on Mars
ConeQuest is the first expert-annotated public dataset to identify cones on Mars.
We propose two benchmark tasks using ConeQuest: (i) Spatial Generalization and (ii) Cone-size Generalization.
arXiv Detail & Related papers (2023-11-15T02:33:08Z)
- Accurate Gigapixel Crowd Counting by Iterative Zooming and Refinement
GigaZoom iteratively zooms into the densest areas of the image and refines coarser density maps with finer details.
We show that GigaZoom obtains the state-of-the-art for gigapixel crowd counting and improves the accuracy of the next best method by 42%.
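The zoom-target selection step described above, locating the densest region of a coarse density map before refining it at higher resolution, might look like this minimal sketch. The integral-image window search and the synthetic density map are my assumptions for illustration, not the GigaZoom implementation.

```python
# Hedged sketch (not the GigaZoom code): find the densest window of a
# coarse crowd-density map, i.e. the region to zoom into and refine.
import numpy as np

def densest_window(density, win):
    """Return the top-left corner of the win x win window with the largest
    integrated density, using an integral image for O(1) window sums."""
    c = density.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))                # integral image with zero border
    sums = c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]
    return np.unravel_index(sums.argmax(), sums.shape)

rng = np.random.default_rng(1)
coarse = rng.random((64, 64)) * 0.1                # low-density background
coarse[40:48, 8:16] += 5.0                         # synthetic dense crowd region
y, x = densest_window(coarse, 8)
print(y, x)                                        # -> 40 8: the planted block
```

An iterative scheme would crop the full-resolution image at this window, run the density network again on the crop, and paste the finer estimate back into the coarse map.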
arXiv Detail & Related papers (2023-05-16T08:25:27Z)
- MaRF: Representing Mars as Neural Radiance Fields
MaRF is a framework able to synthesize the Martian environment using several collections of images from rover cameras.
It addresses key challenges in planetary surface exploration such as: planetary geology, simulated navigation and shape analysis.
In the experimental section, we demonstrate the environments created from actual Mars datasets captured by Curiosity rover, Perseverance rover and Ingenuity helicopter.
arXiv Detail & Related papers (2022-12-03T18:58:00Z)
- Mars Rover Localization Based on A2G Obstacle Distribution Pattern Matching
In NASA's Mars 2020 mission, the Ingenuity helicopter is carried together with the rover.
Traditional image matching methods will struggle to obtain valid image correspondence.
An algorithm combining image-based rock detection and rock distribution pattern matching is used to acquire A2G imagery correspondence.
arXiv Detail & Related papers (2022-10-07T08:29:48Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Accurate 3-DoF Camera Geo-Localization via Ground-to-Satellite Image Matching
We address the problem of ground-to-satellite image geo-localization by matching a query image captured at the ground level against a large-scale database with geotagged satellite images.
Our new method achieves fine-grained localization of a query image, down to the pixel-size precision of the satellite image.
arXiv Detail & Related papers (2022-03-26T20:10:38Z)
- Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions
Ingenuity, which recently landed on Mars, marks the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
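The scale-drift estimator described above, a principal component analysis of the relative-translation information matrix, can be sketched roughly as follows. The 3x3 example matrices, the risk score, and the `scale_drift_risk` function are illustrative assumptions of mine, not the paper's implementation.

```python
# Hedged sketch (my construction, not the paper's code): gauge scale-drift
# risk from the eigenvalues of a relative-translation information matrix.
import numpy as np

def scale_drift_risk(info_matrix):
    """Principal component analysis of a 3x3 information (inverse-covariance)
    matrix for the relative translation between frames. A small smallest
    eigenvalue means the translation is weakly constrained along that
    direction, which in monocular odometry signals scale-drift risk."""
    eigvals, eigvecs = np.linalg.eigh(info_matrix)   # ascending eigenvalues
    anisotropy = eigvals[0] / eigvals[-1]            # ratio in [0, 1]
    return 1.0 - anisotropy, eigvecs[:, 0]           # risk score, weak direction

# Well-constrained translation: near-isotropic information, low risk.
risk_lo, _ = scale_drift_risk(np.diag([90.0, 100.0, 95.0]))

# Poorly constrained scale: one tiny eigenvalue, high risk.
risk_hi, direction = scale_drift_risk(np.diag([100.0, 95.0, 1.0]))
print(risk_lo < risk_hi)                             # True
```

The appeal of an eigenvalue-based score is that it is cheap to evaluate online, so the odometry front end can flag low-confidence segments as they happen.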
arXiv Detail & Related papers (2021-09-12T12:52:20Z)
- Rover Relocalization for Mars Sample Return by Virtual Template Synthesis and Matching
We consider the problem of rover relocalization in the context of the notional Mars Sample Return campaign.
In this campaign, a rover (R1) needs to be capable of autonomously navigating and localizing itself within an area of approximately 50 x 50 m.
We propose a visual localizer that exhibits robustness to the relatively barren terrain that we expect to find in relevant areas.
arXiv Detail & Related papers (2021-03-05T00:18:33Z)
- Automated crater detection with human level performance
We present an automated Crater Detection Algorithm that is competitive with expert-human researchers and hundreds of times faster.
The algorithm uses multiple neural networks to process digital terrain model and thermal infra-red imagery to identify and locate craters across the surface of Mars.
We find 80% of known craters above 3 km in diameter, and identify 7,000 potentially new craters.
arXiv Detail & Related papers (2020-10-23T16:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.