MaRF: Representing Mars as Neural Radiance Fields
- URL: http://arxiv.org/abs/2212.01672v1
- Date: Sat, 3 Dec 2022 18:58:00 GMT
- Title: MaRF: Representing Mars as Neural Radiance Fields
- Authors: Lorenzo Giusti, Josue Garcia, Steven Cozine, Darrick Suen, Christina
Nguyen, Ryan Alimo
- Abstract summary: MaRF is a framework that synthesizes the Martian environment from several collections of rover-camera images.
It addresses key challenges in planetary surface exploration such as planetary geology, simulated navigation, and shape analysis.
In the experimental section, we demonstrate the environments created from actual Mars datasets captured by the Curiosity rover, the Perseverance rover, and the Ingenuity helicopter.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The aim of this work is to introduce MaRF, a novel framework able to
synthesize the Martian environment using several collections of images from
rover cameras. The idea is to generate a 3D scene of Mars' surface to address
key challenges in planetary surface exploration such as planetary geology,
simulated navigation, and shape analysis. Although different methods exist for
3D reconstruction of Mars' surface, they rely on classical computer graphics
techniques that incur high computational cost during reconstruction and have
difficulty generalizing to unseen scenes or adapting to new images coming from
rover cameras. The proposed framework overcomes these limitations by
exploiting Neural Radiance Fields (NeRFs), a method that synthesizes complex
scenes by optimizing a continuous volumetric scene function from a sparse set
of images. To speed up the learning process, we replaced the sparse set of
rover images with their neural graphics primitives (NGPs), a set of
fixed-length vectors learned to preserve the information of the original
images at a significantly smaller size. In the experimental section, we
demonstrate the environments created from actual Mars datasets captured by
the Curiosity rover, the Perseverance rover, and the Ingenuity helicopter, all of which are
available on the Planetary Data System (PDS).
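To make the NeRF mechanism referenced in the abstract concrete, the sketch below shows the standard volume-rendering quadrature that NeRF-style methods use to turn per-sample densities and colors along a camera ray into a pixel color. This is a minimal illustration in plain NumPy under assumed array shapes, not the MaRF implementation; the function name and the toy density field are hypothetical.

```python
import numpy as np

def composite_ray(sigmas, rgbs, deltas):
    """Minimal NeRF-style volume rendering along one ray (illustrative only).

    sigmas: (N,)   densities predicted at N samples along the ray
    rgbs:   (N, 3) RGB colors predicted at the same samples
    deltas: (N,)   distances between consecutive samples
    Returns the composited pixel color and the per-sample weights.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j), with T_1 = 1 for the first sample
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    # Expected color C = sum_i T_i * alpha_i * c_i
    weights = trans * alphas
    color = (weights[:, None] * rgbs).sum(axis=0)
    return color, weights

# Example: 64 samples along a ray through a toy density "bump" at depth 4
n = 64
t = np.linspace(2.0, 6.0, n)                      # sample depths along the ray
sigmas = 50.0 * np.exp(-((t - 4.0) ** 2) / 0.1)   # density concentrated near a surface
rgbs = np.tile([0.8, 0.4, 0.2], (n, 1))           # constant reddish color
deltas = np.full(n, t[1] - t[0])
color, _ = composite_ray(sigmas, rgbs, deltas)
print(color)  # approaches the surface color as the bump becomes opaque
```

During training, the rendered color is compared against the observed pixel and the error is backpropagated into the scene function; the NGP acceleration mentioned in the abstract speeds up this loop by making each per-sample density/color lookup cheap.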
Related papers
- SCube: Instant Large-Scale Scene Reconstruction using VoxSplats
We present SCube, a novel method for reconstructing large-scale 3D scenes (geometry, appearance, and semantics) from a sparse set of posed images.
Our method encodes reconstructed scenes using a novel representation VoxSplat, which is a set of 3D Gaussians supported on a high-resolution sparse-voxel scaffold.
arXiv Detail & Related papers (2024-10-26T00:52:46Z) - DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z) - MM-Gaussian: 3D Gaussian-based Multi-modal Fusion for Localization and Reconstruction in Unbounded Scenes [12.973283255413866]
MM-Gaussian is a LiDAR-camera multi-modal fusion system for localization and mapping in unbounded scenes.
We utilize 3D Gaussian point clouds, with the assistance of pixel-level gradient descent, to fully exploit the color information in photos.
To further bolster the robustness of our system, we designed a relocalization module.
arXiv Detail & Related papers (2024-04-05T11:14:19Z) - SiLVR: Scalable Lidar-Visual Reconstruction with Neural Radiance Fields
for Robotic Inspection [4.6102302191645075]
We present a neural-field-based large-scale reconstruction system that fuses lidar and vision data to generate high-quality reconstructions.
We exploit the trajectory from a real-time lidar SLAM system to bootstrap a Structure-from-Motion (SfM) procedure.
We use submapping to scale the system to large-scale environments captured over long trajectories.
arXiv Detail & Related papers (2024-03-11T16:31:25Z) - ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z) - Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z) - NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in
the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing watertight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z) - Towards Robust Monocular Visual Odometry for Flying Robots on Planetary
Missions [49.79068659889639]
Ingenuity, which just landed on Mars, will mark the beginning of a new era of exploration unhindered by traversability constraints.
We present an advanced robust monocular odometry algorithm that uses efficient optical flow tracking.
We also present a novel approach to estimate the current risk of scale drift based on a principal component analysis of the relative translation information matrix.
arXiv Detail & Related papers (2021-09-12T12:52:20Z) - Vision-based Neural Scene Representations for Spacecraft [1.0323063834827415]
In advanced mission concepts, spacecraft need to internally model the pose and shape of nearby orbiting objects.
Recent works in neural scene representations show promising results for inferring generic three-dimensional scenes from optical images.
We compare and evaluate the potential of NeRF and GRAF to render novel views and extract the 3D shape of two different spacecraft.
arXiv Detail & Related papers (2021-05-11T08:35:05Z)