3D Reconstruction of Non-cooperative Resident Space Objects using
Instant NGP-accelerated NeRF and D-NeRF
- URL: http://arxiv.org/abs/2301.09060v3
- Date: Fri, 9 Jun 2023 18:26:58 GMT
- Title: 3D Reconstruction of Non-cooperative Resident Space Objects using
Instant NGP-accelerated NeRF and D-NeRF
- Authors: Basilio Caruso and Trupti Mahendrakar and Van Minh Nguyen and Ryan T.
White and Todd Steffen
- Abstract summary: This work adapts Instant NeRF and D-NeRF, variations of the neural radiance field (NeRF) algorithm, to the problem of mapping RSOs in orbit.
The algorithms are evaluated for 3D reconstruction quality and hardware requirements using datasets of images of a spacecraft mock-up.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of non-cooperative resident space objects (RSOs) in orbit
has spurred the demand for active space debris removal, on-orbit servicing
(OOS), classification, and functionality identification of these RSOs. Recent
advances in computer vision have enabled high-definition 3D modeling of objects
based on a set of 2D images captured from different viewing angles. This work
adapts Instant NeRF and D-NeRF, variations of the neural radiance field (NeRF)
algorithm, to the problem of mapping RSOs in orbit for the purposes of
functionality identification and assisting with OOS. The algorithms are
evaluated for 3D reconstruction quality and hardware requirements using
datasets of images of a spacecraft mock-up taken under two different lighting
and motion conditions at the Orbital Robotic Interaction, On-Orbit Servicing
and Navigation (ORION) Laboratory at Florida Institute of Technology. Instant
NeRF is shown to learn high-fidelity 3D models at a computational cost low enough
that training could feasibly be performed on on-board computers.
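As a rough illustration of the rendering step these NeRF variants share, the snippet below is a minimal sketch (assumed tensor shapes, not the authors' implementation) of volume rendering: densities and colors sampled along camera rays cast through the captured 2D images are composited into pixel colors, and an Instant-NGP-style hash-encoded field network would be trained so that these rendered pixels match the photographs.

```python
# Minimal, illustrative sketch of NeRF-style volume rendering (assumed shapes,
# not the paper's code). A field network f(x) -> (density, color), e.g. an
# Instant-NGP hash-encoded MLP, is queried at S samples along each of R rays
# cast from the known camera poses through images of the RSO.
import torch

def composite_rays(densities, colors, deltas):
    """densities: (R, S), colors: (R, S, 3), deltas: (R, S) spacing between samples."""
    alpha = 1.0 - torch.exp(-densities * deltas)          # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]
    weights = alpha * trans                               # contribution of each sample
    rgb = (weights.unsqueeze(-1) * colors).sum(dim=1)     # (R, 3) rendered pixel colors
    return rgb, weights

# Training reduces to a photometric loss against the captured images:
#   loss = ((rgb_pred - rgb_gt) ** 2).mean()
# D-NeRF additionally warps sample points through a learned, time-conditioned
# deformation field before querying the canonical radiance field.
```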
Related papers
- Bridging Domain Gap for Flight-Ready Spaceborne Vision [4.14360329494344]
This work presents Spacecraft Pose Network v3 (SPNv3), a Neural Network (NN) for monocular pose estimation of a known, non-cooperative target spacecraft.
SPNv3 is designed and trained to be computationally efficient while providing robustness to spaceborne images that have not been observed during offline training and validation on the ground.
Experiments demonstrate that the final SPNv3 can achieve state-of-the-art pose accuracy on hardware-in-the-loop images from a robotic testbed while having trained exclusively on computer-generated synthetic images.
arXiv Detail & Related papers (2024-09-18T02:56:50Z)
- Evaluating geometric accuracy of NeRF reconstructions compared to SLAM method [0.0]
Photogrammetry can perform image-based 3D reconstruction but is computationally expensive and requires extremely dense image representation to recover complex geometry and photorealism.
NeRFs perform 3D scene reconstruction by training a neural network on sparse image and pose data, achieving superior results to photogrammetry with less input data.
This paper presents an evaluation of two NeRF scene reconstructions for the purpose of estimating the diameter of a vertical PVC cylinder.
arXiv Detail & Related papers (2024-07-15T21:04:11Z)
- Characterizing Satellite Geometry via Accelerated 3D Gaussian Splatting [0.0]
We present an approach for mapping satellites on orbit based on 3D Gaussian Splatting.
We demonstrate model training and 3D rendering performance on a hardware-in-the-loop satellite mock-up.
Our model is shown to be capable of training on-board and rendering higher quality novel views of an unknown satellite nearly 2 orders of magnitude faster than previous NeRF-based algorithms.
arXiv Detail & Related papers (2024-01-05T00:49:56Z)
- Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields [2.5432277893532116]
Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint.
NeRF-based models are susceptible to interference issues caused by colored "fog" noise.
Our approach, coined Enhance-NeRF, adopts a joint color scheme to balance the display of low- and high-reflectivity objects.
arXiv Detail & Related papers (2023-06-08T15:49:30Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- SU-Net: Pose estimation network for non-cooperative spacecraft on-orbit [8.671030148920009]
Spacecraft pose estimation plays a vital role in many on-orbit space missions, such as rendezvous and docking, debris removal, and on-orbit maintenance.
We analyze the radar image characteristics of spacecraft on orbit, then propose a new deep learning neural network structure named Dense Residual U-shaped Network (DR-U-Net) to extract image features.
We further introduce a novel neural network based on DR-U-Net, namely Spacecraft U-shaped Network (SU-Net) to achieve end-to-end pose estimation for non-cooperative spacecraft.
arXiv Detail & Related papers (2023-02-21T11:14:01Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper, we propose pose estimation software exploiting neural network architectures.
We show how low-power machine learning accelerators could enable the exploitation of Artificial Intelligence in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)