Leveraging Neural Radiance Fields for Pose Estimation of an Unknown Space Object during Proximity Operations
- URL: http://arxiv.org/abs/2405.12728v3
- Date: Tue, 11 Jun 2024 09:42:29 GMT
- Title: Leveraging Neural Radiance Fields for Pose Estimation of an Unknown Space Object during Proximity Operations
- Authors: Antoine Legrand, Renaud Detry, Christophe De Vleeschouwer
- Abstract summary: We present a novel method that enables an "off-the-shelf" spacecraft pose estimator to be applied to an unknown target.
We train the NeRF model using a sparse collection of images that depict the target, and in turn generate a large dataset that is diverse both in terms of viewpoint and illumination.
We demonstrate that our method successfully enables the training of an off-the-shelf spacecraft pose estimation network from a sparse set of images.
- Score: 14.624172952608653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the estimation of the 6D pose of an unknown target spacecraft relative to a monocular camera, a key step towards the autonomous rendezvous and proximity operations required by future Active Debris Removal missions. We present a novel method that enables an "off-the-shelf" spacecraft pose estimator, which normally requires knowledge of the target's CAD model, to be applied to an unknown target. Our method relies on an in-the-wild NeRF, i.e., a Neural Radiance Field that employs learnable appearance embeddings to represent the varying illumination conditions found in natural scenes. We train the NeRF model on a sparse collection of images depicting the target and, in turn, generate a large dataset that is diverse in both viewpoint and illumination. This dataset is then used to train the pose estimation network. We validate our method on the Hardware-In-the-Loop images of SPEED+, which emulate lighting conditions close to those encountered on orbit. We demonstrate that our method successfully enables the training of an off-the-shelf spacecraft pose estimation network from a sparse set of images. Furthermore, we show that a network trained using our method performs similarly to a model trained on synthetic images generated using the CAD model of the target.
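The dataset-generation step of the pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `render_nerf` is a hypothetical placeholder for a trained in-the-wild NeRF renderer, and the appearance embedding is simply a random vector standing in for a learned per-image illumination code.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_nerf(pose, appearance_embedding):
    """Placeholder for an in-the-wild NeRF renderer (hypothetical API).

    A real implementation would volume-render the learned radiance field
    from the camera pose, conditioned on the appearance embedding that
    encodes the illumination of the scene."""
    h, w = 32, 32
    # Deterministic dummy image derived from the inputs, for illustration only.
    seed = int(abs(pose[:3, 3].sum()) * 1e3 + abs(appearance_embedding.sum()) * 1e3) % (2**32)
    return np.random.default_rng(seed).random((h, w, 3))

def sample_pose():
    """Sample a random camera viewpoint around the target."""
    # Random rotation via QR decomposition of a Gaussian matrix.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    pose = np.eye(4)
    pose[:3, :3] = q
    pose[:3, 3] = np.array([0.0, 0.0, rng.uniform(5.0, 15.0)])  # camera distance
    return pose

def generate_training_set(n_images, embed_dim=16):
    """Render a pose-labelled dataset, diverse in viewpoint and illumination."""
    images, poses = [], []
    for _ in range(n_images):
        pose = sample_pose()
        # A fresh appearance embedding emulates a new illumination condition.
        z = rng.normal(size=embed_dim)
        images.append(render_nerf(pose, z))
        poses.append(pose)
    return np.stack(images), np.stack(poses)

images, poses = generate_training_set(100)
```

The resulting `(image, pose)` pairs would then feed the supervised training of the off-the-shelf pose estimation network, exactly as a CAD-rendered synthetic dataset would.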
Related papers
- StixelNExT: Toward Monocular Low-Weight Perception for Object Segmentation and Free Space Detection [8.684797433797744]
This study leverages the concept of the Stixel-World to recognize a medium-level representation of its surroundings.
Our network directly predicts a 2D multi-layer Stixel-World and is capable of recognizing multiple, superimposed objects within an image.
arXiv Detail & Related papers (2024-07-11T08:25:51Z) - Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our tracking approach enables free-viewpoint tracking of objects of interest, a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - OccNeRF: Advancing 3D Occupancy Prediction in LiDAR-Free Environments [77.0399450848749]
We propose OccNeRF, a method for training occupancy networks without 3D supervision.
We parameterize the reconstructed occupancy fields and reorganize the sampling strategy to align with the cameras' infinite perceptive range.
For semantic occupancy prediction, we design several strategies to polish the prompts and filter the outputs of a pretrained open-vocabulary 2D segmentation model.
arXiv Detail & Related papers (2023-12-14T18:58:52Z) - MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects.
Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
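The coarse estimation idea described above can be sketched as a scoring loop: render the object's CAD model at each candidate pose and keep the candidate that the classifier judges most "refinable". This is an illustrative stand-in, not the MegaPose implementation: `render` is a hypothetical renderer, and `refinable_score` replaces the learned classifier with a simple image-similarity proxy.

```python
import numpy as np

rng = np.random.default_rng(1)

def render(pose):
    """Hypothetical renderer: image of the object's CAD model at `pose`."""
    g = np.random.default_rng(int(abs(pose.sum()) * 1e3) % (2**32))
    return g.random((16, 16))

def refinable_score(rendering, observation):
    """Stand-in for the learned classifier that predicts whether the pose
    error between rendering and observation is small enough for the
    refiner to correct. Here: a naive similarity proxy, for illustration."""
    return -np.abs(rendering - observation).mean()

def coarse_pose_estimate(observation, candidate_poses):
    """Score each candidate's rendering and return the most refinable pose."""
    scores = [refinable_score(render(p), observation) for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```

The selected coarse pose would then be handed to the render&compare refiner for iterative correction.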
arXiv Detail & Related papers (2022-12-13T19:30:03Z) - Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z) - Generative Range Imaging for Learning Scene Priors of 3D LiDAR Data [3.9447103367861542]
This paper proposes a generative model of LiDAR range images applicable to the data-level domain transfer.
Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial network.
We demonstrate the fidelity and diversity of our model in comparison with the point-based and image-based state-of-the-art generative models.
arXiv Detail & Related papers (2022-10-21T06:08:39Z) - Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning [1.212848031108815]
We propose an end-to-end active visual tracking method based on the DQN algorithm, named DRLAVT.
It can guide the chaser spacecraft to approach an arbitrary non-cooperative space target relying solely on color or RGB-D images.
It significantly outperforms a position-based visual servoing baseline that adopts the state-of-the-art 2D monocular tracker SiamRPN.
arXiv Detail & Related papers (2021-12-18T06:12:24Z) - LSPnet: A 2D Localization-oriented Spacecraft Pose Estimation Neural Network [10.6872574091924]
This work explores a novel methodology that uses Convolutional Neural Networks (CNNs) to estimate the pose of uncooperative spacecraft.
Contrary to other approaches, the proposed CNN directly regresses poses without needing any prior 3D information.
The experiments performed show that this work competes with the state of the art in uncooperative spacecraft pose estimation.
arXiv Detail & Related papers (2021-04-19T12:46:05Z) - Supervised Training of Dense Object Nets using Optimal Descriptors for Industrial Robotic Applications [57.87136703404356]
Dense Object Nets (DONs) by Florence, Manuelli and Tedrake introduced dense object descriptors as a novel visual object representation for the robotics community.
In this paper we show that given a 3D model of an object, we can generate its descriptor space image, which allows for supervised training of DONs.
We compare the training methods on generating 6D grasps for industrial objects and show that our novel supervised training approach improves the pick-and-place performance in industry-relevant tasks.
arXiv Detail & Related papers (2021-02-16T11:40:12Z) - Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
arXiv Detail & Related papers (2020-11-30T20:46:48Z) - Assistive Relative Pose Estimation for On-orbit Assembly using Convolutional Neural Networks [0.0]
In this paper, a convolutional neural network is leveraged to determine the translation and rotation of an object of interest relative to the camera.
The simulation framework designed for the assembly task is used to generate a dataset for training the modified CNN models.
It is shown that the model performs comparable to the current feature-selection methods and can therefore be used in conjunction with them to provide more reliable estimates.
arXiv Detail & Related papers (2020-01-29T02:53:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.