Vision-based Neural Scene Representations for Spacecraft
- URL: http://arxiv.org/abs/2105.06405v1
- Date: Tue, 11 May 2021 08:35:05 GMT
- Title: Vision-based Neural Scene Representations for Spacecraft
- Authors: Anne Mergy, Gurvan Lecuyer, Dawa Derksen, Dario Izzo
- Abstract summary: In advanced mission concepts, spacecraft need to internally model the pose and shape of nearby orbiting objects.
Recent works in neural scene representations show promising results for inferring generic three-dimensional scenes from optical images.
We compare and evaluate the potential of NeRF and GRAF to render novel views and extract the 3D shape of two different spacecraft.
- Score: 1.0323063834827415
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In advanced mission concepts with high levels of autonomy, spacecraft need to
internally model the pose and shape of nearby orbiting objects. Recent works in
neural scene representations show promising results for inferring generic
three-dimensional scenes from optical images. Neural Radiance Fields (NeRF)
have shown success in rendering highly specular surfaces using a large number
of images and their pose. More recently, Generative Radiance Fields (GRAF)
achieved full volumetric reconstruction of a scene from unposed images only,
thanks to the use of an adversarial framework to train a NeRF. In this paper,
we compare and evaluate the potential of NeRF and GRAF to render novel views
and extract the 3D shape of two different spacecraft, the Soil Moisture and
Ocean Salinity satellite of ESA's Living Planet Programme and a generic
CubeSat. Comparing the best-performing configurations of both models, we observe
that NeRF renders images that more accurately capture the material specularity
of the spacecraft and its pose. GRAF, in turn, generates precise novel views
with accurate detail even when parts of the satellite are in shadow, while
having the significant advantage of not needing any information about the
relative pose.
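Both NeRF and GRAF render an image by volumetrically integrating a learned radiance field along camera rays; GRAF differs mainly in that the field is trained adversarially from unposed images. As a point of reference only, below is a minimal NumPy sketch of the standard volume-rendering quadrature used by NeRF-style models; the radiance_field callable and all parameter names are placeholders, not the implementation evaluated in the paper.

```python
import numpy as np

def volume_render(radiance_field, ray_o, ray_d, near, far, n_samples=64):
    """Composite one camera ray with the standard NeRF quadrature.

    radiance_field(points, dirs) -> (rgb, sigma): any callable mapping 3D
    points and view directions to colours in [0, 1] and non-negative
    densities. This is a generic sketch, not the paper's architecture.
    """
    # Sample distances along the ray between the near and far bounds.
    t = np.linspace(near, far, n_samples)                      # (n,)
    pts = ray_o[None, :] + t[:, None] * ray_d[None, :]         # (n, 3)
    dirs = np.broadcast_to(ray_d, pts.shape)                   # (n, 3)

    rgb, sigma = radiance_field(pts, dirs)                     # (n, 3), (n,)

    # Distances between adjacent samples; the last interval is open-ended.
    delta = np.append(t[1:] - t[:-1], 1e10)                    # (n,)

    # alpha_i = 1 - exp(-sigma_i * delta_i); T_i is accumulated transmittance.
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1] + 1e-10))[:n_samples]
    weights = alpha * trans                                     # (n,)

    color = (weights[:, None] * rgb).sum(axis=0)                # composited RGB
    depth = (weights * t).sum()                                 # expected depth
    return color, depth, weights
```

The per-sample weights (or the underlying density field) are also the usual starting point for shape extraction, for example by thresholding the density and running marching cubes, which is one common way to obtain the kind of 3D geometry the paper evaluates.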
Related papers
- DreamSat: Towards a General 3D Model for Novel View Synthesis of Space Objects [0.6986413087958454]
We present a novel approach to 3D spacecraft reconstruction from single-view images, DreamSat.
We fine-tune Zero123 XL, a state-of-the-art single-view reconstruction model, on a dataset of 190 high-quality spacecraft models.
This approach maintains the efficiency of the DreamGaussian framework while enhancing the accuracy and detail of spacecraft reconstructions.
arXiv Detail & Related papers (2024-10-07T14:51:54Z)
- BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling [0.0]
We introduce BRDF-NeRF, which incorporates the physically-based semi-empirical Rahman-Pinty-Verstraete (RPV) BRDF model.
BRDF-NeRF successfully synthesizes novel views from unseen angles and generates high-quality digital surface models.
arXiv Detail & Related papers (2024-09-18T14:28:52Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, and then transforms them into a scene representation in a feed-forward manner.
Experiments in two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation (a generic tri-plane lookup is sketched after this list).
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes.
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- rpcPRF: Generalizable MPI Neural Radiance Field for Satellite Camera [0.76146285961466]
This paper presents rpcPRF, a Multiplane Images (MPI)-based planar neural radiance field for the Rational Polynomial Camera (RPC) model (the MPI compositing step is sketched after this list).
We propose to use reprojection supervision to induce the predicted MPI to learn the correct geometry between the 3D coordinates and the images.
We remove the stringent requirement of dense depth supervision from deep multiview-stereo-based methods by introducing rendering techniques of radiance fields.
arXiv Detail & Related papers (2023-10-11T04:05:11Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods struggle in scenes that contain reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces (see the composition sketch after this list).
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- 3D Reconstruction of Non-cooperative Resident Space Objects using Instant NGP-accelerated NeRF and D-NeRF [0.0]
This work adapts Instant NeRF and D-NeRF, variants of the neural radiance field (NeRF) algorithm, to the problem of mapping resident space objects (RSOs) in orbit.
The algorithms are evaluated for 3D reconstruction quality and hardware requirements using datasets of images of a spacecraft mock-up.
arXiv Detail & Related papers (2023-01-22T05:26:08Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively (a schematic of this decoupling is sketched after this list).
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- SPEED+: Next Generation Dataset for Spacecraft Pose Estimation across Domain Gap [0.9449650062296824]
This paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap.
SPEED+ includes 9,531 hardware-in-the-loop images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility.
TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels.
arXiv Detail & Related papers (2021-10-06T23:22:24Z)
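For the TriHuman entry above, the following is a minimal sketch of the generic tri-plane lookup that such representations build on: a 3D point is projected onto three axis-aligned feature planes, features are bilinearly interpolated from each plane and summed, and a small decoder (omitted here) would map the result to colour and density. The non-rigid warping into an undeformed texture space and the skeletal-motion conditioning described in the paper are not shown; the plane resolutions and the additive aggregation are assumptions of this sketch.

```python
import numpy as np

def bilerp(plane, uv):
    """Bilinearly interpolate a (H, W, C) feature plane at uv in [0, 1]^2."""
    H, W, _ = plane.shape
    x, y = uv[0] * (W - 1), uv[1] * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0] + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0] + wx * wy * plane[y1, x1])

def triplane_features(planes_xy_xz_yz, p):
    """Aggregate features for a point p in [0, 1]^3 from three axis-aligned planes."""
    xy, xz, yz = planes_xy_xz_yz
    return (bilerp(xy, (p[0], p[1]))
            + bilerp(xz, (p[0], p[2]))
            + bilerp(yz, (p[1], p[2])))  # a small MLP would decode this to (rgb, sigma)

# Example: three random 64x64 planes with 16 feature channels each.
planes = [np.random.rand(64, 64, 16) for _ in range(3)]
feat = triplane_features(planes, np.array([0.2, 0.5, 0.8]))  # -> (16,)
```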
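For the rpcPRF entry, the sketch below illustrates only the multiplane-image part: an MPI is a stack of fronto-parallel RGBA layers composited front to back with the "over" operator. The RPC camera geometry and the reprojection supervision that the paper actually contributes are not modelled here; the layer ordering and value ranges are assumptions of this sketch.

```python
import numpy as np

def composite_mpi(rgba_layers):
    """Composite an MPI front-to-back with the 'over' operator.

    rgba_layers: (D, H, W, 4) array ordered from the nearest to the farthest
    plane, with RGB and alpha in [0, 1]. Returns an (H, W, 3) image.
    """
    rgb, alpha = rgba_layers[..., :3], rgba_layers[..., 3:4]
    out = np.zeros(rgba_layers.shape[1:3] + (3,))
    transmittance = np.ones(rgba_layers.shape[1:3] + (1,))
    for d in range(rgba_layers.shape[0]):          # front (nearest) to back
        out += transmittance * alpha[d] * rgb[d]   # add this layer's visible part
        transmittance *= (1.0 - alpha[d])          # light blocked for layers behind
    return out

# Example: 32 layers over a 4x4 image.
image = composite_mpi(np.random.rand(32, 4, 4, 4))
```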
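For the Multi-Space Neural Radiance Fields entry, the summary's "group of feature fields in parallel sub-spaces" can be pictured as K renderings merged per pixel, so that reflected content can live in its own sub-space. The sketch below uses a simple softmax gate over per-sub-space scores; the gate, the names, and the per-sub-space rendering are illustrative assumptions rather than MS-NeRF's actual decoder.

```python
import numpy as np

def compose_subspaces(sub_rgbs, sub_scores):
    """Merge per-sub-space renderings of one pixel into a single colour.

    sub_rgbs:   (K, 3) colour rendered from each parallel sub-space field.
    sub_scores: (K,)   scalar score rendered alongside each colour.
    The softmax gating is an illustrative assumption, not the paper's design.
    """
    w = np.exp(sub_scores - sub_scores.max())
    w /= w.sum()
    return (w[:, None] * sub_rgbs).sum(axis=0)

# Example: a mirror pixel whose reflected content dominates a second sub-space.
print(compose_subspaces(np.array([[0.9, 0.1, 0.1], [0.1, 0.1, 0.9]]),
                        np.array([0.2, 2.0])))
```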
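For the CLONeR entry, the decoupling it describes can be pictured as two independent networks queried along each ray: a geometry network intended to be supervised with LiDAR and a colour network intended to be supervised with camera pixels, with a coarse occupancy grid used to skip empty space when placing samples. Everything below (names, sampling scheme, grid interface) is our own schematic, not CLONeR's architecture or training code.

```python
import numpy as np

def render_ray_decoupled(occupancy_mlp, color_mlp, occ_grid, ray_o, ray_d,
                         near, far, n_samples=32):
    """Render one ray with separate geometry and appearance networks.

    occupancy_mlp(x) -> sigma      (intended to be supervised with LiDAR)
    color_mlp(x, d)  -> rgb        (intended to be supervised with camera pixels)
    occ_grid.occupied(x) -> bool   (coarse occupancy grid for empty-space skipping)
    All three are placeholders; this is a schematic, not CLONeR itself.
    """
    t = np.linspace(near, far, n_samples)
    pts = ray_o + t[:, None] * ray_d

    # Keep only samples that fall in occupied grid cells (empty-space skipping).
    keep = np.array([occ_grid.occupied(p) for p in pts])
    if not keep.any():
        return np.zeros(3)
    t, pts = t[keep], pts[keep]

    sigma = np.array([occupancy_mlp(p) for p in pts])
    rgb = np.array([color_mlp(p, ray_d) for p in pts])

    # Standard volumetric compositing over the surviving samples.
    delta = np.append(np.diff(t), 1e10)
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))[:len(alpha)]
    return ((alpha * trans)[:, None] * rgb).sum(axis=0)
```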
This list is automatically generated from the titles and abstracts of the papers on this site.