NaRPA: Navigation and Rendering Pipeline for Astronautics
- URL: http://arxiv.org/abs/2211.01566v1
- Date: Thu, 3 Nov 2022 03:07:21 GMT
- Title: NaRPA: Navigation and Rendering Pipeline for Astronautics
- Authors: Roshan Thomas Eapen, Ramchander Rao Bhaskara, Manoranjan Majji
- Abstract summary: NaRPA is a ray-tracing-based computer graphics engine to model and simulate light transport for space-borne imaging.
In addition to image rendering, the engine also possesses point cloud, depth, and contour map generation capabilities.
- Score: 4.282159812965446
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents Navigation and Rendering Pipeline for Astronautics
(NaRPA) - a novel ray-tracing-based computer graphics engine to model and
simulate light transport for space-borne imaging. NaRPA incorporates lighting
models with attention to atmospheric and shading effects for the synthesis of
space-to-space and ground-to-space virtual observations. In addition to image
rendering, the engine also possesses point cloud, depth, and contour map
generation capabilities to simulate passive and active vision-based sensors and
to facilitate the designing, testing, or verification of visual navigation
algorithms. Physically based rendering capabilities of NaRPA and the efficacy
of the proposed rendering algorithm are demonstrated using applications in
representative space-based environments. A key demonstration uses NaRPA as a
tool for generating stereo imagery and applies it to 3D coordinate estimation
using triangulation. In another prominent application, a novel differentiable
rendering approach for image-based attitude estimation is proposed to
highlight the efficacy of the NaRPA engine for simulating vision-based
navigation and guidance operations.
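The two demonstrations above can be sketched in a hedged way. First, the stereo-imagery demonstration rests on standard two-view triangulation; the snippet below shows a minimal linear (DLT) triangulation of a single 3D point. The camera intrinsics, the 0.5 m baseline, and the target point are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the triangulation step behind a stereo-imagery demonstration.
# Intrinsics, baseline, and target point are assumed for illustration only.
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation: recover a 3D point from two pixel projections.

    P_left, P_right : 3x4 projection matrices K [R | t].
    x_left, x_right : (u, v) pixel coordinates of the same scene point.
    """
    A = np.vstack([
        x_left[0]  * P_left[2]  - P_left[0],
        x_left[1]  * P_left[2]  - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P_left  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # camera at origin
P_right = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # 0.5 m baseline

X_true = np.array([0.2, -0.1, 10.0])                                     # assumed target [m]
X_est = triangulate(P_left, P_right, project(P_left, X_true), project(P_right, X_true))
print(X_est)   # recovers approximately [0.2, -0.1, 10.0]
```

Second, the differentiable rendering approach to attitude estimation can only be illustrated with a stand-in here: rather than differentiating a full ray-traced image as the paper describes, the toy below differentiates a pinhole projection of a few assumed body-frame feature points (via a numerical Jacobian and Gauss-Newton updates) to refine an attitude guess against synthetic image measurements.

```python
# Toy stand-in for image-based attitude estimation with a differentiable image
# model: refine Euler angles so projected body-frame points match "observed"
# measurements. All numbers below are illustrative assumptions.
import numpy as np

def rot(angles):
    """Rotation matrix from x-y-z Euler angles (radians)."""
    rx, ry, rz = angles
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def render(angles, pts, f=800.0):
    """The 'image model': rotate body-frame points, then project with a pinhole camera."""
    cam = pts @ rot(angles).T + np.array([0.0, 0.0, 5.0])   # place the target 5 m ahead
    return f * cam[:, :2] / cam[:, 2:3]

def residual(angles, pts, target):
    return (render(angles, pts) - target).ravel()

pts = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0.5]], dtype=float)  # assumed features
true_angles = np.array([0.10, -0.05, 0.20])
target = render(true_angles, pts)            # synthetic "observed" image measurements

est = np.zeros(3)                            # initial attitude guess
for _ in range(10):                          # Gauss-Newton with a numerical Jacobian
    r = residual(est, pts, target)
    J = np.column_stack([(residual(est + h, pts, target) - r) / 1e-6
                         for h in np.eye(3) * 1e-6])
    est -= np.linalg.lstsq(J, r, rcond=None)[0]
print(est)   # converges toward [0.10, -0.05, 0.20]
```

In the paper's setting the image model would be the ray-traced rendering itself; the numerical Jacobian here only keeps the sketch self-contained.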
Related papers
- Bridging Domain Gap for Flight-Ready Spaceborne Vision [4.14360329494344]
This work presents Spacecraft Pose Network v3 (SPNv3), a Neural Network (NN) for monocular pose estimation of a known, non-cooperative target spacecraft.
SPNv3 is designed and trained to be computationally efficient while providing robustness to spaceborne images that have not been observed during offline training and validation on the ground.
Experiments demonstrate that the final SPNv3 can achieve state-of-the-art pose accuracy on hardware-in-the-loop images from a robotic testbed while having trained exclusively on computer-generated synthetic images.
arXiv Detail & Related papers (2024-09-18T02:56:50Z) - An Autonomous Vision-Based Algorithm for Interplanetary Navigation [0.0]
Vision-based navigation algorithm is built by combining an orbit determination method with an image processing pipeline.
A novel analytical measurement model is developed providing a first-order approximation of the light-aberration and light-time effects.
Algorithm performance is tested on a high-fidelity, Earth--Mars interplanetary transfer.
arXiv Detail & Related papers (2023-09-18T08:54:29Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - 3D Reconstruction of Non-cooperative Resident Space Objects using
Instant NGP-accelerated NeRF and D-NeRF [0.0]
This work adapts Instant NeRF and D-NeRF, variations of the neural radiance field (NeRF) algorithm to the problem of mapping RSOs in orbit.
The algorithms are evaluated for 3D reconstruction quality and hardware requirements using datasets of images of a spacecraft mock-up.
arXiv Detail & Related papers (2023-01-22T05:26:08Z) - Deep Learning Computer Vision Algorithms for Real-time UAVs On-board
Camera Image Processing [77.34726150561087]
This paper describes how advanced deep learning based computer vision algorithms are applied to enable real-time on-board sensor processing for small UAVs.
All algorithms have been developed using state-of-the-art image processing methods based on deep neural networks.
arXiv Detail & Related papers (2022-11-02T11:10:42Z) - Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from
Depth Maps [66.24554680709417]
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications.
We propose a non-invasive framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera.
arXiv Detail & Related papers (2022-07-06T08:52:12Z) - UAVs Beneath the Surface: Cooperative Autonomy for Subterranean Search
and Rescue in DARPA SubT [5.145696432159643]
This paper presents a novel approach for autonomous cooperating UAVs in search and rescue operations in subterranean domains with complex topology.
The proposed system was ranked second in the Virtual Track of the DARPA SubT Finals as part of the team CTU-CRAS-NORLAB.
The proposed solution also proved to be a robust system for deployment onboard physical UAVs flying in the extremely harsh and confined environment of the real-world competition.
arXiv Detail & Related papers (2022-06-16T13:54:33Z) - Polyline Based Generative Navigable Space Segmentation for Autonomous
Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without a single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z) - Occupancy Anticipation for Efficient Exploration and Navigation [97.17517060585875]
We propose occupancy anticipation, where the agent uses its egocentric RGB-D observations to infer the occupancy state beyond the visible regions.
By exploiting context in both the egocentric views and top-down maps, our model successfully anticipates a broader map of the environment.
Our approach is the winning entry in the 2020 Habitat PointNav Challenge.
arXiv Detail & Related papers (2020-08-21T03:16:51Z) - Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible viewpoints for grasping by the use of hand-mounted RGB-D cameras.
A practical 3-stage transferable active grasping pipeline is developed, that is adaptive to unseen clutter scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.