On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing
- URL: http://arxiv.org/abs/2308.00394v1
- Date: Tue, 1 Aug 2023 09:14:20 GMT
- Title: On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing
- Authors: Lo\"ic J. Azzalini and Emmanuel Blazquez and Alexander Hadjiivanov and
Gabriele Meoni and Dario Izzo
- Abstract summary: This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
- Score: 69.34740063574921
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An event-based camera outputs an event whenever a change in scene brightness
of a preset magnitude is detected at a particular pixel location in the sensor
plane. The resulting sparse and asynchronous output coupled with the high
dynamic range and temporal resolution of this novel camera motivate the study
of event-based cameras for navigation and landing applications. However, the
lack of real-world and synthetic datasets to support this line of research has
limited its consideration for onboard use. This paper presents a methodology
and a software pipeline for generating event-based vision datasets from optimal
landing trajectories during the approach of a target body. We construct
sequences of photorealistic images of the lunar surface with the Planet and
Asteroid Natural Scene Generation Utility at different viewpoints along a set
of optimal descent trajectories obtained by varying the boundary conditions.
The generated image sequences are then converted into event streams by means of
an event-based camera emulator. We demonstrate that the pipeline can generate
realistic event-based representations of surface features by constructing a
dataset of 500 trajectories, complete with event streams and motion field
ground truth data. We anticipate that novel event-based vision datasets can be
generated using this pipeline to support various spacecraft pose reconstruction
problems given events as input, and we hope that the proposed methodology will
attract the attention of researchers working at the intersection of
neuromorphic vision and guidance, navigation, and control.
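The emulator itself is not named in this listing, but the frame-to-event conversion described above follows the standard event-camera model: a pixel emits an event whenever its log-brightness has changed by a preset contrast threshold since the last event at that location. The following minimal Python sketch illustrates that generic model only; the function name frames_to_events, the threshold C, and the per-frame timestamps are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch of the standard event-generation model (not the
# authors' emulator): a pixel fires an event whenever its log-brightness
# changes by at least a contrast threshold C since its last event.
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Convert grayscale frames (H, W) into events (t, x, y, polarity).

    Polarity is +1 for a brightness increase and -1 for a decrease,
    mirroring the sparse, asynchronous output of an event-based camera.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_cur = np.log(frame.astype(np.float64) + eps)
        diff = log_cur - log_ref
        # Pixels whose log-brightness moved by at least one threshold step.
        ys, xs = np.nonzero(np.abs(diff) >= C)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            n = int(abs(diff[y, x]) // C)  # large changes emit several events
            events.extend([(t, int(x), int(y), polarity)] * n)
            log_ref[y, x] += polarity * n * C  # advance reference by the quantised change
    events.sort(key=lambda e: e[0])  # asynchronous stream ordered by time
    return events

# Example: one pixel doubles in brightness between two 2x2 frames.
f0 = np.full((2, 2), 100.0)
f1 = f0.copy()
f1[0, 0] = 200.0
print(frames_to_events([f0, f1], [0.0, 0.01]))
# log(200/100) ~= 0.69, so with C = 0.2 pixel (0, 0) emits three positive events.
```

Real emulators additionally interpolate log-intensity between consecutive frames to assign sub-frame timestamps and model sensor noise; in this sketch every event simply inherits the timestamp of the frame that triggered it.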
Related papers
- ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras [33.81592783496106]
Event-based visual odometry aims at solving tracking and mapping sub-problems in parallel.
We build an event-based stereo visual-inertial odometry system on top of our previous direct pipeline Event-based Stereo Visual Odometry.
arXiv Detail & Related papers (2024-10-12T05:35:27Z) - Research, Applications and Prospects of Event-Based Pedestrian Detection: A Survey [10.494414329120909]
Event-based cameras, inspired by the biological retina, have evolved into cutting-edge sensors distinguished by their minimal power requirements, negligible latency, superior temporal resolution, and expansive dynamic range.
Event-based cameras address the limitations of conventional frame-based sensors by eschewing extraneous data transmission and avoiding motion blur in high-speed imaging scenarios.
This paper offers an exhaustive review of event-based pedestrian detection research and applications, particularly in the autonomous driving context.
arXiv Detail & Related papers (2024-07-05T06:17:00Z) - XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z) - EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z) - Tracking Particles Ejected From Active Asteroid Bennu With Event-Based
Vision [6.464577943887317]
The OSIRIS-REx spacecraft relied on the analysis of images captured by onboard navigation cameras to detect particle ejection events.
This work proposes an event-based solution that is dedicated to detection and tracking of centimetre-sized particles.
arXiv Detail & Related papers (2023-09-13T09:07:42Z) - Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z) - Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z) - Transferable Active Grasping and Real Embodied Dataset [48.887567134129306]
We show how to search for feasible viewpoints for grasping by the use of hand-mounted RGB-D cameras.
A practical 3-stage transferable active grasping pipeline is developed that is adaptive to unseen cluttered scenes.
In our pipeline, we propose a novel mask-guided reward to overcome the sparse reward issue in grasping and ensure category-irrelevant behavior.
arXiv Detail & Related papers (2020-04-28T08:15:35Z) - End-to-end Learning of Object Motion Estimation from Retinal Events for
Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
arXiv Detail & Related papers (2020-02-14T08:19:50Z) - Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for
Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods based on conventional cameras or event cameras, and with two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)