Enhanced Frame and Event-Based Simulator and Event-Based Video
Interpolation Network
- URL: http://arxiv.org/abs/2112.09379v1
- Date: Fri, 17 Dec 2021 08:27:13 GMT
- Title: Enhanced Frame and Event-Based Simulator and Event-Based Video
Interpolation Network
- Authors: Adam Radomski, Andreas Georgiou, Thomas Debrunner, Chenghan Li, Luca
Longinotti, Minwon Seo, Moosung Kwak, Chang-Woo Shin, Paul K. J. Park,
Hyunsurk Eric Ryu, Kynan Eng
- Abstract summary: We present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets.
It includes a new frame-based image sensor model with realistic image quality reduction effects, and an extended DVS model with more accurate characteristics.
We show that data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of equivalent or better quality than the state of the art.
- Score: 1.4095425725284465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fast neuromorphic event-based vision sensors (Dynamic Vision Sensor, DVS) can
be combined with slower conventional frame-based sensors to enable
higher-quality inter-frame interpolation than traditional methods relying on
fixed motion approximations using e.g. optical flow. In this work we present a
new, advanced event simulator that can produce realistic scenes recorded by a
camera rig with an arbitrary number of sensors located at fixed offsets. It
includes a new configurable frame-based image sensor model with realistic image
quality reduction effects, and an extended DVS model with more accurate
characteristics. We use our simulator to train a novel reconstruction model
designed for end-to-end reconstruction of high-fps video. Unlike previously
published methods, our method does not require the frame and DVS cameras to
have the same optics, positions, or camera resolutions. It is also not limited
to objects at a fixed distance from the sensor. We show that data generated by our
simulator can be used to train our new model, leading to reconstructed images
on public datasets of equivalent or better quality than the state of the art.
We also show that our model generalizes to data recorded by real sensors.
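For intuition, the sketch below illustrates the idealized per-pixel model that event simulators and event-based reconstruction pipelines of this kind build on: an event is emitted whenever the log-intensity change at a pixel crosses a contrast threshold, and an intermediate frame can be recovered by integrating event polarities onto a keyframe. This is a minimal illustrative sketch, not the paper's simulator or network; the threshold value, the linear timestamp spacing, and all function names are assumptions made for the example.

```python
import numpy as np

# Assumed DVS contrast threshold in log-intensity units (illustrative only).
CONTRAST_THRESHOLD = 0.15

def generate_events(frame0, frame1, t0=0.0, t1=1.0, eps=1e-3):
    """Emit (x, y, t, polarity) events wherever the log-intensity change
    between two frames exceeds the contrast threshold."""
    log0 = np.log(frame0.astype(np.float64) + eps)
    log1 = np.log(frame1.astype(np.float64) + eps)
    delta = log1 - log0
    # Each full threshold crossing produces one event; timestamps are spread
    # linearly between the two frames (a crude stand-in for true asynchrony).
    n_crossings = np.floor(np.abs(delta) / CONTRAST_THRESHOLD).astype(int)
    events = []
    for (y, x) in zip(*np.nonzero(n_crossings)):
        polarity = 1 if delta[y, x] > 0 else -1
        for k in range(1, n_crossings[y, x] + 1):
            t = t0 + (t1 - t0) * k / (n_crossings[y, x] + 1)
            events.append((x, y, t, polarity))
    events.sort(key=lambda e: e[2])
    return events

def interpolate_frame(frame0, events, t_query, eps=1e-3):
    """Reconstruct the frame at t_query by integrating event polarities
    onto the log intensity of the earlier keyframe."""
    log_img = np.log(frame0.astype(np.float64) + eps)
    for x, y, t, polarity in events:
        if t <= t_query:
            log_img[y, x] += polarity * CONTRAST_THRESHOLD
    return np.exp(log_img) - eps

if __name__ == "__main__":
    f0 = np.random.rand(8, 8)           # toy "previous" frame
    f1 = np.clip(f0 + 0.2, 0.0, 1.0)    # toy "next" frame (brightened)
    evs = generate_events(f0, f1)
    mid = interpolate_frame(f0, evs, t_query=0.5)
    print(f"{len(evs)} events, mid-frame mean intensity {mid.mean():.3f}")
```

A learned reconstruction model such as the one described above replaces the naive event integration step with a network, which also allows the frame and event streams to come from sensors with different optics, positions, and resolutions.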
Related papers
- Motion Segmentation for Neuromorphic Aerial Surveillance [42.04157319642197]
Event cameras offer superior temporal resolution, superior dynamic range, and minimal power requirements.
Unlike traditional frame-based sensors that capture redundant information at fixed intervals, event cameras asynchronously record pixel-level brightness changes.
We introduce a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information.
arXiv Detail & Related papers (2024-05-24T04:36:13Z)
- Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods.
arXiv Detail & Related papers (2024-03-20T06:19:41Z)
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach could synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose Dynamic Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN)
DynaMoN handles dynamic content for initial camera pose estimation and statics-focused ray sampling for fast and accurate novel-view synthesis.
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
arXiv Detail & Related papers (2023-09-16T08:46:59Z)
- An Asynchronous Linear Filter Architecture for Hybrid Event-Frame Cameras [9.69495347826584]
We present an asynchronous linear filter architecture, fusing event and frame camera data, for HDR video reconstruction and spatial convolution.
The proposed AKF pipeline outperforms other state-of-the-art methods in both absolute intensity error (69.4% reduction) and image similarity indexes (average 35.5% improvement).
arXiv Detail & Related papers (2023-09-03T12:37:59Z)
- On the Generation of a Synthetic Event-Based Vision Dataset for Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z)
- Video Reconstruction from a Single Motion Blurred Image using Learned Dynamic Phase Coding [34.76550131783525]
We propose a hybrid optical-digital method for video reconstruction using a single motion-blurred image.
We use a learned dynamic phase-coding in the lens aperture during the image acquisition to encode the motion trajectories.
The proposed computational camera generates a sharp frame burst of the scene at various frame rates from a single coded motion-blurred image.
arXiv Detail & Related papers (2021-12-28T02:06:44Z)
- TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)