Reducing the Sim-to-Real Gap for Event Cameras
- URL: http://arxiv.org/abs/2003.09078v5
- Date: Sat, 22 Aug 2020 05:49:32 GMT
- Title: Reducing the Sim-to-Real Gap for Event Cameras
- Authors: Timo Stoffregen, Cedric Scheerlinck, Davide Scaramuzza, Tom Drummond,
Nick Barnes, Lindsay Kleeman, Robert Mahony
- Abstract summary: Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called 'events' with unparalleled low latency.
Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events.
We present strategies for improving training data for event-based CNNs that yield a 20-40% boost in the performance of existing video reconstruction networks.
- Score: 64.89183456212069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are paradigm-shifting novel sensors that report asynchronous,
per-pixel brightness changes called 'events' with unparalleled low latency.
This makes them ideal for high speed, high dynamic range scenes where
conventional cameras would fail. Recent work has demonstrated impressive
results using Convolutional Neural Networks (CNNs) for video reconstruction and
optic flow with events. We present strategies for improving training data for
event-based CNNs that result in a 20-40% boost in the performance of existing
state-of-the-art (SOTA) video reconstruction networks retrained with our
method, and up to 15% for optic flow networks. A challenge in evaluating
event-based video reconstruction is the lack of high-quality ground truth
images in existing datasets. To address this, we present a new High Quality
Frames (HQF) dataset, containing events and ground truth frames from a
DAVIS240C that are well-exposed and minimally motion-blurred. We evaluate our
method on HQF plus several existing major event camera datasets.
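The key quantity behind such training-data strategies is the event generation model that simulators implement. Below is a minimal Python sketch of the standard contrast-threshold model: an event (x, y, t, p) fires whenever a pixel's log intensity drifts by a threshold C from its last reference value. The function name and frame-based sampling are illustrative simplifications, not the authors' simulator; real simulators also interpolate event timestamps between frames.

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-6):
    """Toy contrast-threshold event generation (illustrative, not the
    authors' simulator). Emits (x, y, t, p) whenever log intensity moves
    by C relative to the per-pixel reference level."""
    ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - ref
        n = np.floor(np.abs(diff) / C).astype(int)  # thresholds crossed per pixel
        ys, xs = np.nonzero(n)
        for y, x in zip(ys, xs):
            p = 1 if diff[y, x] > 0 else -1
            events.extend([(x, y, t, p)] * n[y, x])
        ref += np.sign(diff) * n * C  # advance reference by the fired steps
    return events
```

A mismatch between the threshold statistics used in simulation and those of the real sensor is one concrete source of the sim-to-real gap that retraining strategies like the paper's must contend with.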
Related papers
- EventHDR: From Event to High-Speed HDR Videos and Beyond [36.9225017403252]
We present a recurrent convolutional neural network that reconstructs high-speed HDR videos from event sequences.
We also develop a new optical system to collect a real-world dataset of paired high-speed HDR videos and event streams.
arXiv Detail & Related papers (2024-09-25T15:32:07Z)
- Gradient events: improved acquisition of visual information in event cameras [0.0]
We propose a new type of event, the gradient event, which benefits from the same properties as a conventional brightness event.
We show that the gradient event-based video reconstruction outperforms existing state-of-the-art brightness event-based methods by a significant margin.
arXiv Detail & Related papers (2024-09-03T10:18:35Z)
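The abstract does not spell out how a gradient event is triggered. As a purely hypothetical illustration of the general idea, the sketch below fires events where the spatial gradient magnitude, rather than the brightness, changes by more than a threshold between frames; the trigger rule and all names are assumptions, not the paper's definition.

```python
import numpy as np

def gradient_events(prev_frame, frame, t, C=0.1):
    """Hypothetical 'gradient event' trigger (an assumption for
    illustration, not the paper's rule): emit +/-1 events where the
    spatial gradient magnitude changes by at least C between frames."""
    g0 = np.hypot(*np.gradient(prev_frame.astype(np.float64)))
    g1 = np.hypot(*np.gradient(frame.astype(np.float64)))
    diff = g1 - g0
    ys, xs = np.nonzero(np.abs(diff) >= C)
    return [(x, y, t, int(np.sign(diff[y, x]))) for y, x in zip(ys, xs)]
```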
- E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning [53.63364311738552]
Bio-inspired event cameras or dynamic vision sensors are capable of capturing per-pixel brightness changes (called event-streams) in high temporal resolution and high dynamic range.
It calls for events-to-video (E2V) solutions which take event-streams as input and generate high quality video frames for intuitive visualization.
We propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events.
arXiv Detail & Related papers (2024-01-16T05:10:50Z)
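E2V networks consume sparse event streams through a dense tensor encoding. A common choice, assumed here for illustration (E2HQV's exact input encoding may differ), is the voxel grid, which bins event polarities into a fixed number of temporal channels with bilinear weighting in time:

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate (x, y, t, p) events into a (num_bins, H, W) grid,
    splitting each event's polarity between its two nearest time bins."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    xs, ys, ts, ps = (np.asarray(a, dtype=np.float64) for a in zip(*events))
    t = (ts - ts.min()) / max(ts.max() - ts.min(), 1e-9) * (num_bins - 1)
    lo = np.floor(t).astype(int)
    hi = np.clip(lo + 1, 0, num_bins - 1)
    w_hi = t - lo
    ix, iy = xs.astype(int), ys.astype(int)
    np.add.at(grid, (lo, iy, ix), (1.0 - w_hi) * ps)
    np.add.at(grid, (hi, iy, ix), w_hi * ps)
    return grid
```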
with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
- HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks [16.432164340779266]
We propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction.
Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module.
arXiv Detail & Related papers (2023-05-10T18:00:06Z)
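To make the hypernetwork idea concrete, here is a minimal PyTorch sketch, a simplified stand-in rather than the HyperE2VID architecture: a 1x1 convolution predicts a softmax-normalized k x k filter for every pixel from a context tensor, and the filters are applied to the feature map via unfold.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelFilter(nn.Module):
    """Sketch of per-pixel adaptive filtering driven by a hypernetwork
    (simplified illustration, not the HyperE2VID architecture)."""
    def __init__(self, ctx_channels, k=3):
        super().__init__()
        self.k = k
        self.hyper = nn.Conv2d(ctx_channels, k * k, kernel_size=1)

    def forward(self, feat, context):
        B, C, H, W = feat.shape
        w = torch.softmax(self.hyper(context), dim=1)          # (B, k*k, H, W)
        patches = F.unfold(feat, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
        patches = patches.view(B, C, self.k * self.k, H * W)
        out = (patches * w.view(B, 1, self.k * self.k, H * W)).sum(dim=2)
        return out.view(B, C, H, W)
```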
- E2V-SDE: From Asynchronous Events to Fast and Continuous Video Reconstruction via Neural Stochastic Differential Equations [23.866475611205736]
Event cameras respond to brightness changes in the scene asynchronously and independently for every pixel.
E2V-SDE can rapidly reconstruct images at arbitrary time steps and make realistic predictions on unseen data.
In terms of image quality, the LPIPS score improves by up to 12% and the reconstruction speed is 87% higher than that of ET-Net.
arXiv Detail & Related papers (2022-06-15T15:05:10Z)
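LPIPS, the perceptual metric quoted above, compares deep features of an image pair; lower is better. A minimal usage sketch with the reference `lpips` package (random tensors stand in for a reconstruction and its ground-truth frame):

```python
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')           # AlexNet backbone, common default
img0 = torch.rand(1, 3, 256, 256) * 2 - 1   # inputs must be scaled to [-1, 1]
img1 = torch.rand(1, 3, 256, 256) * 2 - 1
with torch.no_grad():
    dist = loss_fn(img0, img1)              # lower = perceptually closer
print(dist.item())
```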
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records intensity changes asynchronously with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
- Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high-speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
arXiv Detail & Related papers (2020-09-17T13:30:05Z)
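The photometric constancy behind this self-supervision links accumulated events to optical flow. In the standard formulation from the event-vision literature (sketched here, not copied from the paper), each event contributes a signed contrast step C, and the brightness increment over a window Δt should be explained by the flow:

```latex
\Delta L(\mathbf{x}) = \sum_{e_k \in \mathcal{W}(\mathbf{x},\,\Delta t)} p_k\, C,
\qquad
\Delta L(\mathbf{x}) \approx -\nabla L(\mathbf{x}) \cdot \mathbf{v}(\mathbf{x})\,\Delta t .
```

A flow network can thus be trained by minimizing the mismatch between the event-accumulated increment and the flow-predicted one, with no ground-truth flow required.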
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
Their outputs, however, are still low-resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances their quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.