Seeing Motion at Nighttime with an Event Camera
- URL: http://arxiv.org/abs/2404.11884v1
- Date: Thu, 18 Apr 2024 03:58:27 GMT
- Title: Seeing Motion at Nighttime with an Event Camera
- Authors: Haoyue Liu, Shihan Peng, Lin Zhu, Yi Chang, Hanyu Zhou, Luxin Yan
- Abstract summary: Event cameras react to dynamic changes with higher temporal resolution (microseconds) and higher dynamic range (120 dB)
We propose a nighttime event reconstruction network (NER-Net), which mainly includes a learnable event timestamps calibration module (LETC)
We construct a paired real low-light event dataset (RLED) through a co-axial imaging system, including 64,200 spatially and temporally aligned ground-truth images and low-light events.
- Score: 17.355331119296782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on a very challenging task: imaging dynamic scenes at nighttime. Most previous methods rely on low-light enhancement of a conventional RGB camera. However, they inevitably face a dilemma between the long exposure times required at night and the motion blur of dynamic scenes. Event cameras react to dynamic changes with higher temporal resolution (microseconds) and higher dynamic range (120 dB), offering an alternative solution. In this work, we present a novel nighttime dynamic imaging method with an event camera. Specifically, we discover that events at nighttime exhibit temporal trailing characteristics and a spatially non-stationary distribution. Consequently, we propose a nighttime event reconstruction network (NER-Net), which mainly includes a learnable event timestamps calibration module (LETC) to align the temporally trailing events and a non-uniform illumination aware module (NIAM) to stabilize the spatiotemporal distribution of events. Moreover, we construct a paired real low-light event dataset (RLED) through a co-axial imaging system, including 64,200 spatially and temporally aligned ground-truth images and low-light events. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods in terms of visual quality and generalization ability on real-world nighttime datasets. The project is available at: https://github.com/Liu-haoyue/NER-Net.
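The abstract does not detail how LETC operates internally; as a rough, hypothetical sketch of the timestamp-calibration idea (our own simplification, not the authors' module; see their repository for the real implementation), one could learn a per-pixel delay and subtract it from each event's timestamp:

```python
import torch
import torch.nn as nn

class TimestampCalibration(nn.Module):
    """Hypothetical sketch of learnable event-timestamp calibration.

    Nighttime events lag behind the true scene change ("temporal trailing"),
    so this toy module learns one delay per pixel and subtracts it from each
    event's timestamp. It illustrates the idea only; it is not the LETC
    module from NER-Net.
    """

    def __init__(self, height: int, width: int):
        super().__init__()
        # One learnable delay (seconds) per pixel, initialized to zero.
        self.delay = nn.Parameter(torch.zeros(height, width))

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (N, 4) rows of (x, y, t, polarity)
        x = events[:, 0].long()
        y = events[:, 1].long()
        t_corrected = events[:, 2] - self.delay[y, x]
        return torch.stack(
            [events[:, 0], events[:, 1], t_corrected, events[:, 3]], dim=1
        )

# Usage: calibrated events would then be voxelized and fed to the
# reconstruction network.
calib = TimestampCalibration(height=260, width=346)  # DAVIS346 resolution
events = torch.tensor([[10.0, 20.0, 0.0153, 1.0]])
print(calib(events))
```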
Related papers
- EvenNICER-SLAM: Event-based Neural Implicit Encoding SLAM [69.83383687049994]
We propose EvenNICER-SLAM, a novel approach to dense visual simultaneous localization and mapping.
EvenNICER-SLAM incorporates event cameras that respond to intensity changes instead of absolute brightness.
Our results suggest the potential for event cameras to improve the robustness of dense SLAM systems against fast camera motion in real-world scenarios.
arXiv Detail & Related papers (2024-10-04T13:52:01Z) - Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches.
arXiv Detail & Related papers (2024-09-26T15:57:20Z) - Temporal-Mapping Photography for Event Cameras [5.838762448259289]
Event cameras capture brightness changes as a continuous stream of "events" rather than traditional intensity frames.
We convert events into dense intensity images using a stationary event camera in static scenes.
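As background for the event-to-image conversion above, here is a toy sketch of plain event integration, assuming (x, y, t, polarity) events and a guessed contrast threshold; the paper's temporal-mapping method differs in detail:

```python
import numpy as np

def integrate_events(events, height, width, c=0.2):
    """Toy direct integration of events into a dense intensity image.

    Each event is one crossing of the contrast threshold c in log-intensity,
    so summing polarity-weighted thresholds per pixel and exponentiating
    gives a (relative) intensity image. Generic illustration only; the
    paper's temporal-mapping conversion differs in detail.

    events: iterable of (x, y, t, p) with p in {-1, +1}
    c: assumed contrast threshold (illustrative value)
    """
    log_img = np.zeros((height, width), dtype=np.float64)
    for x, y, _t, p in events:
        log_img[int(y), int(x)] += c * p
    return np.exp(log_img)

img = integrate_events([(5, 3, 0.01, +1), (5, 3, 0.02, +1)], height=8, width=8)
print(img[3, 5])  # ~= exp(0.4)
```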
arXiv Detail & Related papers (2024-03-11T05:29:46Z) - Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z) - EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometry-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z) - Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z) - Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring [7.187030024676791]
Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences.
While RGB cameras capture every frame at a fixed rate, event cameras only capture changes in the scene, resulting in sparse and asynchronous data output.
Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames by accumulating event data over a time period (a minimal sketch of this binning follows below).
Such flexible event-frame fusion is particularly useful for scenarios in which exposure times vary depending on factors such as lighting conditions or the presence of fast-moving objects in the scene.
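A hypothetical sketch of that binning step, assuming (x, y, t, polarity) event arrays; names and signature are illustrative, not this paper's code:

```python
import numpy as np

def events_to_frames(events, num_bins, height, width, t_start, t_end):
    """Bin events into num_bins signed 2-D frames over [t_start, t_end).

    events: array of shape (N, 4) with columns (x, y, t, p), p in {-1, +1}.
    Returns frames of shape (num_bins, height, width). Names and signature
    are illustrative, not this paper's code.
    """
    frames = np.zeros((num_bins, height, width), dtype=np.float32)
    duration = t_end - t_start
    for x, y, t, p in events:
        b = int((t - t_start) / duration * num_bins)
        b = min(max(b, 0), num_bins - 1)  # clamp events on the boundary
        frames[b, int(y), int(x)] += p    # signed accumulation per bin
    return frames

evts = np.array([[2.0, 1.0, 0.001, 1.0], [2.0, 1.0, 0.009, -1.0]])
print(events_to_frames(evts, num_bins=4, height=4, width=4,
                       t_start=0.0, t_end=0.01).shape)  # (4, 4, 4)
```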
arXiv Detail & Related papers (2023-06-01T15:57:12Z) - MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a bio-inspired sensor, the event camera records intensity changes asynchronously with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
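For background, prior event-based deblurring work relates a blurry frame to the latent sharp image through the event stream (the event-based double integral); MEFNet's abstract does not state its exact model, so the following is context rather than its formulation:

```latex
% B: blurry image at a pixel, L(t): latent sharp intensity, T: exposure time,
% c: contrast threshold, e(s): signed event impulses at that pixel.
\[
  B = \frac{1}{T} \int_{t_0}^{t_0 + T} L(t)\,\mathrm{d}t,
  \qquad
  L(t) = L(t_0)\,\exp\!\Big(c \int_{t_0}^{t} e(s)\,\mathrm{d}s\Big)
\]
```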
arXiv Detail & Related papers (2021-11-30T23:18:35Z) - ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z) - The Spatio-Temporal Poisson Point Process: A Simple Model for the Alignment of Event Camera Data [19.73526916714181]
Event cameras provide a natural and data efficient representation of visual information.
We propose a new model of event data that captures its natural spatio-temporal structure.
We show new state-of-the-art accuracy for rotational velocity estimation on the DAVIS 240C dataset.
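A toy illustration of scoring binned events under an inhomogeneous Poisson model (the generic idea only; the paper's rate model and alignment procedure are more involved):

```python
import numpy as np

def poisson_log_likelihood(rate_map, event_counts, bin_volume):
    """Log-likelihood of binned events under an inhomogeneous Poisson model,
    dropping the log(k!) term, which does not depend on the rate:
        sum_i [ k_i * log(lambda_i * V) - lambda_i * V ]
    Illustrative sketch of the generic idea, not the paper's estimator.
    """
    lam = rate_map * bin_volume
    return float(np.sum(event_counts * np.log(lam + 1e-12) - lam))

# Toy usage: a rate map matching the observed counts scores higher than a
# flat one. In an alignment setting, candidate motions warp the events into
# different count maps, and the best-aligned warp scores highest.
counts = np.array([[0.0, 5.0], [0.0, 0.0]])
print(poisson_log_likelihood(counts + 0.1, counts, 1.0))          # well matched
print(poisson_log_likelihood(np.full((2, 2), 1.25), counts, 1.0)) # flat rate
```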
arXiv Detail & Related papers (2021-06-13T00:43:27Z)