HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
- URL: http://arxiv.org/abs/2305.06382v2
- Date: Tue, 20 Feb 2024 12:38:00 GMT
- Title: HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks
- Authors: Burak Ercan, Onur Eker, Canberk Saglam, Aykut Erdem, Erkut Erdem
- Abstract summary: We propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction.
Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module.
- Score: 16.432164340779266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event-based cameras are becoming increasingly popular for their ability to
capture high-speed motion with low latency and high dynamic range. However,
generating videos from events remains challenging due to the highly sparse and
varying nature of event data. To address this, in this study, we propose
HyperE2VID, a dynamic neural network architecture for event-based video
reconstruction. Our approach uses hypernetworks to generate per-pixel adaptive
filters guided by a context fusion module that combines information from event
voxel grids and previously reconstructed intensity images. We also employ a
curriculum learning strategy to train the network more robustly. Our
comprehensive experimental evaluations across various benchmark datasets reveal
that HyperE2VID not only surpasses current state-of-the-art methods in terms of
reconstruction quality but also achieves this with fewer parameters, reduced
computational requirements, and accelerated inference times.
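The core mechanism described in the abstract, per-pixel adaptive filters predicted by a hypernetwork from a context that fuses event voxel grids with the previously reconstructed frame, can be illustrated as a dynamic convolution. The sketch below is a minimal PyTorch illustration, not the authors' implementation; the module name, channel sizes, 3x3 kernel, and softmax normalization of the predicted filters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerPixelDynamicConv(nn.Module):
    """Applies a spatially varying k x k filter predicted from a context tensor (illustrative sketch)."""
    def __init__(self, ctx_ch, k=3):
        super().__init__()
        self.k = k
        # Hypernetwork head: predicts one k*k kernel per spatial location from the context.
        self.filter_pred = nn.Conv2d(ctx_ch, k * k, kernel_size=1)

    def forward(self, feat, ctx):
        b, c, h, w = feat.shape
        # Per-pixel kernels, normalized so each sums to 1 (an assumption).
        filters = F.softmax(self.filter_pred(ctx), dim=1)      # (B, k*k, H, W)
        # Gather k x k neighbourhoods of the feature map.
        patches = F.unfold(feat, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        filters = filters.view(b, 1, self.k * self.k, h * w)
        # Weighted sum over each neighbourhood with its own per-pixel kernel.
        return (patches * filters).sum(dim=2).view(b, c, h, w)

# Hypothetical usage: the context would be a fusion (e.g., concatenation) of event-voxel
# features and features from the previously reconstructed intensity frame.
feat = torch.randn(1, 32, 64, 64)               # decoder features
ctx = torch.randn(1, 48, 64, 64)                # fused context
out = PerPixelDynamicConv(ctx_ch=48)(feat, ctx)
print(out.shape)                                # torch.Size([1, 32, 64, 64])
```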
Related papers
- EventHDR: from Event to High-Speed HDR Videos and Beyond [36.9225017403252]
We present a recurrent convolutional neural network that reconstructs high-speed HDR videos from event sequences.
We also develop a new optical system to collect a real-world dataset of paired high-speed HDR videos and event streams.
arXiv Detail & Related papers (2024-09-25T15:32:07Z)
- LaSe-E2V: Towards Language-guided Semantic-Aware Event-to-Video Reconstruction [8.163356555241322]
We propose a novel framework, called LaSe-E2V, that can achieve semantic-aware high-quality E2V reconstruction.
We first propose an Event-guided Spatiotemporal Attention (ESA) module to effectively condition the denoising pipeline on the event data.
We then introduce an event-aware mask loss to ensure temporal coherence and a noise strategy to enhance spatial consistency.
arXiv Detail & Related papers (2024-07-08T01:40:32Z)
- E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning [53.63364311738552]
Bio-inspired event cameras, or dynamic vision sensors, capture per-pixel brightness changes (called event streams) with high temporal resolution and high dynamic range.
This calls for events-to-video (E2V) solutions that take event streams as input and generate high-quality video frames for intuitive visualization.
We propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events.
arXiv Detail & Related papers (2024-01-16T05:10:50Z)
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN).
DynaMoN handles dynamic content during initial camera pose estimation and uses statics-focused ray sampling for fast and accurate novel-view synthesis.
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
arXiv Detail & Related papers (2023-09-16T08:46:59Z)
- NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction [95.37644907940857]
We propose a fast neural surface reconstruction approach, called NeuS2.
NeuS2 achieves a two-orders-of-magnitude speedup without compromising reconstruction quality.
We extend our method for fast training of dynamic scenes, with a proposed incremental training strategy and a novel global transformation prediction component.
arXiv Detail & Related papers (2022-12-10T07:19:43Z)
- Reducing the Sim-to-Real Gap for Event Cameras [64.89183456212069]
Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called 'events' with unparalleled low latency.
Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events.
We present strategies for improving training data for event-based CNNs that yield a 20-40% performance boost for existing video reconstruction networks.
arXiv Detail & Related papers (2020-03-20T02:44:29Z)