Low-Light Video Enhancement with Synthetic Event Guidance
- URL: http://arxiv.org/abs/2208.11014v1
- Date: Tue, 23 Aug 2022 14:58:29 GMT
- Title: Low-Light Video Enhancement with Synthetic Event Guidance
- Authors: Lin Liu and Junfeng An and Jianzhuang Liu and Shanxin Yuan and Xiangyu Chen and Wengang Zhou and Houqiang Li and Yanfeng Wang and Qi Tian
- Abstract summary: We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos.
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
- Score: 188.7256236851872
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Low-light video enhancement (LLVE) is an important yet challenging task with
many applications such as photography and autonomous driving. Unlike single
image low-light enhancement, most LLVE methods utilize temporal information
from adjacent frames to restore the color and remove the noise of the target
frame. However, these algorithms, based on the framework of multi-frame
alignment and enhancement, may produce multi-frame fusion artifacts when
encountering extreme low light or fast motion. In this paper, inspired by the
low latency and high dynamic range of events, we use synthetic events from
multiple frames to guide the enhancement and restoration of low-light videos.
Our method contains three stages: 1) event synthesis and enhancement, 2) event
and image fusion, and 3) low-light enhancement. In this framework, we design
two novel modules (event-image fusion transform and event-guided dual branch)
for the second and third stages, respectively. Extensive experiments show that
our method outperforms existing low-light video or single image enhancement
approaches on both synthetic and real LLVE datasets.
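The first stage above turns adjacent frames into synthetic events. The paper learns this synthesis; as a rough illustration only, the classic non-learned model (as used in event simulators) fires an event wherever the log-intensity change between two frames exceeds a contrast threshold, with polarity indicating brightening or darkening. A minimal NumPy sketch of that assumed baseline model:

```python
import numpy as np

def synthesize_events(frame_prev, frame_next, threshold=0.2, eps=1e-6):
    """Approximate an event camera's output from two consecutive frames.

    Hypothetical baseline for illustration -- the paper's synthesis stage
    is learned, not this fixed rule. An event fires at a pixel when the
    log-intensity change exceeds `threshold`; polarity is +1 for
    brightening, -1 for darkening, 0 for no event.
    """
    log_prev = np.log(frame_prev.astype(np.float64) + eps)
    log_next = np.log(frame_next.astype(np.float64) + eps)
    diff = log_next - log_prev  # log-domain change, as in event cameras

    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # positive (ON) events
    events[diff < -threshold] = -1  # negative (OFF) events
    return events
```

Working in the log domain is what gives events their high dynamic range: a fixed contrast threshold responds to *relative* brightness changes, so motion remains detectable even in very dark regions where absolute pixel differences are tiny.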
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - Towards Real-world Event-guided Low-light Video Enhancement and Deblurring [39.942568142125126]
Event cameras have emerged as a promising solution for improving image quality in low-light environments.
We introduce an end-to-end framework to effectively handle these tasks.
Our framework incorporates a module to efficiently leverage temporal information from events and frames.
arXiv Detail & Related papers (2024-08-27T09:44:54Z) - BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z) - Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT [120.39362661689333]
We present an improved version of Lumina-T2X, showcasing stronger generation performance with increased training and inference efficiency.
Thanks to these improvements, Lumina-Next not only improves the quality and efficiency of basic text-to-image generation but also demonstrates superior resolution extrapolation capabilities.
arXiv Detail & Related papers (2024-06-05T17:53:26Z) - CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z) - VJT: A Video Transformer on Joint Tasks of Deblurring, Low-light
Enhancement and Denoising [45.349350685858276]
Video restoration task aims to recover high-quality videos from low-quality observations.
Video often faces different types of degradation, such as blur, low light, and noise.
We propose an efficient end-to-end video transformer approach for the joint task of video deblurring, low-light enhancement, and denoising.
arXiv Detail & Related papers (2024-01-26T10:27:56Z) - Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - Bridge the Vision Gap from Field to Command: A Deep Learning Network
Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: Light Enhancement (LE), Detail Refinement (DR) and Feature Fusing (FF) module.
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.