Low-Light Video Enhancement with Synthetic Event Guidance
- URL: http://arxiv.org/abs/2208.11014v1
- Date: Tue, 23 Aug 2022 14:58:29 GMT
- Title: Low-Light Video Enhancement with Synthetic Event Guidance
- Authors: Lin Liu and Junfeng An and Jianzhuang Liu and Shanxin Yuan and Xiangyu
Chen and Wengang Zhou and Houqiang Li and Yanfeng Wang and Qi Tian
- Abstract summary: We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos.
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
- Score: 188.7256236851872
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Low-light video enhancement (LLVE) is an important yet challenging task with
many applications such as photography and autonomous driving. Unlike single
image low-light enhancement, most LLVE methods utilize temporal information
from adjacent frames to restore the color and remove the noise of the target
frame. However, these algorithms, based on the framework of multi-frame
alignment and enhancement, may produce multi-frame fusion artifacts when
encountering extreme low light or fast motion. In this paper, inspired by the
low latency and high dynamic range of events, we use synthetic events from
multiple frames to guide the enhancement and restoration of low-light videos.
Our method contains three stages: 1) event synthesis and enhancement, 2) event
and image fusion, and 3) low-light enhancement. In this framework, we design
two novel modules (event-image fusion transform and event-guided dual branch)
for the second and third stages, respectively. Extensive experiments show that
our method outperforms existing low-light video or single image enhancement
approaches on both synthetic and real LLVE datasets.
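The first stage above synthesizes events from adjacent video frames. The paper does not give its exact formulation here, but a common way to simulate event-camera output from frames is to fire an event wherever the log-intensity change between consecutive frames exceeds a contrast threshold. The sketch below illustrates that idea only; the function name, threshold value, and per-pixel polarity encoding are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def synthesize_events(prev_frame, next_frame, threshold=0.2, eps=1e-6):
    """Approximate an event map from two consecutive intensity frames.

    Mimicking an event camera, a pixel fires an event when its
    log-intensity change exceeds a contrast threshold. Returns a map of
    +1 (brightening), -1 (darkening), or 0 (no event) per pixel.
    NOTE: an illustrative sketch, not the method from the paper.
    """
    log_diff = np.log(next_frame + eps) - np.log(prev_frame + eps)
    events = np.zeros_like(log_diff, dtype=np.int8)
    events[log_diff > threshold] = 1
    events[log_diff < -threshold] = -1
    return events

# Example: one pixel brightens sharply between frames and fires
# a positive event; all other pixels stay below the threshold.
prev = np.full((4, 4), 0.1, dtype=np.float32)
nxt = prev.copy()
nxt[0, 0] = 0.5
ev = synthesize_events(prev, nxt)
```

Real event simulators interpolate between frames to recover sub-frame timestamps; a per-frame-pair difference like this is the coarsest version of the idea.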
Related papers
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion [52.420894727186216]
Light-A-Video is a training-free approach to achieve temporally smooth video relighting.
Adapted from image relighting models, Light-A-Video introduces two key techniques to enhance lighting consistency.
arXiv Detail & Related papers (2025-02-12T17:24:19Z)
- Lumina-Video: Efficient and Flexible Video Generation with Multi-scale Next-DiT [98.56372305225271]
Lumina-Next achieves exceptional performance in the generation of images with Next-DiT.
Lumina-Video incorporates a Multi-scale Next-DiT architecture, which jointly learns multiple patchifications.
We propose Lumina-V2A, a video-to-audio model based on Next-DiT, to create synchronized sounds for generated videos.
arXiv Detail & Related papers (2025-02-10T18:58:11Z)
- DLEN: Dual Branch of Transformer for Low-Light Image Enhancement in Dual Domains [0.0]
Low-light image enhancement (LLE) aims to improve the visual quality of images captured in poorly lit conditions.
Poor illumination introduces degradations that hinder the performance of computer vision tasks such as object detection, facial recognition, and autonomous driving.
We propose the Dual Light Enhance Network (DLEN), a novel architecture that incorporates two distinct attention mechanisms.
arXiv Detail & Related papers (2025-01-21T15:58:16Z)
- Towards Real-world Event-guided Low-light Video Enhancement and Deblurring [39.942568142125126]
Event cameras have emerged as a promising solution for improving image quality in low-light environments.
We introduce an end-to-end framework to effectively handle these tasks.
Our framework incorporates a module to efficiently leverage temporal information from events and frames.
arXiv Detail & Related papers (2024-08-27T09:44:54Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT [120.39362661689333]
We present an improved version of Lumina-T2X, showcasing stronger generation performance with increased training and inference efficiency.
Thanks to these improvements, Lumina-Next not only improves the quality and efficiency of basic text-to-image generation but also demonstrates superior resolution extrapolation capabilities.
arXiv Detail & Related papers (2024-06-05T17:53:26Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Event-based Continuous Color Video Decompression from Single Frames [36.4263932473053]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image and an event camera stream.
Our approach combines continuous long-range motion modeling with a neural synthesis model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: Light Enhancement (LE), Detail Refinement (DR) and Feature Fusing (FF) module.
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.