Low-Light Image Enhancement using Event-Based Illumination Estimation
- URL: http://arxiv.org/abs/2504.09379v1
- Date: Sun, 13 Apr 2025 00:01:33 GMT
- Title: Low-Light Image Enhancement using Event-Based Illumination Estimation
- Authors: Lei Sun, Yuhan Bao, Jiajun Zhai, Jingyun Liang, Yulun Zhang, Kaiwei Wang, Danda Pani Paudel, Luc Van Gool
- Abstract summary: Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments. This paper opens a new avenue from the perspective of estimating the illumination using "temporal-mapping" events. We construct a beam-splitter setup and collect the EvLowLight dataset, which includes images, temporal-mapping events, and motion events.
- Score: 83.81648559951684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments. Prevalent event-based solutions primarily utilize events triggered by motion, i.e., "motion events," to strengthen only the edge texture, while leaving the high dynamic range and excellent low-light responsiveness of event cameras largely unexplored. This paper instead opens a new avenue from the perspective of estimating the illumination using "temporal-mapping" events, i.e., by converting the timestamps of events triggered by a transmittance modulation into brightness values. The resulting fine-grained illumination cues facilitate a more effective decomposition and enhancement of the reflectance component in low-light images through the proposed Illumination-aided Reflectance Enhancement module. Furthermore, the degradation model of temporal-mapping events under low-light conditions is investigated to synthesize realistic training data. To address the lack of datasets under this regime, we construct a beam-splitter setup and collect the EvLowLight dataset, which includes images, temporal-mapping events, and motion events. Extensive experiments across five synthetic datasets and our real-world EvLowLight dataset substantiate that the devised pipeline, dubbed RetinEV, excels in producing well-illuminated, high-dynamic-range images, outperforming previous state-of-the-art event-based methods by up to 6.62 dB, while maintaining an efficient inference speed of 35.6 frames per second on a 640×480 image.
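The mechanism is concrete enough for a toy illustration. The sketch below is a rough, assumption-laden approximation rather than the paper's RetinEV pipeline: it shows (a) mapping per-pixel first-event timestamps recorded under a ramping transmittance modulation to an illumination map, assuming an inverse-linear timestamp-to-brightness relation, and (b) a generic Retinex-style enhancement in place of the paper's Illumination-aided Reflectance Enhancement module. All names (`first_event_ts`, `ramp_duration`, `gamma`) are hypothetical.

```python
import numpy as np

def illumination_from_timestamps(first_event_ts, ramp_duration):
    """Map per-pixel first-event timestamps (seconds) to a [0, 1] illumination map.

    Assumption: under a linearly ramping transmittance, brighter pixels cross
    the event threshold earlier, so an earlier timestamp means more light.
    """
    ts = np.clip(first_event_ts, 0.0, ramp_duration)
    return 1.0 - ts / ramp_duration

def retinex_enhance(low_light_img, illumination, eps=1e-6, gamma=0.45):
    """Retinex-style decomposition: image = reflectance * illumination.

    Divide out the event-estimated illumination, then relight with a
    gamma-compressed illumination (a common heuristic, not the paper's module).
    """
    reflectance = low_light_img / (illumination[..., None] + eps)
    relit = reflectance * (illumination[..., None] ** gamma)
    return np.clip(relit, 0.0, 1.0)

# Usage with synthetic data at the paper's 640x480 resolution:
h, w = 480, 640
ts = np.random.uniform(0.0, 1.0, (h, w))      # fake first-event timestamps
img = np.random.uniform(0.0, 0.2, (h, w, 3))  # dark RGB image in [0, 1]
L = illumination_from_timestamps(ts, ramp_duration=1.0)
enhanced = retinex_enhance(img, L)
```

Dividing out the illumination yields a largely illumination-invariant reflectance estimate; multiplying by a gamma-compressed illumination brightens dark regions while preserving the scene's relative shading.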
Related papers
- SaENeRF: Suppressing Artifacts in Event-based Neural Radiance Fields [12.428456822446947]
Event cameras offer advantages such as low latency, low power consumption, low bandwidth, and high dynamic range.
Reconstructing geometrically consistent and photometrically accurate 3D representations from event data remains fundamentally challenging.
We present SaENeRF, a novel self-supervised framework that effectively suppresses artifacts and enables 3D-consistent, dense, and photorealistic NeRF reconstruction of static scenes solely from event streams.
arXiv Detail & Related papers (2025-04-23T03:33:20Z) - EBAD-Gaussian: Event-driven Bundle Adjusted Deblur Gaussian Splatting [21.46091843175779]
Event-driven Bundle Adjusted Deblur Gaussian Splatting (EBAD-Gaussian) reconstructs sharp 3D Gaussians from event streams and severely blurred images. Experiments on synthetic and real-world datasets show that EBAD-Gaussian achieves high-quality 3D scene reconstruction.
arXiv Detail & Related papers (2025-04-14T09:17:00Z) - SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events [53.79905461386883]
Event cameras, with a high dynamic range exceeding 120 dB, significantly outperform traditional embedded cameras. We propose a novel research question: how to employ events to enhance and adaptively adjust the brightness of images captured under broad lighting conditions. Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation.
arXiv Detail & Related papers (2025-02-28T14:55:37Z) - Rethinking High-speed Image Reconstruction Framework with Spike Camera [48.627095354244204]
Spike cameras generate continuous spike streams to capture high-speed scenes with lower bandwidth and higher dynamic range than traditional RGB cameras. We introduce SpikeCLIP, a novel spike-to-image reconstruction framework that goes beyond traditional training paradigms. Our experiments on real-world low-light datasets demonstrate that SpikeCLIP significantly enhances texture details and the luminance balance of recovered images.
arXiv Detail & Related papers (2025-01-08T13:00:17Z) - E-3DGS: Gaussian Splatting with Exposure and Motion Events [29.042018288378447]
E-3DGS sets a new benchmark for event-based 3D reconstruction with robust performance in challenging conditions. We introduce EME-3D, a real-world 3D dataset with exposure events, motion events, camera calibration parameters, and sparse point clouds.
arXiv Detail & Related papers (2024-10-22T13:17:20Z) - Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment [23.15130387716121]
We propose Bundle Adjustment for Deblurring Neural Radiance Fields (EBAD-NeRF) to jointly optimize the learnable poses and NeRF parameters.
EBAD-NeRF can obtain an accurate camera trajectory during the exposure time and learn sharper 3D representations than prior works.
arXiv Detail & Related papers (2024-06-20T14:33:51Z) - Event-assisted Low-Light Video Object Segmentation [47.28027938310957]
Event cameras offer promise in enhancing object visibility and aiding VOS methods under low-light conditions.
This paper introduces a pioneering framework tailored for low-light VOS, leveraging event camera data to elevate segmentation accuracy.
arXiv Detail & Related papers (2024-04-02T13:41:22Z) - Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z) - Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution (a minimal sketch of this event generation model follows after this list).
We propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
arXiv Detail & Related papers (2020-09-17T13:30:05Z) - Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination from crowdsampled image data.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while simultaneously interpolating along changes in reflectance and illumination with time.
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
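As referenced in the "Back to Event Basics" entry above, here is a minimal sketch of the idealized event generation model behind photometric constancy: each event signals that a pixel's log-brightness changed by a fixed contrast threshold, so summing signed polarities recovers relative brightness up to the unknown initial image. The function name and the threshold value are illustrative assumptions, not that paper's implementation.

```python
import numpy as np

def integrate_events(events, height, width, contrast_threshold=0.2):
    """Accumulate per-pixel log-brightness change from an event stream.

    events: iterable of (x, y, t, polarity) with polarity in {-1, +1}.
    Each event is assumed to mark a log-brightness step of polarity * C.
    """
    log_change = np.zeros((height, width), dtype=np.float64)
    for x, y, _t, polarity in events:
        log_change[y, x] += polarity * contrast_threshold
    return log_change

# Usage: reconstruct relative brightness from a toy event stream.
events = [(10, 20, 0.001, +1), (10, 20, 0.002, +1), (5, 5, 0.003, -1)]
dlog = integrate_events(events, height=480, width=640)
relative_brightness = np.exp(dlog)  # known only up to the initial image
```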
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.