Seeing the Unseen: Zooming in the Dark with Event Cameras
- URL: http://arxiv.org/abs/2601.02206v1
- Date: Mon, 05 Jan 2026 15:31:07 GMT
- Title: Seeing the Unseen: Zooming in the Dark with Event Cameras
- Authors: Dachun Kai, Zeyu Xiao, Huyue Zhu, Jiaxiao Wang, Yueyi Zhang, Xiaoyan Sun
- Abstract summary: Low-light video super-resolution (LVSR) aims to restore high-resolution videos from low-light, low-resolution (LR) inputs. Existing LVSR methods often struggle to recover fine details due to limited contrast and insufficient high-frequency information. We present RetinexEVSR, the first event-driven LVSR framework that leverages high-contrast event signals and Retinex-inspired priors to enhance video quality under low-light scenarios.
- Score: 36.50809482857401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses low-light video super-resolution (LVSR), aiming to restore high-resolution videos from low-light, low-resolution (LR) inputs. Existing LVSR methods often struggle to recover fine details due to limited contrast and insufficient high-frequency information. To overcome these challenges, we present RetinexEVSR, the first event-driven LVSR framework that leverages high-contrast event signals and Retinex-inspired priors to enhance video quality under low-light scenarios. Unlike previous approaches that directly fuse degraded signals, RetinexEVSR introduces a novel bidirectional cross-modal fusion strategy to extract and integrate meaningful cues from noisy event data and degraded RGB frames. Specifically, an illumination-guided event enhancement module is designed to progressively refine event features using illumination maps derived from the Retinex model, thereby suppressing low-light artifacts while preserving high-contrast details. Furthermore, we propose an event-guided reflectance enhancement module that utilizes the enhanced event features to dynamically recover reflectance details via a multi-scale fusion mechanism. Experimental results show that our RetinexEVSR achieves state-of-the-art performance on three datasets. Notably, on the SDSD benchmark, our method achieves up to a 2.95 dB gain while reducing runtime by 65% compared to prior event-based methods. Code: https://github.com/DachunKai/RetinexEVSR.
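The Retinex prior at the core of the framework factorises an image into reflectance (scene content) and illumination (lighting). As a rough, self-contained sketch of that decomposition (not the authors' learned estimator), the illumination map can be approximated with a simple max-channel prior:

```python
import numpy as np

def retinex_decompose(img, eps=1e-4):
    """Split an RGB image (H, W, 3) with values in [0, 1] into
    reflectance and illumination using a max-channel prior."""
    # Illumination: per-pixel maximum over channels (a common coarse prior).
    illum = img.max(axis=2, keepdims=True)
    # Reflectance: what remains after dividing out the illumination.
    refl = img / (illum + eps)
    return refl, illum

# Toy low-light frame: one bright pixel on a dark background.
img = np.full((4, 4, 3), 0.05)
img[0, 0] = [0.8, 0.6, 0.4]
refl, illum = retinex_decompose(img)
print(illum[0, 0, 0])  # brightest channel (0.8) drives the illumination
print(refl[0, 0])      # reflectance, normalised by the illumination
```

An enhancement method in this family would then brighten the illumination map while using edge cues (here, from events) to restore detail in the reflectance.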
Related papers
- Bidirectional Image-Event Guided Fusion Framework for Low-Light Image Enhancement [24.5584423318892]
Under extreme low-light conditions, frame-based cameras suffer from severe detail loss due to limited dynamic range. Recent studies have introduced event cameras for event-guided low-light image enhancement. We propose BiLIE, a Bidirectional image-event guided fusion framework for Low-Light Image Enhancement.
arXiv Detail & Related papers (2025-06-06T14:28:17Z)
- Event-Enhanced Blurry Video Super-Resolution [52.894824081586776]
We tackle the task of blurry video super-resolution (BVSR), aiming to generate high-resolution (HR) videos from low-resolution (LR) and blurry inputs. Current BVSR methods often fail to restore sharp details at high resolutions, resulting in noticeable artifacts and jitter. We introduce event signals into BVSR and propose a novel event-enhanced network, Ev-DeVSR.
arXiv Detail & Related papers (2025-04-17T15:55:41Z)
- Low-Light Image Enhancement using Event-Based Illumination Estimation [83.81648559951684]
Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments. This paper opens a new avenue from the perspective of estimating the illumination using "temporal-mapping" events. We construct a beam-splitter setup and collect the EvLowLight dataset, which includes images, temporal-mapping events, and motion events.
arXiv Detail & Related papers (2025-04-13T00:01:33Z)
- Learning to Robustly Reconstruct Low-light Dynamic Scenes from Spike Streams [28.258022350623023]
As a neuromorphic sensor, spike camera can generate continuous binary spike streams to capture per-pixel light intensity.
We propose a bidirectional recurrent-based reconstruction framework, including a Light-Robust Representation (LR-Rep) and a fusion module.
We have developed a reconstruction benchmark for high-speed low-light scenes.
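For context on how intensity can be recovered from a binary spike stream at all: a pixel's firing rate over a time window is roughly proportional to incident light, which yields a minimal, noise-prone reconstruction baseline. The paper's LR-Rep and fusion module are far more elaborate; the snippet below only illustrates the firing-rate principle:

```python
import numpy as np

def reconstruct_intensity(spikes):
    """Estimate per-pixel intensity from a binary spike stream of shape
    (T, H, W): brighter pixels fire more often, so the mean firing rate
    over the time window serves as a crude intensity estimate."""
    return spikes.mean(axis=0)

# Toy stream: one pixel fires every step, the other every 4th step.
T = 8
spikes = np.zeros((T, 1, 2), dtype=np.uint8)
spikes[:, 0, 0] = 1    # bright pixel: fires at every timestep
spikes[::4, 0, 1] = 1  # dim pixel: fires at t = 0 and t = 4
rate = reconstruct_intensity(spikes)
print(rate[0])  # [1.0, 0.25]
```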
arXiv Detail & Related papers (2024-01-19T03:01:07Z)
- Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation [33.142262765252795]
Detectors trained on well-lit data exhibit significant performance degradation on low-light data due to low visibility.
We propose to boost low-light object detection with zero-shot day-night domain adaptation.
Our method generalizes a detector from well-lit scenarios to low-light ones without requiring real low-light data.
arXiv Detail & Related papers (2023-12-02T20:11:48Z)
- Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer.
arXiv Detail & Related papers (2023-11-20T09:55:06Z)
- Low-Light Video Enhancement with Synthetic Event Guidance [188.7256236851872]
We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos.
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
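Synthetic events are commonly derived from frame pairs by thresholding log-intensity changes, in the spirit of event-camera simulators. The snippet below is a simplified per-pixel sketch of that idea under a fixed contrast threshold, not this paper's generation pipeline:

```python
import numpy as np

def synthesize_events(frame_prev, frame_next, threshold=0.2, eps=1e-6):
    """Generate a polarity map of synthetic events between two grayscale
    frames: +1/-1 where the log-intensity change exceeds the contrast
    threshold, 0 elsewhere (simulator-style, simplified)."""
    diff = np.log(frame_next + eps) - np.log(frame_prev + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff >= threshold] = 1
    events[diff <= -threshold] = -1
    return events

prev = np.array([[0.10, 0.50]])
nxt = np.array([[0.20, 0.45]])
print(synthesize_events(prev, nxt))  # [[1 0]]: only the doubled pixel fires
```

Thresholding in log space mirrors the hardware behaviour of event cameras, which respond to relative rather than absolute brightness changes.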
arXiv Detail & Related papers (2022-08-23T14:58:29Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
However, the reconstructed outputs are still low-resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances image quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
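As general background for event-based reconstruction, the simplest way to turn an asynchronous event stream into an image-like tensor is to accumulate signed polarities per pixel. Learned pipelines such as EventSR use far richer representations, so treat this only as an illustrative baseline:

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a list of events (x, y, polarity) into a single
    signed frame: each event adds its polarity at its pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, p in events:
        frame[y, x] += p
    return frame

# Three events on a 2x3 sensor: two positive at (0, 0), one negative at (2, 1).
evts = [(0, 0, +1), (0, 0, +1), (2, 1, -1)]
print(events_to_frame(evts, 2, 3))
# [[ 2  0  0]
#  [ 0  0 -1]]
```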
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.