EvRWKV: A RWKV Framework for Effective Event-guided Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2507.03184v1
- Date: Tue, 01 Jul 2025 19:05:04 GMT
- Title: EvRWKV: A RWKV Framework for Effective Event-guided Low-Light Image Enhancement
- Authors: WenJie Cai, Qingguo Meng, Zhenyu Wang, Xingbo Dong, Zhe Jin
- Abstract summary: EvRWKV is a novel framework that enables continuous cross-modal interaction through dual-domain processing. We show that EvRWKV achieves state-of-the-art performance, effectively enhancing image quality by suppressing noise, restoring structural details, and improving visual clarity in challenging low-light conditions.
- Score: 10.556338127441167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capturing high-quality visual content under low-light conditions remains a challenging problem due to severe noise, motion blur, and underexposure, which degrade the performance of downstream applications. Traditional frame-based low-light enhancement methods often amplify noise or fail to preserve structural details, especially in real-world scenarios. Event cameras, offering high dynamic range and microsecond temporal resolution by asynchronously capturing brightness changes, emerge as promising alternatives for low-light imaging. However, existing event-image fusion methods suffer from simplistic fusion strategies and inadequate handling of spatial-temporal misalignment and noise. To address these challenges, we propose EvRWKV, a novel framework that enables continuous cross-modal interaction through dual-domain processing. Our approach incorporates a Cross-RWKV module, leveraging the Receptance Weighted Key Value (RWKV) architecture for fine-grained temporal and cross-modal fusion, and an Event Image Spectral Fusion Enhancer (EISFE) module, which jointly performs adaptive frequency-domain noise suppression and spatial-domain deformable convolution alignment. Extensive qualitative and quantitative evaluations on real-world low-light datasets (SDE, SDSD, RELED) demonstrate that EvRWKV achieves state-of-the-art performance, effectively enhancing image quality by suppressing noise, restoring structural details, and improving visual clarity in challenging low-light conditions.
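The EISFE idea described in the abstract (adaptive frequency-domain noise suppression combined with spatial deformable-convolution alignment of event features to image features) maps onto standard PyTorch building blocks. Below is a minimal, hypothetical sketch of that combination, not the authors' implementation: the module and parameter names (`SpectralFusionSketch`, `freq_gate`, `offset_pred`) are invented, and the learned per-channel frequency gate is an assumed stand-in for whatever adaptive filter the paper actually uses.

```python
# A minimal sketch (assumptions labeled below), not the paper's code:
# (1) soft-gate frequency components of the image branch to suppress noise,
# (2) align event features to the image grid with a deformable convolution,
# (3) fuse the two streams with a 1x1 convolution.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SpectralFusionSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Assumption: "adaptive frequency-domain noise suppression" is modeled
        # here as a learned per-channel gate on the spectrum; the paper's
        # exact parameterization is not specified in the abstract.
        self.freq_gate = nn.Parameter(torch.zeros(channels, 1, 1))
        # Offsets for deformable alignment, predicted from both modalities
        # (2 * 3 * 3 = 18 offset channels for a 3x3 kernel).
        self.offset_pred = nn.Conv2d(2 * channels, 18, kernel_size=3, padding=1)
        self.align = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, evt_feat: torch.Tensor) -> torch.Tensor:
        # 1) Frequency-domain suppression on the image branch.
        spec = torch.fft.rfft2(img_feat, norm="ortho")
        gate = torch.sigmoid(self.freq_gate)  # per-channel gate in (0, 1)
        img_clean = torch.fft.irfft2(spec * gate, s=img_feat.shape[-2:], norm="ortho")

        # 2) Spatial deformable alignment of event features to the image grid.
        offsets = self.offset_pred(torch.cat([img_clean, evt_feat], dim=1))
        evt_aligned = self.align(evt_feat, offsets)

        # 3) Simple 1x1 fusion of the two streams.
        return self.fuse(torch.cat([img_clean, evt_aligned], dim=1))


if __name__ == "__main__":
    m = SpectralFusionSketch(channels=32)
    img = torch.randn(1, 32, 64, 64)  # image-branch features
    evt = torch.randn(1, 32, 64, 64)  # event-branch features
    print(m(img, evt).shape)          # torch.Size([1, 32, 64, 64])
```

Note that this sketch omits the Cross-RWKV module entirely; the RWKV-based temporal and cross-modal fusion is the paper's other main component and is not reducible to a few lines of standard library calls.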
Related papers
- Exploring Fourier Prior and Event Collaboration for Low-Light Image Enhancement [1.8724535169356553]
Event cameras provide a performance gain for low-light image enhancement.
Existing event-based methods feed a frame and events directly into a single model.
We propose a visibility restoration network with amplitude-phase entanglement.
In the second stage, a fusion strategy with dynamic alignment is proposed to mitigate the spatial mismatch.
arXiv Detail & Related papers (2025-08-01T04:25:00Z)
- Bidirectional Image-Event Guided Low-Light Image Enhancement [18.482432245937247]
Under extreme low-light conditions, traditional frame-based cameras suffer detail loss and motion blur due to their limited dynamic range and temporal resolution.
To overcome this bottleneck, researchers have introduced event cameras and proposed event-guided low-light image enhancement algorithms.
However, these methods neglect the influence of global low-frequency noise caused by dynamic lighting conditions and of local structural discontinuities in sparse event data.
arXiv Detail & Related papers (2025-06-06T14:28:17Z)
- Structure-guided Diffusion Transformer for Low-Light Image Enhancement [19.90700077104533]
We introduce DiT into the low-light enhancement task and design a novel Structure-guided Diffusion Transformer based low-light image enhancement framework.
We compress features through a wavelet transform to improve the inference efficiency of the model and to capture multi-directional frequency bands.
In addition, we propose a Structure-guided Attention Block (SAB) to attend to texture-rich tokens and avoid interference from noisy areas in noise prediction.
arXiv Detail & Related papers (2025-04-21T12:30:01Z)
- Rethinking High-speed Image Reconstruction Framework with Spike Camera [48.627095354244204]
Spike cameras generate continuous spike streams to capture high-speed scenes with lower bandwidth and higher dynamic range than traditional RGB cameras.
We introduce SpikeCLIP, a novel spike-to-image reconstruction framework that goes beyond traditional training paradigms.
Our experiments on real-world low-light datasets demonstrate that SpikeCLIP significantly enhances texture details and the luminance balance of recovered images.
arXiv Detail & Related papers (2025-01-08T13:00:17Z)
- Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the RAW domain to the sRGB domain, remains a significant challenge.
We propose a novel Mamba-based method customized for low-light RAW images, called RAWMamba, to effectively handle RAW images with different color filter arrays (CFAs).
By bridging demosaicing and denoising, better enhancement for low-light RAW images is achieved.
arXiv Detail & Related papers (2024-09-11T06:12:03Z)
- Low-Light Video Enhancement via Spatial-Temporal Consistent Decomposition [52.89441679581216]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic or static scenes plagued by severe invisibility and noise.
We present an innovative video decomposition strategy that incorporates view-independent and view-dependent components.
Our framework consistently outperforms existing methods, establishing new state-of-the-art performance.
arXiv Detail & Related papers (2024-05-24T15:56:40Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- Diffusion in the Dark: A Diffusion Model for Low-Light Text Recognition [78.50328335703914]
Diffusion in the Dark (DiD) is a diffusion model for low-light image reconstruction for text recognition.
We demonstrate that DiD, without any task-specific optimization, can outperform SOTA low-light methods in low-light text recognition on real images.
arXiv Detail & Related papers (2023-03-07T23:52:51Z)
- INFWIDE: Image and Feature Space Wiener Deconvolution Network for Non-blind Image Deblurring in Low-Light Conditions [32.35378513394865]
We propose a novel non-blind deblurring method dubbed the image and feature space Wiener deconvolution network (INFWIDE).
INFWIDE removes noise and hallucinates saturated regions in the image space and suppresses ringing artifacts in the feature space.
Experiments on synthetic data and real data demonstrate the superior performance of the proposed approach.
arXiv Detail & Related papers (2022-07-17T15:22:31Z)
- Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feeds the features of low-light images from the generator of the enhancement GAN forward into the generator of the degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z)