RRNet: Configurable Real-Time Video Enhancement with Arbitrary Local Lighting Variations
- URL: http://arxiv.org/abs/2601.01865v1
- Date: Mon, 05 Jan 2026 07:50:59 GMT
- Title: RRNet: Configurable Real-Time Video Enhancement with Arbitrary Local Lighting Variations
- Authors: Wenlong Yang, Canran Jin, Weihang Yuan, Chao Wang, Lifeng Sun
- Abstract summary: We introduce RRNet, a framework that achieves a state-of-the-art tradeoff between visual quality and efficiency. RRNet enables localized relighting through a depth-aware rendering module without requiring pixel-aligned training data. Experiments show that RRNet consistently outperforms prior methods in low-light enhancement, localized illumination adjustment, and glare removal.
- Score: 7.360594425594612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing demand for real-time video enhancement in live applications, existing methods often struggle to balance speed and effective exposure control, particularly under uneven lighting. We introduce RRNet (Rendering Relighting Network), a lightweight and configurable framework that achieves a state-of-the-art tradeoff between visual quality and efficiency. By estimating parameters for a minimal set of virtual light sources, RRNet enables localized relighting through a depth-aware rendering module without requiring pixel-aligned training data. This object-aware formulation preserves facial identity and supports real-time, high-resolution performance using a streamlined encoder and lightweight prediction head. To facilitate training, we propose a generative AI-based dataset creation pipeline that synthesizes diverse lighting conditions at low cost. With its interpretable lighting control and efficient architecture, RRNet is well suited for practical applications such as video conferencing, AR-based portrait enhancement, and mobile photography. Experiments show that RRNet consistently outperforms prior methods in low-light enhancement, localized illumination adjustment, and glare removal.
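The abstract does not give RRNet's rendering equations, but the core idea (relighting from a few estimated virtual light sources plus depth) can be sketched. Below is a minimal illustration under stated assumptions, not the paper's actual module: `relight_with_virtual_lights`, the light-parameter dictionary keys, and the pinhole intrinsics `fx`/`fy` are hypothetical, and inverse-square point-light falloff stands in for whatever shading model RRNet uses.

```python
import numpy as np

def relight_with_virtual_lights(image, depth, lights, fx=500.0, fy=500.0):
    """Relight an image with a small set of virtual point lights.

    image:  (H, W, 3) float array in [0, 1]
    depth:  (H, W) float array of per-pixel depth
    lights: list of dicts (hypothetical schema):
            {"pos": (x, y, z), "color": (r, g, b), "intensity": float}
    """
    H, W = depth.shape
    # Back-project each pixel to 3D camera coordinates (pinhole assumption).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    points = np.stack([(u - W / 2) * depth / fx,
                       (v - H / 2) * depth / fy,
                       depth], axis=-1)                      # (H, W, 3)

    gain = np.zeros((H, W, 3))
    for light in lights:
        to_light = np.asarray(light["pos"]) - points         # (H, W, 3)
        dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True)
        # Inverse-square falloff localizes the effect: pixels whose 3D
        # position lies near the virtual light are brightened the most.
        falloff = light["intensity"] / np.maximum(dist2, 1e-6)
        gain += falloff * np.asarray(light["color"])

    # Additive relighting on top of the original shading.
    return np.clip(image * (1.0 + gain), 0.0, 1.0)
```

For example, a single light such as `{"pos": (0.0, 0.0, 0.5), "color": (1.0, 0.9, 0.8), "intensity": 0.1}` brightens only the scene content nearest the camera center, which is the kind of localized, interpretable control the abstract describes.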
Related papers
- LightQANet: Quantized and Adaptive Feature Learning for Low-Light Image Enhancement [65.06462316546806]
Low-light image enhancement aims to improve illumination while preserving high-quality color and texture. Existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions. We propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement.
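The summary does not define "quantized feature learning"; one common mechanism it may resemble is vector quantization of features against a learned codebook. The sketch below shows only that generic mechanism; `quantize_features` and its shapes are assumptions, not LightQANet's actual design.

```python
import numpy as np

def quantize_features(features, codebook):
    """Map each feature vector to its nearest codeword (generic VQ step).

    features: (N, D) array of feature vectors
    codebook: (K, D) array of learned codewords
    Returns the quantized features and the chosen codeword indices.
    """
    # Squared Euclidean distance from every feature to every codeword.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx
```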
arXiv Detail & Related papers (2025-10-16T14:54:42Z)
- LuxDiT: Lighting Estimation with Video Diffusion Transformer [66.60450792095901]
Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input.
arXiv Detail & Related papers (2025-09-03T19:59:20Z)
- Low-Light Enhancement via Encoder-Decoder Network with Illumination Guidance [0.0]
This paper introduces a novel deep learning framework for low-light image enhancement, named the Encoder-Decoder Network with Illumination Guidance (EDNIG). EDNIG integrates an illumination map, derived from the Bright Channel Prior (BCP), as a guidance input. It is optimized within a Generative Adversarial Network (GAN) framework using a composite loss function that combines adversarial loss, pixel-wise mean squared error (MSE), and perceptual loss.
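The summary names the Bright Channel Prior without its formula. The standard bright-channel computation is a per-pixel maximum over color channels followed by a maximum filter over a local patch; a minimal sketch follows, with the function name and the 15-pixel patch size as assumptions (EDNIG presumably refines this raw map inside its network).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def bright_channel_illumination(image, patch=15):
    """Estimate an illumination guidance map via the Bright Channel Prior.

    image: (H, W, 3) float array in [0, 1]
    Returns an (H, W) map: the brightest intensity observed in any
    channel within a patch-sized neighborhood of each pixel.
    """
    per_pixel_max = image.max(axis=-1)                 # max over RGB
    return maximum_filter(per_pixel_max, size=patch)   # max over local patch
```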
arXiv Detail & Related papers (2025-07-04T09:35:00Z)
- Low-Light Image Enhancement using Event-Based Illumination Estimation [83.81648559951684]
Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments. This paper opens a new avenue by estimating the illumination using "temporal-mapping" events. We construct a beam-splitter setup and collect the EvLowLight dataset, which includes images, temporal-mapping events, and motion events.
arXiv Detail & Related papers (2025-04-13T00:01:33Z)
- LUMINA-Net: Low-light Upgrade through Multi-stage Illumination and Noise Adaptation Network for Image Enhancement [26.585985828583304]
Low-light image enhancement (LLIE) is a crucial task in computer vision aimed at enhancing the visual fidelity of images captured under low-illumination conditions. We propose LUMINA-Net, an unsupervised deep learning framework that learns adaptive priors from low-light image pairs by integrating multi-stage illumination and reflectance modules.
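"Illumination and reflectance modules" point to a Retinex-style decomposition, I = R * L. As a hedged, single-stage illustration of that decomposition (not LUMINA-Net's learned multi-stage version), one can estimate L crudely, brighten it, and recompose:

```python
import numpy as np

def retinex_enhance(image, gamma=0.45, eps=1e-6):
    """Toy Retinex enhancement: decompose I = R * L, brighten L, recompose.

    image: (H, W, 3) float array in [0, 1]
    gamma: assumed brightening exponent for the illumination map
    """
    L = image.max(axis=-1, keepdims=True)     # crude illumination estimate
    R = image / np.maximum(L, eps)            # reflectance
    return np.clip(R * np.power(L, gamma), 0.0, 1.0)
```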
arXiv Detail & Related papers (2025-02-21T03:37:58Z)
- ALEN: A Dual-Approach for Uniform and Non-Uniform Low-Light Image Enhancement [10.957431540794836]
Inadequate illumination can lead to significant information loss and poor image quality, impacting applications such as surveillance. Current enhancement techniques are often tailored to specific datasets and struggle to adapt to diverse real-world conditions. The Adaptive Light Enhancement Network (ALEN) is introduced; its main approach is a classification mechanism that determines whether local or global illumination enhancement is required.
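ALEN's classifier is learned; as a purely illustrative stand-in for the routing decision it makes, one could threshold the spatial variance of a downsampled luminance map (high variance suggesting uneven lighting that needs local enhancement). The function and threshold below are hypothetical.

```python
import numpy as np

def route_enhancement(image, var_threshold=0.02, stride=16):
    """Heuristic stand-in for ALEN's local-vs-global classification.

    image: (H, W, 3) float array in [0, 1]
    Returns "local" when brightness varies strongly across the frame,
    "global" when the frame is uniformly under- or well-lit.
    """
    luma = image.mean(axis=-1)
    blocks = luma[::stride, ::stride]   # coarse brightness samples
    return "local" if blocks.var() > var_threshold else "global"
```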
arXiv Detail & Related papers (2024-07-29T05:19:23Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage a pre-trained latent diffusion model to perform neural ISP for enhancing extremely low-light images. Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules. We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
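The decomposition the abstract describes (low-frequency content handled in latent space, high-frequency detail preserved at decoding) can be pictured with a simple Gaussian frequency split. This is only a sketch of the decomposition itself, with an assumed blur scale, not LDM-ISP's taming modules.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequencies(image, sigma=3.0):
    """Split an image into low- and high-frequency components.

    image: (H, W, 3) float array; sigma is an assumed blur scale.
    low  ~ content a generative model could synthesize in latent space,
    high ~ residual detail to be maintained during decoding.
    """
    low = gaussian_filter(image, sigma=(sigma, sigma, 0))
    high = image - low
    return low, high
```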
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Neural Video Portrait Relighting in Real-time via Consistency Modeling [41.04622998356025]
We propose a neural approach for real-time, high-quality and coherent video portrait relighting.
We propose a hybrid structure and lighting disentanglement in an encoder-decoder architecture.
We also propose a lighting sampling strategy to model illumination consistency and mutation for natural portrait light manipulation in real-world settings.
arXiv Detail & Related papers (2021-04-01T14:13:28Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
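Per-object environment light is commonly represented with low-order spherical harmonics, which makes the consistent-shading step concrete: evaluate the SH basis at each surface normal and take the dot product with the estimated coefficients. The 9-coefficient (2nd-order) form below is an assumption; the paper's exact lighting representation may differ.

```python
import numpy as np

def shade_with_sh(normals, sh_coeffs):
    """Shade unit normals under 2nd-order spherical-harmonic lighting.

    normals:   (N, 3) unit surface normals
    sh_coeffs: (9, 3) RGB coefficients, ordered to match the basis below
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    basis = np.stack([
        0.282095 * np.ones_like(x),                    # Y_00
        0.488603 * y, 0.488603 * z, 0.488603 * x,      # Y_1{-1,0,1}
        1.092548 * x * y, 1.092548 * y * z,            # Y_2{-2,-1}
        0.315392 * (3.0 * z ** 2 - 1.0),               # Y_20
        1.092548 * x * z,                              # Y_21
        0.546274 * (x ** 2 - y ** 2),                  # Y_22
    ], axis=1)                                         # (N, 9)
    return np.clip(basis @ sh_coeffs, 0.0, None)       # (N, 3) radiance
```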
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.