SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events
- URL: http://arxiv.org/abs/2502.21120v1
- Date: Fri, 28 Feb 2025 14:55:37 GMT
- Title: SEE: See Everything Every Time -- Adaptive Brightness Adjustment for Broad Light Range Images via Events
- Authors: Yunfan Lu, Xiaogang Xu, Hao Lu, Yanlin Qian, Pengteng Li, Huizai Yao, Bin Yang, Junyi Li, Qianyi Cai, Weiyu Guo, Hui Xiong,
- Abstract summary: Event cameras, with a high dynamic range exceeding $120dB$, significantly outperform traditional embedded cameras. We propose a novel research question: how to employ events to enhance and adaptively adjust the brightness of images captured under broad lighting conditions. Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation.
- Score: 53.79905461386883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras, with a high dynamic range exceeding $120dB$, significantly outperform traditional embedded cameras, robustly recording detailed changing information under various lighting conditions, including both low- and high-light situations. However, recent research on utilizing event data has primarily focused on low-light image enhancement, neglecting image enhancement and brightness adjustment across a broader range of lighting conditions, such as normal or high illumination. Based on this, we propose a novel research question: how to employ events to enhance and adaptively adjust the brightness of images captured under broad lighting conditions? To investigate this question, we first collected a new dataset, SEE-600K, consisting of 610,126 images and corresponding events across 202 scenarios, each featuring an average of four lighting conditions with over a 1000-fold variation in illumination. Subsequently, we propose a framework that effectively utilizes events to smoothly adjust image brightness through the use of prompts. Our framework captures color through sensor patterns, uses cross-attention to model events as a brightness dictionary, and adjusts the image's dynamic range to form a broad light-range representation (BLR), which is then decoded at the pixel level based on the brightness prompt. Experimental results demonstrate that our method not only performs well on the low-light enhancement dataset but also shows robust performance on broader light-range image enhancement using the SEE-600K dataset. Additionally, our approach enables pixel-level brightness adjustment, providing flexibility for post-processing and inspiring more imaging applications. The dataset and source code are publicly available at: https://github.com/yunfanLu/SEE.
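Below is a minimal, hypothetical sketch of the cross-attention idea described in the abstract: image tokens act as queries against event tokens that serve as a brightness dictionary, and a scalar brightness prompt conditions the pixel-level decoding. Module names, tensor shapes, and the way the prompt is injected are assumptions for illustration, not the authors' released implementation (see https://github.com/yunfanLu/SEE for the official code).

```python
# Hypothetical sketch, not the SEE implementation: image tokens query event
# tokens (the "brightness dictionary") via cross-attention; a scalar brightness
# prompt shifts the resulting broad light-range representation before a
# per-pixel decode. Shapes and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class EventBrightnessCrossAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.prompt_proj = nn.Linear(1, dim)   # embeds the brightness prompt
        self.decode = nn.Linear(dim, 3)        # pixel-level RGB decoding

    def forward(self, img_tokens, event_tokens, brightness_prompt):
        # img_tokens:        (B, N, dim) tokens from the image branch (queries)
        # event_tokens:      (B, M, dim) tokens from the event branch (keys/values)
        # brightness_prompt: (B, 1) desired brightness level, assumed in [0, 1]
        blr, _ = self.attn(img_tokens, event_tokens, event_tokens)    # broad light-range representation
        blr = blr + self.prompt_proj(brightness_prompt).unsqueeze(1)  # condition on the prompt
        return self.decode(blr)                                       # (B, N, 3) per-pixel output

# Usage:
# out = EventBrightnessCrossAttention()(torch.randn(2, 1024, 64),
#                                       torch.randn(2, 2048, 64),
#                                       torch.rand(2, 1))
```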
Related papers
- Low-Light Image Enhancement using Event-Based Illumination Estimation [83.81648559951684]
Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments.
This paper opens a new avenue from the perspective of estimating the illumination using "temporal-mapping" events.
We construct a beam-splitter setup and collect EvLowLight dataset that includes images, temporal-mapping events, and motion events.
arXiv Detail & Related papers (2025-04-13T00:01:33Z)
- Illuminating Darkness: Enhancing Real-world Low-light Scenes with Smartphone Images [47.39277249268179]
This dataset comprises 6,425 unique focus-aligned image pairs captured with smartphone sensors in dynamic settings under challenging lighting conditions (0.1-200 lux).
We extracted and refined around 180,000 non-overlapping patches from 6,025 collected scenes for training while reserving 400 pairs for benchmarking.
In addition to that, we collected 2,117 low-light scenes from different sources for extensive real-world aesthetic evaluation.
arXiv Detail & Related papers (2025-03-10T04:01:56Z)
- ALEN: A Dual-Approach for Uniform and Non-Uniform Low-Light Image Enhancement [10.957431540794836]
Inadequate illumination can lead to significant information loss and poor image quality, impacting various applications such as surveillance.
Current enhancement techniques often use specific datasets to enhance low-light images, but still present challenges when adapting to diverse real-world conditions.
The Adaptive Light Enhancement Network (ALEN) is introduced, whose main approach is the use of a classification mechanism to determine whether local or global illumination enhancement is required.
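A minimal sketch of the routing idea stated above, assuming a small classifier that picks between placeholder global and local enhancement branches; nothing here is taken from the ALEN implementation.

```python
# Illustrative only: a tiny classifier routes an image to a "local" or "global"
# enhancement branch, mirroring the classification mechanism described for ALEN.
# The branches are placeholders and the routing rule (argmax) is an assumption.
import torch
import torch.nn as nn

class EnhancementRouter(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
        self.global_branch = nn.Conv2d(3, 3, 1)            # placeholder global adjustment
        self.local_branch = nn.Conv2d(3, 3, 3, padding=1)  # placeholder local adjustment

    def forward(self, x):                                  # x: (B, 3, H, W)
        logits = self.classifier(x)                        # (B, 2): [global, local] scores
        use_local = logits.argmax(dim=1).float().view(-1, 1, 1, 1)
        return use_local * self.local_branch(x) + (1 - use_local) * self.global_branch(x)
```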
arXiv Detail & Related papers (2024-07-29T05:19:23Z)
- Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
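As a rough illustration (not MSATr's actual design), the sketch below shows one way to extract both local and global features with attention: attention within fixed-size token chunks standing in for spatial windows, plus attention to a single pooled global token.

```python
# Rough illustration, not the MSATr architecture: local attention over
# fixed-size token chunks (a stand-in for spatial windows) combined with
# attention to one globally pooled token. Sizes are arbitrary.
import torch
import torch.nn as nn

class LocalGlobalAttention(nn.Module):
    def __init__(self, dim: int = 32, chunk: int = 64, heads: int = 4):
        super().__init__()
        self.chunk = chunk
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                  # x: (B, N, dim), N divisible by chunk
        B, N, D = x.shape
        local = x.reshape(B * (N // self.chunk), self.chunk, D)
        local, _ = self.local_attn(local, local, local)    # attention within each chunk
        local = local.reshape(B, N, D)
        g = x.mean(dim=1, keepdim=True)                    # one pooled global token
        glob, _ = self.global_attn(x, g, g)                # every token attends to it
        return local + glob                                # fused local and global features
```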
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve that by introducing a convolutional mixture density network that generates distorted colors of the scene based on the illumination differences.
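A hedged sketch of a convolutional mixture density network head of the kind described above: for each pixel it predicts a K-component Gaussian mixture over colors, conditioned on an illumination-difference map. Layer sizes and the conditioning scheme are assumptions for illustration, not Dimma's implementation.

```python
# Illustrative convolutional MDN head (not Dimma's actual network): predicts
# per-pixel mixture weights, RGB means, and RGB standard deviations from an
# image concatenated with an illumination-ratio map. All sizes are assumed.
import torch
import torch.nn as nn

class ConvMDN(nn.Module):
    def __init__(self, k: int = 5):
        super().__init__()
        self.k = k
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 image + 1 illumination channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, k * (1 + 3 + 3), 1)    # pi, mu (RGB), sigma (RGB) per component

    def forward(self, img, illum_ratio):
        # img: (B, 3, H, W), illum_ratio: (B, 1, H, W) relative illumination difference
        h = self.backbone(torch.cat([img, illum_ratio], dim=1))
        p = self.head(h)
        pi, mu, log_sigma = torch.split(p, [self.k, 3 * self.k, 3 * self.k], dim=1)
        return pi.softmax(dim=1), mu, log_sigma.exp()    # weights, means, std-devs per pixel
```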
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- Diving into Darkness: A Dual-Modulated Framework for High-Fidelity Super-Resolution in Ultra-Dark Environments [51.58771256128329]
This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
arXiv Detail & Related papers (2023-09-11T06:55:32Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction.
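A small worked example of the stated idea: gamma correction $x^{\gamma}$ can be rewritten as $\exp(\gamma \ln x)$ and the exponential replaced by a truncated Taylor series. The truncation order and the clipping below are arbitrary demonstration choices, not the paper's settings.

```python
# Demonstration of approximating gamma correction x**gamma = exp(gamma*ln x)
# with a truncated Taylor series of exp; order and clipping are arbitrary
# choices for illustration, not taken from the paper.
import numpy as np

def gamma_taylor(x: np.ndarray, gamma: float, order: int = 6) -> np.ndarray:
    """Approximate x**gamma as sum_{n=0}^{order} (gamma*ln x)**n / n!."""
    t = gamma * np.log(np.clip(x, 1e-6, 1.0))  # x assumed normalized to (0, 1]
    approx = np.zeros_like(x)
    term = np.ones_like(x)
    for n in range(order + 1):
        approx = approx + term       # add t**n / n!
        term = term * t / (n + 1)    # next term: t**(n+1) / (n+1)!
    return approx

x = np.linspace(0.05, 1.0, 5)
print(np.abs(gamma_taylor(x, 0.45) - x ** 0.45).max())  # maximum approximation error
```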
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We train a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)