Diving into Darkness: A Dual-Modulated Framework for High-Fidelity
Super-Resolution in Ultra-Dark Environments
- URL: http://arxiv.org/abs/2309.05267v1
- Date: Mon, 11 Sep 2023 06:55:32 GMT
- Title: Diving into Darkness: A Dual-Modulated Framework for High-Fidelity
Super-Resolution in Ultra-Dark Environments
- Authors: Jiaxin Gao, Ziyu Yue, Yaohua Liu, Sihan Xie, Xin Fan, Risheng Liu
- Abstract summary: This paper proposes a specialized dual-modulated learning framework that attempts to deeply dissect the nature of the low-light super-resolution task.
We develop Illuminance-Semantic Dual Modulation (ISDM) components to enhance feature-level preservation of illumination and color details.
Comprehensive experiments showcase the applicability and generalizability of our approach to diverse and challenging ultra-low-light conditions.
- Score: 51.58771256128329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Super-resolution of images captured in ultra-dark environments is
a practical yet challenging problem that has received little attention. Due
to uneven illumination and low signal-to-noise ratio in dark environments, a
multitude of problems such as lack of detail and color distortion may be
magnified in the super-resolution process compared to normal-lighting
environments. Consequently, conventional low-light enhancement or
super-resolution methods, whether applied individually or in a cascaded manner
to such problems, often struggle to recover luminance, color
fidelity, and intricate details. To address these issues, this paper proposes a
specialized dual-modulated learning framework that, for the first time,
attempts to deeply dissect the nature of the low-light super-resolution task.
Leveraging natural image color characteristics, we introduce a self-regularized
luminance constraint as a prior for addressing uneven lighting. Expanding on
this, we develop Illuminance-Semantic Dual Modulation (ISDM) components to
enhance feature-level preservation of illumination and color details. Besides,
instead of deploying naive up-sampling strategies, we design the
Resolution-Sensitive Merging Up-sampler (RSMU) module that brings together
different sampling modalities as substrates, effectively mitigating the
presence of artifacts and halos. Comprehensive experiments showcase the
applicability and generalizability of our approach to diverse and challenging
ultra-low-light conditions, outperforming state-of-the-art methods with a
notable improvement (i.e., $\uparrow$5\% in PSNR, and $\uparrow$43\% in LPIPS).
Especially noteworthy is the 19-fold increase in the RMSE score, underscoring
our method's exceptional generalization across different darkness levels. The
code will be available online upon publication of the paper.
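Since the authors' code is not yet released, the following PyTorch sketch only illustrates the general idea the RSMU name suggests: computing several up-sampling modalities in parallel and merging them with a learned fusion step. Every class name, branch choice, and layer size below is an assumption for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MergingUpsampler(nn.Module):
    """Hypothetical sketch of a merging up-sampler in the spirit of RSMU:
    several up-sampling modalities are computed in parallel and fused by a
    learned convolution, instead of relying on a single naive up-sampler.
    Names and sizes are illustrative, not the paper's implementation."""

    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # Sub-pixel branch: expand channels by scale**2, then PixelShuffle.
        self.subpixel = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        # Fusion conv merges the three branches back to `channels` maps.
        self.fuse = nn.Conv2d(channels * 3, channels, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bilinear = F.interpolate(x, scale_factor=self.scale,
                                 mode="bilinear", align_corners=False)
        nearest = F.interpolate(x, scale_factor=self.scale, mode="nearest")
        subpixel = self.subpixel(x)
        # Concatenate the modalities and let the fusion conv weigh them,
        # which can suppress artifacts that any single modality introduces.
        return self.fuse(torch.cat([bilinear, nearest, subpixel], dim=1))

# Usage: upsample a 64-channel feature map by 2x.
up = MergingUpsampler(channels=64, scale=2)
out = up(torch.randn(1, 64, 32, 32))  # -> torch.Size([1, 64, 64, 64])
```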
Related papers
- Dual High-Order Total Variation Model for Underwater Image Restoration [13.789310785350484]
Underwater image enhancement and restoration (UIER) is a crucial means of improving the visual quality of underwater images.
We propose an effective variational framework based on an extended underwater image formation model (UIFM).
In our proposed framework, weight-factor-based color compensation is combined with color balance to compensate for the attenuated color channels and remove the color cast.
arXiv Detail & Related papers (2024-07-20T13:06:37Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore detail and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
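The decoupling idea above can be made concrete with a toy sketch. The snippet below is not the HVI transform from that paper; it only illustrates the general pattern of separating an intensity channel from chroma while exposing a learnable parameter, and all names and the curve choice are assumptions:

```python
import torch
import torch.nn as nn

class ToyTrainableColorSpace(nn.Module):
    """Toy illustration of a trainable color-space decoupling: split RGB
    into an intensity channel and chroma channels, with one learnable
    parameter shaping the transform. NOT the paper's HVI definition."""

    def __init__(self):
        super().__init__()
        # Hypothetical learnable curve parameter (assumption).
        self.k = nn.Parameter(torch.tensor(1.0))

    def forward(self, rgb: torch.Tensor):
        # Intensity: max over RGB channels, a common brightness proxy.
        intensity, _ = rgb.max(dim=1, keepdim=True)
        # Chroma: RGB normalized by intensity, so brightness changes
        # during enhancement leave these channels mostly stable.
        chroma = rgb / (intensity + 1e-6)
        # A learnable curve on intensity lets the space adapt to
        # different illumination ranges during training.
        intensity = intensity.clamp(0, 1) ** torch.sigmoid(self.k)
        return intensity, chroma

# Usage: decompose a batch of RGB images.
space = ToyTrainableColorSpace()
i, c = space(torch.rand(2, 3, 64, 64))  # i: (2,1,64,64), c: (2,3,64,64)
```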
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- Revealing Shadows: Low-Light Image Enhancement Using Self-Calibrated Illumination [4.913568097686369]
Self-Calibrated Illumination (SCI) is a strategy initially developed for RGB images.
We employ the SCI method to intensify and clarify details that are typically lost in low-light conditions.
This method of selective illumination enhancement leaves the color information intact, thus preserving the color integrity of the image.
arXiv Detail & Related papers (2023-12-23T08:49:19Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve that by introducing a convolutional mixture density network that generates distorted colors of the scene based on the illumination differences.
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because exponential operation introduces high computational complexity, we propose to use Taylor Series to approximate gamma correction.
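As a concrete illustration of that approximation: gamma correction computes x^gamma = exp(gamma * ln x), and the expensive exponential can be replaced by a truncated Taylor series of exp. The sketch below is a minimal NumPy illustration; the truncation order and clipping threshold are assumptions, not values from the paper:

```python
import numpy as np

def gamma_taylor(x: np.ndarray, gamma: float, terms: int = 8) -> np.ndarray:
    """Approximate x**gamma as exp(gamma * ln x) via a truncated Taylor
    series of exp: sum_{k=0}^{terms-1} z**k / k! with z = gamma * ln x.
    Assumes x holds normalized intensities in (0, 1]; `terms` is an
    illustrative truncation order, not a value from the paper."""
    z = gamma * np.log(np.clip(x, 1e-6, 1.0))  # clip to avoid log(0)
    out = np.zeros_like(x)
    term = np.ones_like(x)         # k = 0 term: z**0 / 0! = 1
    for k in range(terms):
        out += term
        term = term * z / (k + 1)  # next term: z**(k+1) / (k+1)!
    return out

x = np.linspace(0.05, 1.0, 5)
print(gamma_taylor(x, gamma=0.45))  # Taylor approximation
print(x ** 0.45)                    # exact gamma correction
```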
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.