Semantically Contrastive Learning for Low-light Image Enhancement
- URL: http://arxiv.org/abs/2112.06451v1
- Date: Mon, 13 Dec 2021 07:08:33 GMT
- Title: Semantically Contrastive Learning for Low-light Image Enhancement
- Authors: Dong Liang, Ling Li, Mingqiang Wei, Shuo Yang, Liyan Zhang, Wenhan
Yang, Yun Du, Huiyu Zhou
- Abstract summary: Low-light image enhancement (LLE) remains challenging due to the prevailing low-contrast and weak-visibility problems of single RGB images.
We propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE).
Our method surpasses state-of-the-art LLE models on six independent cross-scene datasets.
- Score: 48.71522073014808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLE) remains challenging due to the prevailing
low-contrast and weak-visibility problems of single RGB images. In this paper,
we respond to an intriguing learning-related question: can leveraging both
accessible unpaired over/underexposed images and high-level semantic guidance
improve the performance of cutting-edge LLE models? Here, we propose an
effective semantically contrastive learning paradigm for LLE (namely SCL-LLE).
Beyond the existing LLE wisdom, it casts the image enhancement task as
multi-task joint learning, where LLE is converted into three constraints
(contrastive learning, semantic brightness consistency, and feature
preservation) that simultaneously ensure exposure, texture, and color
consistency. SCL-LLE allows the LLE model to learn from unpaired positives
(normal-light) and negatives (over/underexposed), and enables it to interact
with scene semantics to regularize the image enhancement network; such
interaction between high-level semantic knowledge and low-level signal priors
is seldom investigated in previous methods. Extensive experiments demonstrate
that, trained on readily available open data, our method surpasses
state-of-the-art LLE models on six independent cross-scene datasets. Moreover,
we discuss SCL-LLE's potential to benefit downstream semantic segmentation
under extremely dark conditions. Source Code:
https://github.com/LingLIx/SCL-LLE.
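To make the three-constraint formulation concrete, here is a minimal PyTorch sketch of what such a joint loss could look like. It is an illustration under stated assumptions (VGG-16 relu3_3 features for the contrastive and preservation terms, intra-region brightness variance for the semantic term, equal weights), not the authors' implementation; see the repository above for the real one.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen VGG-16 feature extractor up to relu3_3 (layer choice is an assumption).
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def contrastive_loss(enhanced, positives, negatives, eps=1e-6):
    """Pull the enhanced image toward unpaired normal-light positives and away
    from over/underexposed negatives in VGG feature space."""
    fa = _vgg(enhanced)
    d_pos = sum(F.l1_loss(fa, _vgg(p)) for p in positives) / len(positives)
    d_neg = sum(F.l1_loss(fa, _vgg(n)) for n in negatives) / len(negatives)
    return d_pos / (d_neg + eps)

def brightness_consistency_loss(enhanced, seg_mask, num_classes):
    """One reading of 'semantic brightness consistency': penalize brightness
    variance inside each semantic region given a segmentation mask."""
    gray = enhanced.mean(dim=1, keepdim=True)      # crude luminance proxy
    loss = 0.0
    for c in range(num_classes):
        m = (seg_mask == c).float().unsqueeze(1)   # (B,1,H,W) region mask
        area = m.sum()
        if area > 0:
            mu = (gray * m).sum() / area
            loss = loss + (((gray - mu) * m) ** 2).sum() / area
    return loss / num_classes

def feature_preservation_loss(enhanced, low_light):
    """Keep the enhanced output close to the input in deep feature space,
    protecting texture and color structure."""
    return F.l1_loss(_vgg(enhanced), _vgg(low_light))

def scl_lle_loss(enhanced, low_light, positives, negatives, seg_mask,
                 num_classes=19, weights=(1.0, 1.0, 1.0)):  # weights assumed
    return (weights[0] * contrastive_loss(enhanced, positives, negatives)
            + weights[1] * brightness_consistency_loss(enhanced, seg_mask, num_classes)
            + weights[2] * feature_preservation_loss(enhanced, low_light))
```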
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, helping to enhance images with natural colors.
We also propose a novel perceptive loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textual details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
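As a rough illustration of the mean-teacher ingredient named above, here is a minimal PyTorch sketch: the teacher is an exponential-moving-average (EMA) copy of the student and supplies pseudo-targets for unpaired low-light images. The momentum value and the L1 consistency loss are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track the student as an exponential moving average."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def consistency_loss(student, teacher, unpaired_low):
    """For unpaired low-light images, the teacher's output is a pseudo-target."""
    with torch.no_grad():
        target = teacher(unpaired_low)
    return F.l1_loss(student(unpaired_low), target)

# Before training: teacher = copy.deepcopy(student), then call ema_update
# after every student optimizer step.
```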
- DAP-LED: Learning Degradation-Aware Priors with CLIP for Joint Low-light Enhancement and Deblurring [14.003870853594972]
We propose a novel transformer-based joint learning framework, named DAP-LED.
It can jointly achieve low-light enhancement and deblurring, benefiting downstream tasks, such as depth estimation, segmentation, and detection in the dark.
The key insight is to leverage CLIP to adaptively learn the degradation levels from images at night.
arXiv Detail & Related papers (2024-09-20T13:37:53Z)
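A minimal sketch of the CLIP-as-degradation-prior idea mentioned above, using the OpenAI CLIP package: compare an image against text prompts describing lighting/blur conditions and read the softmax similarities as degradation scores. The prompt set below is a made-up example, not the authors' design.

```python
import torch
import clip                                 # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical degradation prompts; the paper's actual prompt design differs.
prompts = ["a well-lit sharp photo", "a dark photo", "a dark and blurry photo"]
text = clip.tokenize(prompts).to(device)

def degradation_scores(path):
    """Softmax similarity over the prompts; weight on the darker/blurrier
    prompts indicates a stronger degradation level."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
    return logits_per_image.softmax(dim=-1).squeeze(0)
```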
- Learning Semantic-Aware Knowledge Guidance for Low-Light Image Enhancement [69.47143451986067]
Low-light image enhancement (LLIE) investigates how to improve illumination and produce normal-light images.
The majority of existing methods improve low-light images in a global and uniform manner, without taking into account the semantic information of different regions.
We propose a novel semantic-aware knowledge-guided framework that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
arXiv Detail & Related papers (2023-04-14T10:22:28Z)
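One common way to realize such segmentation-model guidance is feature matching against a frozen segmentation backbone. The sketch below assumes a torchvision DeepLabV3 backbone and a learned 1x1 projection layer; the paper's actual guidance mechanism may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

seg = deeplabv3_resnet50(weights="DEFAULT").eval()   # frozen semantic prior
for p in seg.parameters():
    p.requires_grad_(False)

def semantic_guidance_loss(enh_feat, enhanced_img, proj):
    """proj: a learned 1x1 conv aligning the enhancement network's channels
    to the segmentation backbone's (2048 for ResNet-50); an assumption."""
    with torch.no_grad():
        sem_feat = seg.backbone(enhanced_img)["out"]   # deep semantic features
    sem_feat = F.adaptive_avg_pool2d(sem_feat, enh_feat.shape[-2:])
    return F.l1_loss(proj(enh_feat), sem_feat)
```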
- Iterative Prompt Learning for Unsupervised Backlit Image Enhancement [86.90993077000789]
We propose a novel unsupervised backlit image enhancement method, abbreviated as CLIP-LIT.
We show that the open-world CLIP prior aids in distinguishing between backlit and well-lit images.
Our method alternates between updating the prompt learning framework and the enhancement network until visually pleasing results are achieved.
arXiv Detail & Related papers (2023-03-30T17:37:14Z)
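The alternating update described above can be sketched as a simple two-phase training loop; the loss functions here are placeholders standing in for CLIP-LIT's prompt-classification and ranking losses, and the iteration counts are assumptions.

```python
def train_alternating(prompt_opt, enh_opt, rounds, prompt_loss_fn, enh_loss_fn,
                      prompt_iters=100, enh_iters=100):
    """Alternate between (1) fitting prompts that separate backlit from
    well-lit images and (2) training the enhancer against the frozen prompts."""
    for _ in range(rounds):
        for _ in range(prompt_iters):
            loss = prompt_loss_fn()        # e.g. a CLIP-space ranking loss
            prompt_opt.zero_grad()
            loss.backward()
            prompt_opt.step()
        for _ in range(enh_iters):
            loss = enh_loss_fn()           # enhancer loss under current prompts
            enh_opt.zero_grad()
            loss.backward()
            enh_opt.step()
```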
- Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z)
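A minimal sketch of combining a CLIP-style contrastive loss with a non-contrastive term in one multi-task objective, in the spirit of xCLIP. The cosine-similarity term below is a simple stand-in, not the paper's actual nCLIP objective.

```python
import torch
import torch.nn.functional as F

def clip_contrastive(img_emb, txt_emb, temperature=0.07):
    """Standard symmetric InfoNCE over matched image-text pairs in a batch."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(len(img_emb), device=img_emb.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

def xclip_style_loss(img_emb, txt_emb, weight=1.0):
    # Non-contrastive stand-in: pull matched pairs together without negatives.
    ncl = 1.0 - F.cosine_similarity(img_emb, txt_emb, dim=-1).mean()
    return clip_contrastive(img_emb, txt_emb) + weight * ncl
```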
- Toward Fast, Flexible, and Robust Low-Light Image Enhancement [87.27326390675155]
We develop a new Self-Calibrated Illumination (SCI) learning framework for fast, flexible, and robust image brightening in real-world low-light scenarios.
Considering the computational burden of the cascaded pattern, we construct a self-calibrated module that encourages the results of each stage to converge.
We comprehensively explore SCI's inherent properties, including operation-insensitive adaptability and model-irrelevant generality.
arXiv Detail & Related papers (2022-04-21T14:40:32Z)
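A minimal sketch of the cascaded, weight-sharing pattern behind SCI: every training stage reuses one basic illumination block, a small calibration branch feeds each stage's result back toward the input, and inference runs a single stage. The Retinex-style division and the calibration layer are assumptions, not the paper's exact modules.

```python
import torch
import torch.nn as nn

class SCISketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One weight-sharing illumination block reused by every stage.
        self.illum = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
        # Self-calibration feeds a stage's result back toward the input.
        self.calib = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x, stages=3):
        inp, outputs = x, []
        for _ in range(stages):              # cascade used only in training
            illum = self.illum(inp).clamp(min=1e-3)
            enhanced = x / illum             # Retinex-style decomposition
            outputs.append(enhanced)
            inp = x + self.calib(enhanced)   # calibrated input for next stage
        return outputs                       # inference: run a single stage
```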
- ExCon: Explanation-driven Supervised Contrastive Learning for Image Classification [12.109442912963969]
We propose to leverage saliency-based explanation methods to create content-preserving masked augmentations for contrastive learning.
Our explanation-driven supervised contrastive learning (ExCon) methodology serves the dual goals of encouraging nearby image embeddings to share similar content and similar explanations.
We demonstrate that ExCon outperforms vanilla supervised contrastive learning in terms of classification accuracy, explanation quality, adversarial robustness, and calibration of probabilistic predictions under distributional shift.
arXiv Detail & Related papers (2021-11-28T23:15:26Z)
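A minimal sketch of a saliency-driven, content-preserving augmentation in the spirit of ExCon: keep only the most salient pixels (per an explanation/saliency map) and use the masked image as a positive view. The top-k threshold policy is an assumption.

```python
import torch

def explanation_masked_view(image, saliency, keep_ratio=0.5):
    """image: (C,H,W); saliency: (H,W) non-negative importance scores.
    Keeps the top-k salient pixels, zeroing out the rest."""
    k = max(1, int(saliency.numel() * keep_ratio))
    thresh = saliency.flatten().kthvalue(saliency.numel() - k + 1).values
    mask = (saliency >= thresh).float()
    return image * mask                      # content-preserving positive view
```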
- Enhance Images as You Like with Unpaired Learning [8.104571453311442]
We propose a lightweight one-path conditional generative adversarial network (cGAN) to learn a one-to-many relation from low-light to normal-light image space.
Our network learns to generate a collection of enhanced images from a given input conditioned on various reference images.
Our model achieves visual and quantitative results on par with fully supervised methods on both noisy and clean datasets.
arXiv Detail & Related papers (2021-10-04T03:00:44Z)
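A minimal sketch of reference-conditioned one-to-many generation as described above: the generator takes the low-light input plus a style code extracted from a reference image, so varying the reference varies the output. The tiny architecture below is illustrative only, not the paper's network.

```python
import torch
import torch.nn as nn

class RefConditionedGenerator(nn.Module):
    def __init__(self, style_dim=8):
        super().__init__()
        # Reference image -> compact style code (illustrative sizes).
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, style_dim))
        # Low-light image + broadcast style code -> enhanced image.
        self.gen = nn.Sequential(
            nn.Conv2d(3 + style_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, low_light, reference):
        s = self.style_enc(reference)                    # (B, style_dim)
        s = s[:, :, None, None].expand(-1, -1, *low_light.shape[-2:])
        return self.gen(torch.cat([low_light, s], dim=1))
```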
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.