Self-supervision via Controlled Transformation and Unpaired Self-conditioning for Low-light Image Enhancement
- URL: http://arxiv.org/abs/2503.00642v1
- Date: Sat, 01 Mar 2025 22:25:49 GMT
- Title: Self-supervision via Controlled Transformation and Unpaired Self-conditioning for Low-light Image Enhancement
- Authors: Aupendu Kar, Sobhan K. Dhara, Debashis Sen, Prabir K. Biswas
- Abstract summary: Real-world low-light images captured by imaging devices suffer from poor visibility and require a domain-specific enhancement to produce artifact-free outputs. We propose an unpaired low-light image enhancement network leveraging novel controlled transformation-based self-supervision and unpaired self-conditioning strategies.
- Score: 6.312266245317324
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Real-world low-light images captured by imaging devices suffer from poor visibility and require a domain-specific enhancement to produce artifact-free outputs that reveal details. In this paper, we propose an unpaired low-light image enhancement network leveraging novel controlled transformation-based self-supervision and unpaired self-conditioning strategies. The model determines the required degrees of enhancement at the input image pixels, which are learned from the unpaired low-lit and well-lit images without any direct supervision. The self-supervision is based on a controlled transformation of the input image and subsequent maintenance of its enhancement in spite of the transformation. The self-conditioning performs training of the model on unpaired images such that it does not enhance an already-enhanced image or a well-lit input image. The inherent noise in the input low-light images is handled by employing low gradient magnitude suppression in a detail-preserving manner. In addition, our noise handling is self-conditioned by preventing the denoising of noise-free well-lit images. The training based on low-light image enhancement-specific attributes allows our model to avoid paired supervision without compromising significantly in performance. While our proposed self-supervision aids consistent enhancement, our novel self-conditioning facilitates adequate enhancement. Extensive experiments on multiple standard datasets demonstrate that our model, in general, outperforms the state-of-the-art both quantitatively and subjectively. Ablation studies show the effectiveness of our self-supervision and self-conditioning strategies, and the related loss functions.
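The abstract's two training signals can be illustrated with a toy sketch. Everything below is hypothetical: `enhance` is a fixed gamma curve standing in for the learned per-pixel enhancement model, and `controlled_transform` is a simple brightness scaling standing in for the paper's controlled transformation. The two loss functions only mirror the stated ideas, namely that enhancement should stay consistent under a controlled transformation of the input, and that a well-lit or already-enhanced image should pass through unchanged.

```python
import numpy as np

def enhance(img, gamma=0.45):
    """Toy stand-in for the learned enhancement network:
    a fixed pixel-wise gamma curve (hypothetical)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def controlled_transform(img, alpha=0.8):
    """A known, controlled transformation of the input,
    here a simple brightness scaling (hypothetical)."""
    return np.clip(alpha * img, 0.0, 1.0)

def self_supervision_loss(img, alpha=0.8):
    """Consistency under the controlled transformation: enhancing the
    transformed input should agree with enhancing the original."""
    e_orig = enhance(img)
    e_trans = enhance(controlled_transform(img, alpha))
    return float(np.mean((e_orig - e_trans) ** 2))

def self_conditioning_loss(img):
    """Self-conditioning: re-enhancing an already-enhanced image
    should change nothing, so any further change is penalized."""
    once = enhance(img)
    twice = enhance(once)
    return float(np.mean((twice - once) ** 2))

# During training, both terms would be driven toward zero on
# unpaired low-lit and well-lit images; no paired supervision needed.
low = np.random.default_rng(0).random((8, 8))
print(self_supervision_loss(low), self_conditioning_loss(low))
```

For a perfectly well-lit input (all ones here), the toy `enhance` is already the identity, so the self-conditioning loss is exactly zero, which is the behavior the paper's self-conditioning enforces on real well-lit images.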
Related papers
- Learning Flow Fields in Attention for Controllable Person Image Generation [59.10843756343987]
Controllable person image generation aims to generate a person image conditioned on reference images. We propose learning flow fields in attention (Leffa), which explicitly guides the target query to attend to the correct reference key. Leffa achieves state-of-the-art performance in controlling appearance (virtual try-on) and pose (pose transfer), significantly reducing fine-grained detail distortion.
arXiv Detail & Related papers (2024-12-11T15:51:14Z)
- Leveraging Content and Context Cues for Low-Light Image Enhancement [25.97198463881292]
Low-light conditions have an adverse impact on machine cognition, limiting the performance of computer vision systems in real life. We propose to improve the existing zero-reference low-light enhancement by leveraging the CLIP model to capture image prior and for semantic guidance. We experimentally show that the proposed prior and semantic guidance help to improve the overall image contrast and hue, as well as improve background-foreground discrimination.
arXiv Detail & Related papers (2024-12-10T17:32:09Z)
- Unsupervised Low Light Image Enhancement Using SNR-Aware Swin Transformer [0.0]
Low-light image enhancement aims at improving brightness and contrast, and reducing noise that corrupts the visual quality.
We propose a dual-branch network based on Swin Transformer, guided by a signal-to-noise ratio prior map.
arXiv Detail & Related papers (2023-06-03T11:07:56Z)
- Enhancement by Your Aesthetic: An Intelligible Unsupervised Personalized Enhancer for Low-Light Images [67.14410374622699]
We propose an intelligible unsupervised personalized enhancer (iUP-Enhancer) for low-light images.
The proposed iUP-Enhancer is trained with the guidance of these correlations and the corresponding unsupervised loss functions.
Experiments demonstrate that the proposed algorithm produces competitive qualitative and quantitative results.
arXiv Detail & Related papers (2022-07-15T07:16:10Z)
- Cycle-Interactive Generative Adversarial Network for Robust Unsupervised Low-Light Enhancement [109.335317310485]
Cycle-Interactive Generative Adversarial Network (CIGAN) is capable of not only better transferring illumination distributions between low/normal-light images but also manipulating detailed signals.
In particular, the proposed low-light guided transformation feeds the features of low-light images forward from the generator of the enhancement GAN into the generator of the degradation GAN.
arXiv Detail & Related papers (2022-07-03T06:37:46Z)
- Semi-supervised atmospheric component learning in low-light image problem [0.0]
Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices.
This study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration.
arXiv Detail & Related papers (2022-04-15T17:06:33Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage the invertible network to enhance low-light images in the forward process and degrade normal-light ones inversely with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.