Implicit Neural Representation for Cooperative Low-light Image
Enhancement
- URL: http://arxiv.org/abs/2303.11722v3
- Date: Tue, 22 Aug 2023 11:01:57 GMT
- Title: Implicit Neural Representation for Cooperative Low-light Image
Enhancement
- Authors: Shuzhou Yang and Moxuan Ding and Yanmin Wu and Zihan Li and Jian Zhang
- Abstract summary: We propose an implicit Neural Representation method for Cooperative low-light image enhancement, dubbed NeRCo.
NeRCo unifies the diverse degradation factors of real-world scenes with a controllable fitting function, leading to better robustness.
For the output results, we introduce semantic-orientated supervision with priors from the pre-trained vision-language model.
- Score: 10.484180571326565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The following three factors restrict the application of existing low-light
image enhancement methods: unpredictable brightness degradation and noise,
inherent gap between metric-favorable and visual-friendly versions, and the
limited paired training data. To address these limitations, we propose an
implicit Neural Representation method for Cooperative low-light image
enhancement, dubbed NeRCo. It robustly recovers perceptual-friendly results in
an unsupervised manner. Concretely, NeRCo unifies the diverse degradation
factors of real-world scenes with a controllable fitting function, leading to
better robustness. In addition, for the output results, we introduce
semantic-orientated supervision with priors from the pre-trained
vision-language model. Instead of merely following reference images, it
encourages results to meet subjective expectations, finding more
visual-friendly solutions. Further, to ease the reliance on paired data and
reduce solution space, we develop a dual-closed-loop constrained enhancement
module. It is trained cooperatively with other affiliated modules in a
self-supervised manner. Finally, extensive experiments demonstrate the
robustness and superior effectiveness of our proposed NeRCo. Our code is
available at https://github.com/Ysz2022/NeRCo.
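The core ingredient named in the abstract is an implicit neural representation: a coordinate MLP that fits a continuous function over the image plane. Below is a minimal, self-contained PyTorch sketch of that general idea; the Fourier positional encoding, layer sizes, and RGB output head are illustrative assumptions, not NeRCo's actual architecture (see the linked repository for the real implementation).

```python
# Minimal sketch of an implicit neural representation (INR): an MLP that
# maps pixel coordinates to pixel values. All sizes and the encoding depth
# are illustrative assumptions, not NeRCo's actual configuration.
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Fourier features: lift (x, y) coordinates to higher frequencies so
    the MLP can represent fine detail."""
    def __init__(self, num_freqs: int = 6):
        super().__init__()
        self.freqs = 2.0 ** torch.arange(num_freqs)  # 1, 2, 4, ...

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2), values in [-1, 1]
        parts = [coords]
        for f in self.freqs:
            parts += [torch.sin(f * math.pi * coords),
                      torch.cos(f * math.pi * coords)]
        return torch.cat(parts, dim=-1)

class CoordinateMLP(nn.Module):
    """Fits a continuous function over the image plane, e.g. a smooth
    normalization map intended to absorb diverse degradation factors."""
    def __init__(self, num_freqs: int = 6, hidden: int = 256):
        super().__init__()
        in_dim = 2 + 2 * 2 * num_freqs  # raw coords + sin/cos features
        self.encode = PositionalEncoding(num_freqs)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # per-pixel RGB in [0, 1]
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(self.encode(coords))

# Query the representation on a dense pixel grid.
h = w = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                        torch.linspace(-1, 1, w), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
image = CoordinateMLP()(coords).reshape(h, w, 3)
print(image.shape)  # torch.Size([64, 64, 3])
```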
Related papers
- Unpaired Image Dehazing via Kolmogorov-Arnold Transformation of Latent Features [0.0]
This paper proposes an innovative framework for Unsupervised Image Dehazing via Kolmogorov-Arnold Transformation (UID-KAT).
The proposed UID-KAT framework is trained in an unsupervised setting to take advantage of the abundance of real-world data and address the challenge of preparing paired/clean images.
arXiv Detail & Related papers (2025-02-08T12:24:49Z)
- Zero-Reference Lighting Estimation Diffusion Model for Low-Light Image Enhancement [2.9873893715462185]
We propose a novel zero-reference lighting estimation diffusion model for low-light image enhancement called Zero-LED.
It utilizes the stable convergence ability of diffusion models to bridge the gap between low-light domains and real normal-light domains.
It successfully alleviates the dependence on pairwise training data via zero-reference learning (a toy zero-reference loss is sketched after this entry).
arXiv Detail & Related papers (2024-03-05T11:39:17Z)
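"Zero-reference" means the training signal is derived from the low-light input itself rather than from paired ground truth. The exposure loss below is a generic illustration of that idea in the spirit of zero-reference enhancement methods; the target brightness and patch size are arbitrary assumptions, and this is not Zero-LED's actual objective.

```python
# Generic zero-reference exposure loss: push the local mean brightness of
# the enhanced output toward a target level without any reference image.
# target_level=0.6 and patch=16 are illustrative assumptions.
import torch
import torch.nn.functional as F

def exposure_loss(enhanced: torch.Tensor, target_level: float = 0.6,
                  patch: int = 16) -> torch.Tensor:
    """enhanced: (B, 3, H, W), values in [0, 1]."""
    gray = enhanced.mean(dim=1, keepdim=True)      # (B, 1, H, W)
    local_mean = F.avg_pool2d(gray, patch)         # per-patch brightness
    return ((local_mean - target_level) ** 2).mean()

print(exposure_loss(torch.rand(2, 3, 64, 64)).item())
```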
- Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote Sensing Imagery [48.14610248492785]
Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements at the data and methodology fronts to tackle this challenge.
arXiv Detail & Related papers (2024-01-25T13:14:17Z)
- VIBR: Learning View-Invariant Value Functions for Robust Visual Control [3.2307366446033945]
VIBR (View-Invariant Bellman Residuals) combines multi-view training and invariant prediction to reduce the out-of-distribution gap for RL-based visuomotor control.
We show that VIBR outperforms existing methods on complex visuomotor control environments with high visual perturbation.
arXiv Detail & Related papers (2023-06-14T14:37:34Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input; a toy version of one such constraint is sketched after this entry.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
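One standard "constraint between illumination and input" in Retinex-style enhancement is that, with reflectance bounded in [0, 1], the illumination map must be pointwise at least as bright as the input. The clamp below is a toy illustration of enforcing such a constraint; it is not the paper's Learnable Illumination Interpolator.

```python
# Toy Retinex-style constraint: if I = L * R with R in [0, 1], then L >= I
# pointwise, so a predicted illumination map can be clamped from below by
# the input's maximum channel. A sketch only, not the paper's LII module.
import torch

def constrain_illumination(pred_illum: torch.Tensor,
                           low_img: torch.Tensor) -> torch.Tensor:
    """pred_illum: (B, 1, H, W); low_img: (B, 3, H, W), values in [0, 1]."""
    lower = low_img.max(dim=1, keepdim=True).values  # brightest channel
    return torch.maximum(pred_illum, lower)

low = torch.rand(1, 3, 8, 8)
illum = constrain_illumination(torch.rand(1, 1, 8, 8), low)
reflectance = low / illum.clamp(min=1e-4)
print(bool((reflectance <= 1.0 + 1e-6).all()))  # True: R stays valid
```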
- Retinex Image Enhancement Based on Sequential Decomposition With a Plug-and-Play Framework [16.579397398441102]
We design a plug-and-play framework based on the Retinex theory for simultaneous image enhancement and noise removal (the underlying decomposition is sketched after this entry).
Our framework outcompetes the state-of-the-art methods in both image enhancement and denoising.
arXiv Detail & Related papers (2022-10-11T13:29:10Z)
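For context, the classical Retinex model behind this line of work treats an image as the pixelwise product of illumination and reflectance, I = L ⊙ R; enhancement brightens L while keeping R fixed. The snippet below is a bare-bones single-scale illustration (box-filtered illumination estimate, gamma brightening), not the paper's plug-and-play framework.

```python
# Bare-bones Retinex enhancement: estimate a smooth illumination L, take
# R = I / L, then recombine with a brightened L^gamma. Kernel size and
# gamma are illustrative assumptions.
import torch
import torch.nn.functional as F

def retinex_enhance(img: torch.Tensor, k: int = 15,
                    gamma: float = 0.5) -> torch.Tensor:
    """img: (B, 1, H, W) grayscale, values in (0, 1]."""
    illum = F.avg_pool2d(img, k, stride=1, padding=k // 2) + 1e-6  # smooth L
    reflect = img / illum                                 # R = I / L
    return (reflect * illum.pow(gamma)).clamp(0.0, 1.0)  # I' = R * L^gamma

dark = torch.rand(1, 1, 64, 64) * 0.2                    # dim input
print(float(dark.mean()), float(retinex_enhance(dark).mean()))  # brighter
```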
- Robust Single Image Dehazing Based on Consistent and Contrast-Assisted Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model (the classical scattering model behind single-image dehazing is sketched after this entry).
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z)
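Single-image dehazing methods, including robustness-oriented ones like this, generally start from the atmospheric scattering model I = J·t + A·(1 − t), where J is the clear scene, t the transmission map, and A the airlight. Below is a hedged sketch of inverting that model given (assumed) estimates of t and A; it is not the paper's density-variational framework.

```python
# Inverting the atmospheric scattering model I = J*t + A*(1 - t):
# J = (I - A) / t + A. The airlight value and transmission floor are
# illustrative assumptions.
import torch

def dehaze(hazy: torch.Tensor, t: torch.Tensor, airlight: float = 0.9,
           t_min: float = 0.1) -> torch.Tensor:
    """hazy: (B, 3, H, W); t: (B, 1, H, W), transmission in (0, 1]."""
    t = t.clamp(min=t_min)                 # floor t to avoid noise blow-up
    return ((hazy - airlight) / t + airlight).clamp(0.0, 1.0)

hazy = torch.rand(1, 3, 32, 32)
clear = dehaze(hazy, torch.full((1, 1, 32, 32), 0.7))
print(clear.shape)  # torch.Size([1, 3, 32, 32])
```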
- Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement [58.72667941107544]
A typical framework is to simultaneously estimate the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network (CSDNet) architecture to exploit scene-level contextual dependencies across spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
arXiv Detail & Related papers (2021-12-09T06:25:30Z)
- Visual Alignment Constraint for Continuous Sign Language Recognition [74.26707067455837]
Vision-based Continuous Sign Language Recognition (CSLR) aims to recognize unsegmented gestures from image sequences.
In this work, we revisit the overfitting problem in recent CTC-based CSLR works and attribute it to the insufficient training of the feature extractor.
We propose a Visual Alignment Constraint (VAC) to enhance the feature extractor with more alignment supervision.
arXiv Detail & Related papers (2021-04-06T07:24:58Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We train a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
- Semi-Supervised StyleGAN for Disentanglement Learning [79.01988132442064]
Current disentanglement methods face several inherent limitations.
We design new architectures and loss functions based on StyleGAN for semi-supervised high-resolution disentanglement learning.
arXiv Detail & Related papers (2020-03-06T22:54:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.