Learning Deep Context-Sensitive Decomposition for Low-Light Image
Enhancement
- URL: http://arxiv.org/abs/2112.05147v1
- Date: Thu, 9 Dec 2021 06:25:30 GMT
- Title: Learning Deep Context-Sensitive Decomposition for Low-Light Image
Enhancement
- Authors: Long Ma, Risheng Liu, Jiaao Zhang, Xin Fan, Zhongxuan Luo
- Abstract summary: A typical framework is to simultaneously estimate the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces.
We develop a new context-sensitive decomposition network architecture to exploit the scene-level contextual dependencies on spatial scales.
We develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels.
- Score: 58.72667941107544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enhancing the quality of low-light images plays a very important role in many
image processing and multimedia applications. In recent years, a variety of
deep learning techniques have been developed to address this challenging task.
A typical framework is to simultaneously estimate the illumination and
reflectance, but such methods disregard the scene-level contextual information
encapsulated in feature spaces, causing unfavorable outcomes such as loss of
detail, color desaturation, and artifacts. To address these
issues, we develop a new context-sensitive decomposition network architecture
to exploit the scene-level contextual dependencies on spatial scales. More
concretely, we build a two-stream estimation mechanism consisting of
reflectance and illumination estimation networks, and design a novel
context-sensitive decomposition connection that bridges the two streams by
incorporating the physical principle. Spatially-varying illumination guidance
is further constructed to achieve edge-aware smoothness of the
illumination component. According to different training patterns, we construct
CSDNet (paired supervision) and CSDGAN (unpaired supervision) to fully evaluate
our designed architecture. We test our method on seven benchmarks and conduct
extensive analytical and comparative experiments. Thanks to the designed
context-sensitive decomposition connection, our method achieves excellent
enhancement results, demonstrating its superiority over existing
state-of-the-art approaches. Finally, considering practical demands for high
efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing
the number of channels. Further, by sharing an encoder for these two
components, we obtain a more lightweight version (SLiteCSDNet for short).
SLiteCSDNet contains only 0.0301M parameters yet achieves almost the same
performance as CSDNet.
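The two-stream decomposition above builds on the Retinex physical principle: an observed image factors into reflectance and illumination, I = R ∘ L. As a point of reference, the classical (non-learned) version of this idea can be sketched in a few lines of NumPy. The box blur below is an illustrative stand-in for an illumination estimator, not the learned networks of CSDNet; all function names and parameters here are hypothetical.

```python
import numpy as np

def box_blur(img, k=15):
    """Separable box blur: a crude smoothness prior for the illumination map."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Blur rows, then columns; 'valid' convolution restores the original size.
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def retinex_decompose(image, k=15):
    """Classical single-scale Retinex-style split of a grayscale image.

    Returns (R, L) such that R * L reconstructs the (epsilon-shifted) input.
    """
    I = image.astype(np.float64) + 1e-6   # avoid division by zero
    L = box_blur(I, k) + 1e-6             # smooth illumination estimate
    R = I / L                             # reflectance via I = R * L
    return R, L

def enhance(image, gamma=0.5, k=15):
    """Brighten by gamma-adjusting the illumination, then recompose."""
    R, L = retinex_decompose(image, k)
    L_adj = np.power(L / L.max(), gamma) * L.max()  # gamma < 1 lifts dark regions
    return np.clip(R * L_adj, 0.0, 1.0)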
Related papers
- Latent Disentanglement for Low Light Image Enhancement [4.527270266697463]
We propose a Latent Disentangle-based Enhancement Network (LDE-Net) for low light vision tasks.
The latent disentanglement module disentangles the input image in latent space such that no corruption remains in the disentangled Content and Illumination components.
For downstream tasks (e.g. nighttime UAV tracking and low-light object detection), we develop an effective light-weight enhancer based on the latent disentanglement framework.
arXiv Detail & Related papers (2024-08-12T15:54:46Z)
- Fast Context-Based Low-Light Image Enhancement via Neural Implicit Representations [6.113035634680655]
Current deep learning-based low-light image enhancement methods often struggle with high-resolution images.
We introduce a novel approach termed CoLIE, which redefines the enhancement process through mapping the 2D coordinates of an underexposed image to its illumination component.
arXiv Detail & Related papers (2024-07-17T11:51:52Z)
- Specularity Factorization for Low-Light Enhancement [2.7961648901433134]
We present a new additive image factorization technique that treats images to be composed of multiple latent components.
Our model-driven RSFNet estimates these factors by unrolling the optimization into network layers.
The resultant factors are interpretable by design and can be fused for different image enhancement tasks via a network or combined directly by the user.
arXiv Detail & Related papers (2024-04-02T14:41:42Z)
- Bilevel Fast Scene Adaptation for Low-Light Image Enhancement [50.639332885989255]
Enhancing images in low-light scenes is a challenging but widely studied task in computer vision.
The main obstacle lies in the modeling conundrum stemming from distribution discrepancy across different scenes.
We introduce the bilevel paradigm to model the above latent correspondence.
A bilevel learning framework is constructed to endow the scene-irrelevant generality of the encoder towards diverse scenes.
arXiv Detail & Related papers (2023-06-02T08:16:21Z)
- Searching a Compact Architecture for Robust Multi-Exposure Image Fusion [55.37210629454589]
Two major stumbling blocks hinder development: pixel misalignment and inefficient inference.
This study introduces an architecture search-based paradigm incorporating self-alignment and detail repletion modules for robust multi-exposure image fusion.
The proposed method outperforms various competitive schemes, achieving a noteworthy 3.19% improvement in PSNR for general scenarios and an impressive 23.5% enhancement in misaligned scenarios.
arXiv Detail & Related papers (2023-05-20T17:01:52Z)
- SCRNet: a Retinex Structure-based Low-light Enhancement Model Guided by Spatial Consistency [22.54951703413469]
We present a novel low-light image enhancement model, termed Spatial Consistency Retinex Network (SCRNet)
Our proposed model incorporates three levels of consistency: channel level, semantic level, and texture level, inspired by the principle of spatial consistency.
Extensive evaluations on various low-light image datasets demonstrate that our proposed SCRNet outshines existing state-of-the-art methods.
arXiv Detail & Related papers (2023-05-14T03:32:19Z)
- Learning Detail-Structure Alternative Optimization for Blind Super-Resolution [69.11604249813304]
We propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without blur kernel prior incorporation for blind SR.
In our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures.
Our method achieves state-of-the-art performance against existing methods.
arXiv Detail & Related papers (2022-12-03T14:44:17Z)
- Deep Decomposition and Bilinear Pooling Network for Blind Night-Time Image Quality Evaluation [46.828620017822644]
We propose a novel deep decomposition and bilinear pooling network (DDB-Net) to better address this issue.
The DDB-Net contains three modules, i.e., an image decomposition module, a feature encoding module, and a bilinear pooling module.
The superiority of the proposed DDB-Net is well validated by extensive experiments on two publicly available night-time image databases.
arXiv Detail & Related papers (2022-05-12T05:16:24Z)
- Multi-Content Complementation Network for Salient Object Detection in Optical Remote Sensing Images [108.79667788962425]
Salient object detection in optical remote sensing images (RSI-SOD) remains a challenging emerging topic.
We propose a novel Multi-Content Complementation Network (MCCNet) to explore the complementarity of multiple content for RSI-SOD.
In MCCM, we consider multiple types of features that are critical to RSI-SOD, including foreground features, edge features, background features, and global image-level features.
arXiv Detail & Related papers (2021-12-02T04:46:40Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goal of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.