Learning Semantic-Aware Knowledge Guidance for Low-Light Image
Enhancement
- URL: http://arxiv.org/abs/2304.07039v1
- Date: Fri, 14 Apr 2023 10:22:28 GMT
- Title: Learning Semantic-Aware Knowledge Guidance for Low-Light Image
Enhancement
- Authors: Yuhui Wu, Chen Pan, Guoqing Wang, Yang Yang, Jiwei Wei, Chongyi Li,
Heng Tao Shen
- Abstract summary: Low-light image enhancement (LLIE) investigates how to improve illumination and produce normal-light images.
The majority of existing methods improve low-light images in a global and uniform manner, without taking into account the semantic information of different regions.
We propose a novel semantic-aware knowledge-guided framework that can assist a low-light enhancement model in learning rich and diverse priors encapsulated in a semantic segmentation model.
- Score: 69.47143451986067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light image enhancement (LLIE) investigates how to improve illumination
and produce normal-light images. The majority of existing methods improve
low-light images in a global and uniform manner, without taking into account
the semantic information of different regions. Without semantic priors, a
network may easily deviate from a region's original color. To address this
issue, we propose a novel semantic-aware knowledge-guided framework (SKF) that
can assist a low-light enhancement model in learning rich and diverse priors
encapsulated in a semantic segmentation model. We concentrate on incorporating
semantic knowledge from three key aspects: a semantic-aware embedding module
that adaptively integrates semantic priors in the feature representation space, a
semantic-guided color histogram loss that preserves color consistency of
various instances, and a semantic-guided adversarial loss that produces more
natural textures guided by semantic priors. Our SKF is appealing as a
general framework for the LLIE task. Extensive experiments show that models equipped
with the SKF significantly outperform the baselines on multiple datasets, and
our SKF generalizes well to different models and scenes. The code is available
at Semantic-Aware-Low-Light-Image-Enhancement.
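The semantic-guided color histogram loss described above can be illustrated with a minimal sketch: per-region color histograms are compared between the enhanced image and a reference, so color consistency is enforced instance by instance rather than globally. The function names, the 8-bin histograms, and the L1 distance are illustrative assumptions for this sketch, not the authors' implementation (which would use differentiable histograms inside a training loop):

```python
import numpy as np

def region_color_histogram(img, mask, region_id, bins=8):
    """Per-channel color histogram over the pixels of one semantic region.

    img:  H x W x 3 float array with values in [0, 1]
    mask: H x W integer array of semantic labels
    """
    pixels = img[mask == region_id]          # N x 3 pixels in this region
    hists = []
    for c in range(3):
        h, _ = np.histogram(pixels[:, c], bins=bins, range=(0.0, 1.0))
        h = h / max(h.sum(), 1)              # normalize to a distribution
        hists.append(h)
    return np.stack(hists)                   # 3 x bins

def semantic_histogram_loss(enhanced, reference, mask, bins=8):
    """Mean L1 distance between per-region color histograms."""
    losses = []
    for region_id in np.unique(mask):
        h_enh = region_color_histogram(enhanced, mask, region_id, bins)
        h_ref = region_color_histogram(reference, mask, region_id, bins)
        losses.append(np.abs(h_enh - h_ref).mean())
    return float(np.mean(losses))
```

Because the histograms are pooled per semantic region, a network penalized with this loss is discouraged from shifting the color distribution of, say, sky pixels to match that of foliage, which is the failure mode a purely global loss permits.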
Related papers
- Natural Language Supervision for Low-light Image Enhancement [0.0]
We introduce a Natural Language Supervision (NLS) strategy, which learns feature maps from text corresponding to images.
We also design a Textual Guidance Conditioning Mechanism (TCM) that incorporates the connections between image regions and sentence words.
In order to effectively identify and merge features from various levels of image and textual information, we design an Information Fusion Attention (IFA) module.
arXiv Detail & Related papers (2025-01-11T13:53:10Z)
- AFANet: Adaptive Frequency-Aware Network for Weakly-Supervised Few-Shot Semantic Segmentation [37.9826204492371]
Few-shot learning aims to recognize novel concepts by leveraging prior knowledge learned from a few samples.
We propose an adaptive frequency-aware network (AFANet) for weakly-supervised few-shot semantic segmentation.
arXiv Detail & Related papers (2024-12-23T14:20:07Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- PROMPT-IML: Image Manipulation Localization with Pre-trained Foundation Models Through Prompt Tuning [35.39822183728463]
We present a novel Prompt-IML framework for detecting tampered images.
Humans tend to discern the authenticity of an image from its semantic and high-frequency information.
Our model can achieve better performance on eight typical fake image datasets.
arXiv Detail & Related papers (2024-01-01T03:45:07Z)
- CoSeR: Bridging Image and Language for Cognitive Super-Resolution [74.24752388179992]
We introduce the Cognitive Super-Resolution (CoSeR) framework, empowering SR models with the capacity to comprehend low-resolution images.
We achieve this by marrying image appearance and language understanding to generate a cognitive embedding.
To further improve image fidelity, we propose a novel condition injection scheme called "All-in-Attention".
arXiv Detail & Related papers (2023-11-27T16:33:29Z)
- Edge Guided GANs with Multi-Scale Contrastive Learning for Semantic Image Synthesis [139.2216271759332]
We propose a novel ECGAN for the challenging semantic image synthesis task.
The semantic labels do not provide detailed structural information, making it challenging to synthesize local details and structures.
The widely adopted CNN operations such as convolution, down-sampling, and normalization usually cause spatial resolution loss.
We propose a novel contrastive learning method, which aims to enforce pixel embeddings belonging to the same semantic class to generate more similar image content.
arXiv Detail & Related papers (2023-07-22T14:17:19Z)
- Semantically Contrastive Learning for Low-light Image Enhancement [48.71522073014808]
Low-light image enhancement (LLE) remains challenging due to the prevailing low contrast and weak visibility of single RGB images.
We propose an effective semantically contrastive learning paradigm for LLE (namely SCL-LLE)
Our method surpasses state-of-the-art LLE models on six independent cross-scene datasets.
arXiv Detail & Related papers (2021-12-13T07:08:33Z)
- Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z)
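The global context module above models correlations between all spatial positions, a non-local attention idea. A minimal numpy sketch of that idea follows; the projection matrices, scaling, and residual connection are common non-local-block conventions assumed for illustration, not the authors' exact architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context(features, wq, wk, wv):
    """Non-local style global context: each spatial position attends to all others.

    features: (H*W) x C feature map flattened over space
    wq, wk, wv: C x C learned projection matrices (random here)
    """
    q, k, v = features @ wq, features @ wk, features @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))    # (H*W) x (H*W) correlations
    return features + attn @ v                       # residual aggregation
```

Each row of `attn` is a distribution over every spatial position, so a dark region can borrow complementary cues from well-lit regions anywhere in the frame, which is exactly what a local convolution with a bounded receptive field cannot do.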