VQCNIR: Clearer Night Image Restoration with Vector-Quantized Codebook
- URL: http://arxiv.org/abs/2312.08606v2
- Date: Sat, 16 Dec 2023 07:45:12 GMT
- Title: VQCNIR: Clearer Night Image Restoration with Vector-Quantized Codebook
- Authors: Wenbin Zou, Hongxia Gao, Tian Ye, Liang Chen, Weipeng Yang, Shasha
Huang, Hongsheng Chen, Sixiang Chen
- Abstract summary: Night photography often struggles with challenges like low light and blurring, stemming from dark environments and prolonged exposures.
We believe in the strength of data-driven high-quality priors and strive to offer a reliable and consistent prior, circumventing the restrictions of manual priors.
We propose Clearer Night Image Restoration with Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent restoration outcomes on real-world and synthetic benchmarks.
- Score: 16.20461368096512
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Night photography often struggles with challenges like low light and
blurring, stemming from dark environments and prolonged exposures. Current
methods either disregard priors and directly fit end-to-end networks,
leading to inconsistent illumination, or rely on unreliable handcrafted priors
to constrain the network, thereby introducing greater error into the final
result. We believe in the strength of data-driven high-quality priors and
strive to offer a reliable and consistent prior, circumventing the restrictions
of manual priors. In this paper, we propose Clearer Night Image Restoration
with Vector-Quantized Codebook (VQCNIR) to achieve remarkable and consistent
restoration outcomes on real-world and synthetic benchmarks. To ensure the
faithful restoration of details and illumination, we propose the incorporation
of two essential modules: the Adaptive Illumination Enhancement Module (AIEM)
and the Deformable Bi-directional Cross-Attention (DBCA) module. The AIEM
leverages the inter-channel correlation of features to dynamically maintain
illumination consistency between degraded features and high-quality codebook
features. Meanwhile, the DBCA module effectively integrates texture and
structural information through bi-directional cross-attention and deformable
convolution, resulting in enhanced fine-grained detail and structural fidelity
across parallel decoders. Extensive experiments validate the remarkable
benefits of VQCNIR in enhancing image quality under low-light conditions,
showcasing its state-of-the-art performance on both synthetic and real-world
datasets. The code is available at https://github.com/AlexZou14/VQCNIR.
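The core operation behind a vector-quantized codebook prior, as described in the abstract, is replacing each degraded feature vector with its nearest entry in a codebook learned from high-quality images. The following is a minimal sketch of that nearest-neighbor lookup with made-up dimensions and toy data, not the authors' implementation (see the linked repository for the actual code):

```python
import numpy as np

def quantize(features, codebook):
    """Replace each feature vector with its nearest codebook entry.

    features: (N, D) array of degraded-image feature vectors.
    codebook: (K, D) array of code vectors learned from high-quality images.
    Returns the quantized features (N, D) and the chosen indices (N,).
    """
    # Squared Euclidean distance between every feature and every code,
    # expanded as ||f - c||^2 = ||f||^2 - 2 f.c + ||c||^2:
    d2 = (
        (features ** 2).sum(axis=1, keepdims=True)
        - 2.0 * features @ codebook.T
        + (codebook ** 2).sum(axis=1)
    )
    idx = d2.argmin(axis=1)  # index of the nearest code per feature
    return codebook[idx], idx

# Toy example: 4 feature vectors, a codebook of 3 entries, dimension 2.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
feats = np.array([[0.1, -0.2], [0.9, 1.2], [4.8, 5.1], [1.1, 0.8]])
quantized, idx = quantize(feats, codebook)
print(idx.tolist())  # -> [0, 1, 2, 1]
```

In a full model this lookup sits between an encoder and a decoder, so the decoder only ever sees high-quality code vectors; VQCNIR's AIEM and DBCA modules then reconcile the illumination and texture of these quantized features with the degraded input.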
Related papers
- GLARE: Low Light Image Enhancement via Generative Latent Feature based Codebook Retrieval [80.96706764868898]
We present a new Low-light Image Enhancement (LLIE) network via Generative LAtent feature based codebook REtrieval (GLARE).
We develop a generative Invertible Latent Normalizing Flow (I-LNF) module to align the low-light (LL) feature distribution with normal-light (NL) latent representations, guaranteeing correct code retrieval in the codebook.
Experiments confirm the superior performance of GLARE on various benchmark datasets and real-world data.
arXiv Detail & Related papers (2024-07-17T09:40:15Z)
- Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition [68.6707284662443]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes plagued by severe invisibility and noise.
One critical aspect is formulating a consistency constraint on the temporal-spatial illumination and appearance of the enhanced versions.
We present an innovative video Retinex-based decomposition strategy that operates without the need for explicit supervision.
arXiv Detail & Related papers (2024-05-24T15:56:40Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- CDAN: Convolutional dense attention-guided network for low-light image enhancement [2.2530496464901106]
Low-light images pose challenges of diminished clarity, muted colors, and reduced details.
This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images.
CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections.
arXiv Detail & Related papers (2023-08-24T16:22:05Z)
- Dual Associated Encoder for Face Restoration [68.49568459672076]
We propose a novel dual-branch framework named DAEFR to restore facial details from low-quality (LQ) images.
Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs.
We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-08-14T17:58:33Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- RIDCP: Revitalizing Real Image Dehazing via High-Quality Codebook Priors [14.432465539590481]
Existing dehazing approaches struggle to process real-world hazy images owing to the lack of paired real data and robust priors.
We present a new paradigm for real image dehazing from the perspectives of synthesizing more realistic hazy data.
arXiv Detail & Related papers (2023-04-08T12:12:24Z)
- Deep Decomposition and Bilinear Pooling Network for Blind Night-Time Image Quality Evaluation [46.828620017822644]
We propose a novel deep decomposition and bilinear pooling network (DDB-Net) to better address this issue.
The DDB-Net contains three modules, i.e., an image decomposition module, a feature encoding module, and a bilinear pooling module.
The superiority of the proposed DDB-Net is well validated by extensive experiments on two publicly available night-time image databases.
arXiv Detail & Related papers (2022-05-12T05:16:24Z)
- High-Fidelity GAN Inversion for Image Attribute Editing [61.966946442222735]
We present a novel high-fidelity generative adversarial network (GAN) inversion framework that enables attribute editing with image-specific details well-preserved.
With a low bit-rate latent code, previous works have difficulties in preserving high-fidelity details in reconstructed and edited images.
We propose a distortion consultation approach that employs a distortion map as a reference for high-fidelity reconstruction.
arXiv Detail & Related papers (2021-09-14T11:23:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.