Reti-Diff: Illumination Degradation Image Restoration with Retinex-based
Latent Diffusion Model
- URL: http://arxiv.org/abs/2311.11638v2
- Date: Sat, 9 Mar 2024 07:59:41 GMT
- Title: Reti-Diff: Illumination Degradation Image Restoration with Retinex-based
Latent Diffusion Model
- Authors: Chunming He, Chengyu Fang, Yulun Zhang, Tian Ye, Kai Li, Longxiang
Tang, Zhenhua Guo, Xiu Li, Sina Farsiu
- Abstract summary: Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and the Retinex-guided transformer (RGformer).
- Score: 59.08821399652483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Illumination degradation image restoration (IDIR) techniques aim to improve
the visibility of degraded images and mitigate the adverse effects of
deteriorated illumination. Among these algorithms, diffusion model (DM)-based
methods have shown promising performance but are often burdened by heavy
computational demands and pixel misalignment issues when predicting the
image-level distribution. To tackle these problems, we propose to leverage DM
within a compact latent space to generate concise guidance priors and introduce
a novel solution called Reti-Diff for the IDIR task. Reti-Diff comprises two
key components: the Retinex-based latent DM (RLDM) and the Retinex-guided
transformer (RGformer). To ensure detailed reconstruction and illumination
correction, RLDM is empowered to acquire Retinex knowledge and extract
reflectance and illumination priors. These priors are subsequently utilized by
RGformer to guide the decomposition of image features into their respective
reflectance and illumination components. Following this, RGformer further
enhances and consolidates the decomposed features, resulting in the production
of refined images with consistent content and robustness to handle complex
degradation scenarios. Extensive experiments show that Reti-Diff outperforms
existing methods on three IDIR tasks, as well as downstream applications. Code
will be available at https://github.com/ChunmingHe/Reti-Diff.
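The Retinex prior underlying RLDM is the classic decomposition of an image into reflectance and illumination, I = R ⊙ L. A minimal numpy sketch of that decomposition is below, using the common max-over-channels heuristic for the illumination map; this is an illustration of the prior, not the paper's learned RLDM.

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Classic Retinex split I = R * L.

    The illumination map L is approximated by the per-pixel maximum over
    color channels (a common heuristic, not the learned prior of Reti-Diff);
    reflectance is the residual R = I / L.
    """
    # Illumination: channel-wise max, kept as (H, W, 1) for broadcasting.
    illumination = image.max(axis=-1, keepdims=True)
    reflectance = image / (illumination + eps)
    return reflectance, illumination

# Toy low-light image in [0, 1], shape (H, W, 3).
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, size=(4, 4, 3))
R, L = retinex_decompose(img)
# The product R * L reconstructs the input up to eps.
print(np.allclose(R * L, img, atol=1e-4))  # True
```

Because L upper-bounds every channel at each pixel, the reflectance R stays in [0, 1], which is what makes it a convenient illumination-invariant prior.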
Related papers
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- BlindDiff: Empowering Degradation Modelling in Diffusion Models for Blind Image Super-Resolution [52.47005445345593]
BlindDiff is a DM-based blind super-resolution method that tackles blind degradation settings in single-image super-resolution (SISR).
BlindDiff seamlessly integrates the MAP-based optimization into DMs.
Experiments on both synthetic and real-world datasets show that BlindDiff achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-03-15T11:21:34Z)
- HIR-Diff: Unsupervised Hyperspectral Image Restoration Via Improved Diffusion Models [38.74983301496911]
Hyperspectral image (HSI) restoration aims at recovering clean images from degraded observations.
Existing model-based methods have limitations in accurately modeling the complex image characteristics.
This paper proposes an unsupervised HSI restoration framework with a pre-trained diffusion model (HIR-Diff).
arXiv Detail & Related papers (2024-02-24T17:15:05Z)
- Latent Diffusion Prior Enhanced Deep Unfolding for Snapshot Spectral Compressive Imaging [17.511583657111792]
Snapshot spectral imaging reconstruction aims to reconstruct three-dimensional spatial-spectral images from a single-shot two-dimensional compressed measurement.
We introduce a generative model, namely the latent diffusion model (LDM), to generate a degradation-free prior for the deep unfolding method.
arXiv Detail & Related papers (2023-11-24T04:55:20Z)
- DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [70.46245698746874]
We present DiffBIR, a general restoration pipeline that could handle different blind image restoration tasks.
DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal: removing image-independent content; 2) information regeneration: generating the lost image content.
In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results.
For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details.
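The two-stage decoupling described above amounts to plain function composition. The sketch below uses toy stand-ins (hypothetical functions, assumed names) for the paper's learned modules, purely to show the data flow:

```python
import numpy as np

def degradation_removal(lq):
    """Stage 1: remove image-independent degradations. Value clipping is a
    toy stand-in for DiffBIR's learned restoration module."""
    return np.clip(lq, 0.0, 1.0)

def information_regeneration(x, refine):
    """Stage 2: regenerate lost content with a generative prior. Here
    `refine` is a plug-in callable standing in for the latent-diffusion
    sampler (IRControlNet in the paper)."""
    return refine(x)

# Low-quality input with out-of-range values mimicking degradation.
lq = np.array([[1.2, -0.1], [0.5, 0.8]])
hq = information_regeneration(degradation_removal(lq), refine=lambda x: x)
print(hq.tolist())  # [[1.0, 0.0], [0.5, 0.8]]
```

The design point the staging makes: stage 1 can be trained for fidelity while stage 2 is free to hallucinate realistic detail, since the two objectives no longer compete in one network.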
arXiv Detail & Related papers (2023-08-29T07:11:52Z)
- Diff-Retinex: Rethinking Low-light Image Enhancement with A Generative Diffusion Model [28.762205397922294]
We propose a physically explainable and generative diffusion model for low-light image enhancement, termed as Diff-Retinex.
In the Retinex decomposition, we leverage the attention mechanism of the Transformer to decompose the image into illumination and reflectance maps.
Then, we design multi-path generative diffusion networks to reconstruct the normal-light Retinex probability distribution and solve the various degradations in these components respectively.
arXiv Detail & Related papers (2023-08-25T04:03:41Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.