Difflare: Removing Image Lens Flare with Latent Diffusion Model
- URL: http://arxiv.org/abs/2407.14746v1
- Date: Sat, 20 Jul 2024 04:36:39 GMT
- Title: Difflare: Removing Image Lens Flare with Latent Diffusion Model
- Authors: Tianwen Zhou, Qihao Duan, Zitong Yu
- Abstract summary: We introduce Difflare, a novel approach designed for lens flare removal.
To leverage the generative prior learned by Pre-Trained Diffusion Models (PTDM), we introduce a trainable Structural Guidance Injection Module (SGIM).
To address information loss resulting from latent compression, we introduce an Adaptive Feature Fusion Module (AFFM).
- Score: 19.022105366814078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recovery of high-quality images from images corrupted by lens flare presents a significant challenge in low-level vision. Contemporary deep learning methods frequently entail training a lens flare removal model from scratch. However, despite their noticeable success, these methods fail to utilize the generative prior learned by pre-trained models, resulting in unsatisfactory performance in lens flare removal. Furthermore, only a few works consider the physical priors relevant to flare removal. To address these issues, we introduce Difflare, a novel approach designed for lens flare removal. To leverage the generative prior learned by Pre-Trained Diffusion Models (PTDM), we introduce a trainable Structural Guidance Injection Module (SGIM) aimed at guiding the restoration process with PTDM. Towards more efficient training, we employ Difflare in the latent space. To address information loss resulting from latent compression and the stochastic sampling process of PTDM, we introduce an Adaptive Feature Fusion Module (AFFM), which incorporates the Luminance Gradient Prior (LGP) of lens flare to dynamically regulate feature extraction. Extensive experiments demonstrate that our proposed Difflare achieves state-of-the-art performance in real-world lens flare removal, restoring images corrupted by flare with improved fidelity and perceptual quality. The code will be released soon.
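No code is available yet, but the abstract names two concrete ingredients: a Luminance Gradient Prior (LGP) computed from the flare-corrupted input, and an Adaptive Feature Fusion Module (AFFM) that uses it to regulate feature extraction. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the names (`compute_lgp`, `AFFM`), the Sobel-based gradient, and the gating design are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def compute_lgp(img):
    """Hypothetical Luminance Gradient Prior: Sobel gradient magnitude of luminance.

    img: (B, 3, H, W) RGB in [0, 1]; returns a (B, 1, H, W) gradient map.
    """
    lum = 0.299 * img[:, 0:1] + 0.587 * img[:, 1:2] + 0.114 * img[:, 2:3]
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3).contiguous()
    gx = F.conv2d(lum, kx, padding=1)
    gy = F.conv2d(lum, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

class AFFM(nn.Module):
    """Assumed Adaptive Feature Fusion Module: fuses diffusion features with
    encoder features, gated by the (downsampled) LGP map."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, diff_feat, enc_feat, lgp):
        # Resize the LGP map to the feature resolution and turn it into a gate.
        lgp = F.interpolate(lgp, size=diff_feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        g = self.gate(lgp)                       # emphasizes flare-affected regions
        fused = self.fuse(torch.cat([diff_feat, g * enc_feat], dim=1))
        return fused + diff_feat                 # residual connection
```

The gate gives more weight to high-gradient (typically flare-affected) regions when mixing encoder features back into the diffusion features, which is one plausible reading of "dynamically regulate feature extraction."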
Related papers
- Disentangle Nighttime Lens Flares: Self-supervised Generation-based Lens Flare Removal [18.825840100537174]
Lens flares arise from light reflection and refraction within sensor arrays; their diverse types include glow, veiling glare, and reflective flare.
Existing methods are specialized for a single flare type and overlook the simultaneous occurrence of multiple types of lens flares.
We introduce a solution named Self-supervised Generation-based Lens Flare Removal Network (SGLFR-Net), which is self-supervised without pre-training.
arXiv Detail & Related papers (2025-02-15T08:04:38Z)
- Learning Diffusion Model from Noisy Measurement using Principled Expectation-Maximization Method [9.173055778539641]
We propose a principled expectation-maximization (EM) framework that iteratively learns diffusion models from noisy data with arbitrary corruption types.
Our framework employs a plug-and-play Monte Carlo method to accurately estimate clean images from noisy measurements, followed by training the diffusion model using the reconstructed images.
arXiv Detail & Related papers (2024-10-15T03:54:59Z)
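The entry above outlines an iterative procedure: estimate clean images from noisy measurements with a plug-and-play Monte Carlo sampler, then retrain the diffusion model on the reconstructions. A minimal sketch of such an EM-style outer loop, with the restoration and fitting routines passed in as assumed callables rather than the paper's actual code:

```python
def em_train(model, measurements, forward_op, restore_fn, fit_fn, n_rounds=5):
    """Hypothetical EM-style loop for learning a diffusion model from noisy data.

    restore_fn(model, y, forward_op) -> clean-image estimate (E-step,
        plug-and-play Monte Carlo posterior sampling under the current prior).
    fit_fn(model, images) -> diffusion model retrained on the estimates (M-step).
    """
    for _ in range(n_rounds):
        pseudo_clean = [restore_fn(model, y, forward_op) for y in measurements]
        model = fit_fn(model, pseudo_clean)
    return model
```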
- Bring the Power of Diffusion Model to Defect Detection [0.0]
A denoising diffusion probabilistic model (DDPM) is pre-trained, and features from its denoising process are extracted to build a feature repository.
The queried latent features are reconstructed and filtered to obtain high-dimensional DDPM features.
Experiment results demonstrate that our method achieves competitive results on several industrial datasets.
arXiv Detail & Related papers (2024-08-25T14:28:49Z)
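The summary above is terse, but one plausible reading is a memory bank of intermediate denoising-network activations that is later queried by nearest-neighbour distance to score defects. The sketch below illustrates that generic pattern; the feature hook and the scoring rule are assumptions, not the paper's design.

```python
import torch

class FeatureRepository:
    """Hypothetical bank of DDPM denoising features for defect scoring."""

    def __init__(self):
        self.bank = []                            # list of (N_i, D) tensors

    @torch.no_grad()
    def add(self, feat):
        # feat: (B, D, h, w) intermediate activation from the denoising U-Net,
        # obtained via some assumed feature-extraction hook.
        self.bank.append(feat.flatten(2).transpose(1, 2).reshape(-1, feat.shape[1]))

    @torch.no_grad()
    def score(self, feat, k=3):
        # Distance of each query feature to its k nearest stored features.
        bank = torch.cat(self.bank, dim=0)        # (N, D)
        q = feat.flatten(2).transpose(1, 2).reshape(-1, feat.shape[1])
        d = torch.cdist(q, bank)                  # (M, N)
        return d.topk(k, largest=False).values.mean(dim=1)   # per-location score
```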
- Ambient Diffusion Posterior Sampling: Solving Inverse Problems with Diffusion Models Trained on Corrupted Data [56.81246107125692]
Ambient Diffusion Posterior Sampling (A-DPS) is a generative model pre-trained on one type of corruption.
We show that A-DPS can sometimes outperform models trained on clean data for several image restoration tasks in both speed and performance.
We extend the Ambient Diffusion framework to train MRI models with access only to Fourier subsampled multi-coil MRI measurements.
arXiv Detail & Related papers (2024-03-13T17:28:20Z)
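The entry reports results rather than the algorithm, but the underlying idea is diffusion posterior sampling: nudge each reverse step toward agreement with the measurement. The snippet below shows a generic DPS-style guided step for illustration only, not A-DPS itself (which additionally uses a prior trained on corrupted data); all argument names are assumptions.

```python
import torch

def guided_reverse_step(x_t, t, eps_model, ddpm_step, y, A, abar_t, zeta=1.0):
    """One generic DPS-style guided reverse step (illustration only).

    eps_model(x_t, t) predicts the noise; ddpm_step(x_t, eps, t) is the unguided
    reverse update; A is a differentiable measurement operator, y the measurement;
    abar_t is the cumulative noise-schedule value at step t.
    """
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)
    # Tweedie estimate of the clean image from the current noisy sample.
    x0_hat = (x_t - (1.0 - abar_t) ** 0.5 * eps) / abar_t ** 0.5
    residual = torch.linalg.vector_norm(y - A(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]
    # Take the unguided step, then pull the sample toward data consistency.
    return ddpm_step(x_t.detach(), eps.detach(), t) - zeta * grad
```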
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and harm downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose to improve lens flare removal by revisiting the ISP and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Diffusion Models for Image Restoration and Enhancement -- A Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs that use diffusion models for both image restoration (IR) and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z)
- Learning A Coarse-to-Fine Diffusion Transformer for Image Restoration [39.071637725773314]
We propose a coarse-to-fine diffusion Transformer (C2F-DFT) for image restoration.
C2F-DFT contains diffusion self-attention (DFSA) and a diffusion feed-forward network (DFN).
In the coarse training stage, our C2F-DFT estimates noise and then generates the final clean image with a sampling algorithm.
arXiv Detail & Related papers (2023-08-17T01:59:59Z)
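The coarse-stage description above matches the standard denoising-diffusion recipe: train the network to predict the injected noise, then obtain the clean image by running a sampler. The snippet below shows that standard noise-prediction training step (the generic DDPM objective), not C2F-DFT's specific Transformer blocks or its fine-stage sampling refinement; in restoration settings the predictor is additionally conditioned on the degraded input.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, x0, alphas_bar, optimizer):
    """One standard noise-prediction step; `model` is any noise predictor
    (here it would be the diffusion Transformer) and `alphas_bar` the
    cumulative noise schedule of shape (T,)."""
    t = torch.randint(0, alphas_bar.shape[0], (x0.shape[0],), device=x0.device)
    abar = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = abar.sqrt() * x0 + (1.0 - abar).sqrt() * noise   # forward diffusion
    loss = F.mse_loss(model(x_t, t), noise)                # predict the noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```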
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow Removal [74.86415440438051]
We propose a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal.
Our model achieves a significant improvement in PSNR, increasing from 31.69 dB to 34.73 dB on the SRD dataset.
arXiv Detail & Related papers (2022-12-09T07:48:30Z)
- AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using Denoising Diffusion Probabilistic Models [64.24948495708337]
Atmospheric turbulence causes significant degradation to image quality by introducing blur and geometric distortion.
Various deep learning-based single-image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based approaches, have been proposed.
Denoising Diffusion Probabilistic Models (DDPMs) have recently gained some traction because of their stable training process and their ability to generate high quality images.
arXiv Detail & Related papers (2022-08-24T03:13:04Z)
- How to Train Neural Networks for Flare Removal [45.51943926089249]
We train neural networks to remove lens flare for the first time.
Our data synthesis approach is critical for accurate flare removal.
Models trained with our technique generalize well to real lens flares across different scenes, lighting conditions, and cameras.
arXiv Detail & Related papers (2020-11-25T02:23:50Z)
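The last entry stresses that data synthesis drives flare-removal training: flare-only captures are composited onto flare-free scenes so that paired data exist. The snippet below is a heavily simplified, assumed version of additive compositing in approximately linear intensity; the actual pipeline in that paper also models noise, color, and geometric augmentations.

```python
import numpy as np

def composite_flare(scene_srgb, flare_srgb, gamma=2.2):
    """Hypothetical paired-data synthesis: add a flare capture to a clean scene.

    scene_srgb, flare_srgb: float arrays in [0, 1] with the same shape (H, W, 3).
    Returns (input_with_flare, target_scene), both gamma-encoded.
    """
    # Undo gamma to work in approximately linear intensity, where flare is additive.
    scene_lin = np.power(scene_srgb, gamma)
    flare_lin = np.power(flare_srgb, gamma)
    combined_lin = np.clip(scene_lin + flare_lin, 0.0, 1.0)
    # Back to a gamma-encoded image for the network input.
    flared = np.power(combined_lin, 1.0 / gamma)
    return flared, scene_srgb
```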