Difflare: Removing Image Lens Flare with Latent Diffusion Model
- URL: http://arxiv.org/abs/2407.14746v1
- Date: Sat, 20 Jul 2024 04:36:39 GMT
- Title: Difflare: Removing Image Lens Flare with Latent Diffusion Model
- Authors: Tianwen Zhou, Qihao Duan, Zitong Yu
- Abstract summary: We introduce Difflare, a novel approach designed for lens flare removal.
To leverage the generative prior learned by Pre-Trained Diffusion Models (PTDM), we introduce a trainable Structural Guidance Injection Module (SGIM)
To address information loss resulting from latent compression, we introduce an Adaptive Feature Fusion Module (AFFM)
- Score: 19.022105366814078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recovery of high-quality images from images corrupted by lens flare presents a significant challenge in low-level vision. Contemporary deep learning methods frequently entail training a lens flare removing model from scratch. However, these methods, despite their noticeable success, fail to utilize the generative prior learned by pre-trained models, resulting in unsatisfactory performance in lens flare removal. Furthermore, there are only few works considering the physical priors relevant to flare removal. To address these issues, we introduce Difflare, a novel approach designed for lens flare removal. To leverage the generative prior learned by Pre-Trained Diffusion Models (PTDM), we introduce a trainable Structural Guidance Injection Module (SGIM) aimed at guiding the restoration process with PTDM. Towards more efficient training, we employ Difflare in the latent space. To address information loss resulting from latent compression and the stochastic sampling process of PTDM, we introduce an Adaptive Feature Fusion Module (AFFM), which incorporates the Luminance Gradient Prior (LGP) of lens flare to dynamically regulate feature extraction. Extensive experiments demonstrate that our proposed Difflare achieves state-of-the-art performance in real-world lens flare removal, restoring images corrupted by flare with improved fidelity and perceptual quality. The codes will be released soon.
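The paper's code is not yet released, so as a minimal illustrative sketch (all names and details here are assumptions, not the authors' implementation), a Luminance Gradient Prior could be realized as a normalized gradient-magnitude map of image luminance, which a fusion module might use to weight feature extraction around flare regions:

```python
import numpy as np

def luminance_gradient_prior(rgb: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of a Luminance Gradient Prior (LGP).

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns an (H, W) map in [0, 1] that is large where luminance
    changes sharply; flare regions tend to show strong gradients.
    """
    # Rec. 709 luminance weights
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    # Finite-difference gradients (np.gradient uses one-sided
    # differences at the borders, central differences elsewhere)
    gy, gx = np.gradient(luma)
    mag = np.sqrt(gx**2 + gy**2)
    # Normalize to [0, 1] so the map can gate feature fusion
    return mag / (mag.max() + 1e-8)
```

Such a map could then be broadcast against feature tensors to dynamically regulate fusion, as the abstract describes for the AFFM.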
Related papers
- Diffusion Image Prior [19.263005158979567]
We take inspiration from the Deep Image Prior (DIP)[16], since it can be used to remove artifacts without the need for an explicit degradation model.
We show that, the optimization process in DIIP first reconstructs a clean version of the image before eventually overfitting to the degraded input.
In light of this result, we propose a blind image restoration (IR) method based on early stopping, which does not require prior knowledge of the degradation model.
arXiv Detail & Related papers (2025-03-27T11:52:37Z)
- A Simple Combination of Diffusion Models for Better Quality Trade-Offs in Image Denoising
We propose an intuitive method for leveraging pretrained diffusion models.
We then introduce our proposed Linear Combination Diffusion Denoiser.
LCDD achieves state-of-the-art performance and offers controlled, well-behaved trade-offs.
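A linear combination of denoisers reduces to a convex blend of their outputs; the sketch below is illustrative only (the function name and signature are assumptions, not the LCDD implementation), with the weights exposing the distortion-perception trade-off:

```python
import numpy as np

def linear_combination_denoise(noisy, denoisers, weights):
    """Hypothetical sketch: blend the outputs of pretrained denoisers.

    denoisers: list of callables mapping a noisy image to an estimate.
    weights:   convex coefficients controlling the trade-off, e.g.
               between a distortion-optimized and a perceptually
               oriented denoiser.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-8, "weights must be convex"
    outputs = [d(noisy) for d in denoisers]
    return sum(wi * out for wi, out in zip(w, outputs))
```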
arXiv Detail & Related papers (2025-03-18T19:02:19Z)
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step.
To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration.
Our method achieves strong performance on both full and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- One-for-More: Continual Diffusion Model for Anomaly Detection [61.12622458367425]
Anomaly detection methods utilize diffusion models to generate or reconstruct normal samples when given arbitrary anomaly images.
Our study found that the diffusion model suffers from severe "faithfulness hallucination" and "catastrophic forgetting".
We propose a continual diffusion model that uses gradient projection to achieve stable continual learning.
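Gradient projection for continual learning has a standard core step, sketched here in numpy (an illustrative sketch of the general technique, not the paper's implementation): remove the component of a new task's gradient that lies in a protected subspace spanning directions important to earlier tasks:

```python
import numpy as np

def project_gradient(grad: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of gradient projection for continual learning.

    grad:  (d,) gradient for the current task.
    basis: (d, k) orthonormal columns spanning directions important to
           previously learned tasks.

    The returned gradient lies in the orthogonal complement of the
    protected subspace, so updating with it leaves the old tasks'
    important directions untouched.
    """
    return grad - basis @ (basis.T @ grad)
```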
arXiv Detail & Related papers (2025-02-27T07:47:27Z) - Disentangle Nighttime Lens Flares: Self-supervised Generation-based Lens Flare Removal [18.825840100537174]
Lens flares arise from light reflection and refraction within sensor arrays; their diverse types include glow, veiling glare, reflective flare, and so on.
Existing methods are specialized for a single flare type and overlook the simultaneous occurrence of multiple types of lens flares.
We introduce a solution named Self-supervised Generation-based Lens Flare Removal Network (SGLFR-Net), which is self-supervised without pre-training.
arXiv Detail & Related papers (2025-02-15T08:04:38Z) - Learning Diffusion Model from Noisy Measurement using Principled Expectation-Maximization Method [9.173055778539641]
We propose a principled expectation-maximization (EM) framework that iteratively learns diffusion models from noisy data with arbitrary corruption types.
Our framework employs a plug-and-play Monte Carlo method to accurately estimate clean images from noisy measurements, followed by training the diffusion model using the reconstructed images.
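The EM alternation can be illustrated on a toy scalar problem (an illustrative sketch only, not the paper's plug-and-play Monte Carlo version): learn the mean of clean data observed only through additive-noise measurements, alternating posterior-mean estimation (E-step) with refitting (M-step):

```python
import numpy as np

def em_learn_mean(y, sigma_x=1.0, sigma_n=0.5, iters=50):
    """Toy EM sketch: learn the mean of clean Gaussian data x ~ N(mu, sigma_x^2)
    from noisy measurements y = x + n, n ~ N(0, sigma_n^2)."""
    mu = 0.0  # initial model
    for _ in range(iters):
        # E-step: posterior-mean estimate of each clean sample given y
        # and the current model (the paper replaces this closed form
        # with a plug-and-play Monte Carlo estimate under a diffusion prior)
        w = sigma_x**2 / (sigma_x**2 + sigma_n**2)
        x_hat = w * y + (1 - w) * mu
        # M-step: refit the model to the reconstructed samples
        mu = float(np.mean(x_hat))
    return mu
```

The iteration contracts geometrically toward the fixed point mu = mean(y), mirroring how the full framework refits a diffusion model to its own reconstructions.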
arXiv Detail & Related papers (2024-10-15T03:54:59Z) - Data-free Distillation with Degradation-prompt Diffusion for Multi-weather Image Restoration [29.731089599252954]
We propose a novel Data-free Distillation with Degradation-prompt Diffusion framework for multi-weather Image Restoration (D4IR)
It replaces GANs with pre-trained diffusion models to avoid model collapse and incorporates a degradation-aware prompt adapter.
Our proposal achieves comparable performance to the model distilled with original training data, and is even superior to other mainstream unsupervised methods.
arXiv Detail & Related papers (2024-09-05T12:07:17Z) - Bring the Power of Diffusion Model to Defect Detection [0.0]
A denoising diffusion probabilistic model (DDPM) is pre-trained to extract features of the denoising process, which are used to construct a feature repository.
The queried latent features are reconstructed and filtered to obtain high-dimensional DDPM features.
Experiment results demonstrate that our method achieves competitive results on several industrial datasets.
arXiv Detail & Related papers (2024-08-25T14:28:49Z) - Ambient Diffusion Posterior Sampling: Solving Inverse Problems with
Diffusion Models trained on Corrupted Data [56.81246107125692]
Ambient Diffusion Posterior Sampling (A-DPS) is a generative model pre-trained on one type of corruption.
We show that A-DPS can sometimes outperform models trained on clean data for several image restoration tasks in both speed and performance.
We extend the Ambient Diffusion framework to train MRI models with access only to Fourier subsampled multi-coil MRI measurements.
arXiv Detail & Related papers (2024-03-13T17:28:20Z) - Improving Lens Flare Removal with General Purpose Pipeline and Multiple
Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and hinder downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in image signal processing pipeline.
We propose a solution that improves lens flare removal by revisiting the image signal processing (ISP) pipeline and designing a more reliable light source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z) - Diffusion Models for Image Restoration and Enhancement -- A
Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z) - Learning A Coarse-to-Fine Diffusion Transformer for Image Restoration [39.071637725773314]
We propose a coarse-to-fine diffusion Transformer (C2F-DFT) for image restoration.
C2F-DFT contains diffusion self-attention (DFSA) and diffusion feed-forward network (DFN)
In the coarse training stage, our C2F-DFT estimates noises and then generates the final clean image by a sampling algorithm.
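The "sampling algorithm" referred to here is the standard DDPM ancestral step, which maps predicted noise back toward a clean image; the sketch below shows that step in isolation (schedule values are placeholders, and this is the generic formula, not C2F-DFT's code):

```python
import numpy as np

def ddpm_reverse_step(x_t, eps_pred, alpha_t, alpha_bar_t, sigma_t, rng):
    """One standard DDPM ancestral sampling step.

    x_t:         current noisy sample at step t
    eps_pred:    noise predicted by the network at step t
    alpha_t:     1 - beta_t from the noise schedule
    alpha_bar_t: cumulative product of alphas up to step t
    sigma_t:     variance of the added noise (0 at the final step)
    """
    # Posterior mean: remove the predicted noise, rescale to step t-1
    mean = (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_pred) \
           / np.sqrt(alpha_t)
    return mean + sigma_t * rng.standard_normal(x_t.shape)
```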
arXiv Detail & Related papers (2023-08-17T01:59:59Z) - LLDiffusion: Learning Degradation Representations in Diffusion Models
for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z) - ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow
Removal [74.86415440438051]
We propose a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal.
Our model achieves a significant PSNR improvement, from 31.69 dB to 34.73 dB, on the SRD dataset.
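For reference, the PSNR metric quoted above is simply 10 * log10(MAX^2 / MSE); a minimal implementation:

```python
import numpy as np

def psnr(ref, est, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    an estimate, for images with values in [0, max_val]."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)
```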
arXiv Detail & Related papers (2022-12-09T07:48:30Z) - AT-DDPM: Restoring Faces degraded by Atmospheric Turbulence using
Denoising Diffusion Probabilistic Models [64.24948495708337]
Atmospheric turbulence causes significant degradation to image quality by introducing blur and geometric distortion.
Various deep learning-based single image atmospheric turbulence mitigation methods, including CNN-based and GAN inversion-based, have been proposed.
Denoising Diffusion Probabilistic Models (DDPMs) have recently gained some traction because of their stable training process and their ability to generate high quality images.
arXiv Detail & Related papers (2022-08-24T03:13:04Z) - How to Train Neural Networks for Flare Removal [45.51943926089249]
We are the first to train neural networks to remove lens flare.
Our data synthesis approach is critical for accurate flare removal.
Models trained with our technique generalize well to real lens flares across different scenes, lighting conditions, and cameras.
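Flare data synthesis commonly exploits the fact that scattered flare light is approximately additive in linear intensity, so a training pair can be built by compositing a flare pattern onto a clean scene; the sketch below is illustrative (the `gain` parameter and function name are assumptions, not this paper's pipeline):

```python
import numpy as np

def synthesize_flare_pair(scene, flare, gain=1.0):
    """Hypothetical sketch of additive flare data synthesis.

    scene: clean image, float array in [0, 1]
    flare: flare-only image (captured or rendered), same shape
    Returns (flared_input, clean_target) for supervised training.
    """
    flared = np.clip(scene + gain * flare, 0.0, 1.0)
    return flared, scene
```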
arXiv Detail & Related papers (2020-11-25T02:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.