Learning Integrodifferential Models for Image Denoising
- URL: http://arxiv.org/abs/2010.10888v2
- Date: Mon, 17 May 2021 09:44:13 GMT
- Title: Learning Integrodifferential Models for Image Denoising
- Authors: Tobias Alt, Joachim Weickert
- Abstract summary: We introduce an integrodifferential extension of the edge-enhancing anisotropic diffusion model for image denoising.
By accumulating weighted structural information on multiple scales, our model is the first to create anisotropy through multiscale integration.
- Score: 14.404339094377319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an integrodifferential extension of the edge-enhancing
anisotropic diffusion model for image denoising. By accumulating weighted
structural information on multiple scales, our model is the first to create
anisotropy through multiscale integration. It follows the philosophy of
combining the advantages of model-based and data-driven approaches within
compact, insightful, and mathematically well-founded models with improved
performance. We explore trained results of scale-adaptive weighting and
contrast parameters to obtain an explicit modelling by smooth functions. This
leads to a transparent model with only three parameters, without significantly
decreasing its denoising performance. Experiments demonstrate that it
outperforms its diffusion-based predecessors. We show that both multiscale
information and anisotropy are crucial for its success.
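The abstract's central mechanism — accumulating weighted structure tensors over several Gaussian scales and using the result to steer edge-enhancing anisotropic diffusion — can be sketched roughly as follows. This is a minimal NumPy/SciPy illustration only: the scales, weights, diffusivity function, and all function names here are assumptions for exposition, not the paper's trained parameters or code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_structure_tensor(u, scales, weights, rho=2.0):
    """Accumulate weighted structure tensors of image u over multiple
    Gaussian scales (illustrative sketch of multiscale integration)."""
    J11 = np.zeros_like(u); J12 = np.zeros_like(u); J22 = np.zeros_like(u)
    for sigma, w in zip(scales, weights):
        us = gaussian_filter(u, sigma)            # pre-smooth at this scale
        ux, uy = np.gradient(us)
        J11 += w * gaussian_filter(ux * ux, rho)  # integrate over a neighbourhood
        J12 += w * gaussian_filter(ux * uy, rho)
        J22 += w * gaussian_filter(uy * uy, rho)
    return J11, J12, J22

def eed_step(u, scales=(0.5, 1.0, 2.0), weights=(0.5, 0.3, 0.2),
             lam=0.1, tau=0.1):
    """One explicit edge-enhancing anisotropic diffusion step steered by
    the multiscale structure tensor. Parameter values are assumptions."""
    J11, J12, J22 = multiscale_structure_tensor(u, scales, weights)
    # Pixelwise eigen-decomposition of the 2x2 structure tensor.
    tr = J11 + J22
    disc = np.sqrt((J11 - J22) ** 2 + 4 * J12 ** 2)
    mu1 = 0.5 * (tr + disc)               # dominant eigenvalue (edge strength)
    v1x = 2 * J12                          # eigenvector across the edge
    v1y = J22 - J11 + disc
    norm = np.sqrt(v1x ** 2 + v1y ** 2) + 1e-12
    v1x, v1y = v1x / norm, v1y / norm
    # Reduced diffusivity across edges, full diffusion along them.
    g = 1.0 / (1.0 + mu1 / lam ** 2)
    D11 = g * v1x ** 2 + v1y ** 2
    D12 = (g - 1.0) * v1x * v1y
    D22 = g * v1y ** 2 + v1x ** 2
    # Divergence-form update: u_{k+1} = u_k + tau * div(D grad u_k).
    ux, uy = np.gradient(u)
    jx = D11 * ux + D12 * uy
    jy = D12 * ux + D22 * uy
    div = np.gradient(jx, axis=0) + np.gradient(jy, axis=1)
    return u + tau * div
```

In this sketch, anisotropy arises exactly as the abstract describes: no single scale fixes the smoothing direction; the eigenvectors of the scale-integrated tensor do.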
Related papers
- Diffusion Models Trained with Large Data Are Transferable Visual Models [49.84679952948808]
We show that it is possible to achieve remarkable transferable performance on fundamental vision perception tasks using a moderate amount of target data.
Results showcase the remarkable transferability of the backbone of diffusion models across diverse tasks and real-world datasets.
arXiv Detail & Related papers (2024-03-10T04:23:24Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Memory-Efficient Fine-Tuning for Quantized Diffusion Model [12.875837358532422]
We introduce TuneQDM, a memory-efficient fine-tuning method for quantized diffusion models.
Our method consistently outperforms the baseline in both single-/multi-subject generations.
arXiv Detail & Related papers (2024-01-09T03:42:08Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Directional diffusion models for graph representation learning [9.457273750874357]
We propose a new class of models called directional diffusion models.
These models incorporate data-dependent, anisotropic, and directional noises in the forward diffusion process.
We conduct extensive experiments on 12 publicly available datasets, focusing on two distinct graph representation learning tasks.
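The idea summarized here — data-dependent, anisotropic, directional noise in the forward diffusion process — could be sketched as drawing noise whose covariance is aligned with the data's principal directions. This is a hedged illustration under assumed conventions; the function name, the `anisotropy` parameter, and the PCA-based direction choice are not taken from the paper.

```python
import numpy as np

def anisotropic_forward_step(x, batch, beta, anisotropy=0.1, rng=None):
    """One forward-diffusion step with data-dependent anisotropic noise
    (illustrative sketch; names and parameterization are assumptions)."""
    rng = rng if rng is not None else np.random.default_rng()
    X = batch - batch.mean(axis=0)
    # Principal directions of the data batch orient the noise.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Dampen noise along dominant data directions, keep it elsewhere.
    scale = np.where(s > s.mean(), anisotropy, 1.0)
    # Isotropic sample reshaped to covariance Vt.T @ diag(scale**2) @ Vt.
    eps = rng.standard_normal(x.shape) @ (Vt.T * scale) @ Vt
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * eps
```

The contrast with a standard forward process is that here the injected noise is not rotation-invariant: it is shaped by the batch, so structure along dominant directions degrades more slowly.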
arXiv Detail & Related papers (2023-06-22T21:27:48Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [55.28436972267793]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- A Set-Theoretic Study of the Relationships of Image Models and Priors for Restoration Problems [34.956580494340166]
We study how effective each image model is for image restoration.
We compare denoising results, which are consistent with our analysis.
On top of the model-based methods, we quantitatively demonstrate the image properties that are implicitly exploited by deep learning methods.
arXiv Detail & Related papers (2020-03-29T09:33:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.