Learning Integrodifferential Models for Image Denoising
- URL: http://arxiv.org/abs/2010.10888v2
- Date: Mon, 17 May 2021 09:44:13 GMT
- Title: Learning Integrodifferential Models for Image Denoising
- Authors: Tobias Alt, Joachim Weickert
- Abstract summary: We introduce an integrodifferential extension of the edge-enhancing anisotropic diffusion model for image denoising.
By accumulating weighted structural information on multiple scales, our model is the first to create anisotropy through multiscale integration.
- Score: 14.404339094377319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an integrodifferential extension of the edge-enhancing
anisotropic diffusion model for image denoising. By accumulating weighted
structural information on multiple scales, our model is the first to create
anisotropy through multiscale integration. It follows the philosophy of
combining the advantages of model-based and data-driven approaches within
compact, insightful, and mathematically well-founded models with improved
performance. We analyse the trained scale-adaptive weighting and contrast
parameters and model them explicitly by smooth functions. This
leads to a transparent model with only three parameters, without significantly
decreasing its denoising performance. Experiments demonstrate that it
outperforms its diffusion-based predecessors. We show that both multiscale
information and anisotropy are crucial for its success.
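The core idea of the abstract — accumulating weighted structural information over multiple scales and steering an anisotropic diffusion with it — can be sketched in NumPy. This is an illustrative approximation only, not the authors' trained integrodifferential model: the choice of scales, weights, diffusivity function, and contrast parameter below are arbitrary assumptions, and the explicit finite-difference scheme is a simple stand-in for whatever discretisation the paper uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_structure_tensor(u, scales, weights):
    """Accumulate weighted structure tensors over several Gaussian scales.

    Sketches the 'anisotropy through multiscale integration' idea:
    instead of one smoothing scale, gradient information from all
    scales is integrated with scalar weights.
    """
    J = np.zeros(u.shape + (2, 2))
    for s, w in zip(scales, weights):
        us = gaussian_filter(u, s)            # pre-smoothed image at scale s
        ux = np.gradient(us, axis=1)
        uy = np.gradient(us, axis=0)
        J[..., 0, 0] += w * ux * ux
        J[..., 0, 1] += w * ux * uy
        J[..., 1, 0] += w * ux * uy
        J[..., 1, 1] += w * uy * uy
    return J

def diffusion_tensor(J, lam):
    """Edge-enhancing tensor: damp diffusion across edges, keep it along them."""
    evals, evecs = np.linalg.eigh(J)          # eigenvalues in ascending order
    mu1 = evals[..., 1]                       # dominant (across-edge) eigenvalue
    g = 1.0 / (1.0 + mu1 / lam**2)            # Perona-Malik-type diffusivity (assumed form)
    v1 = evecs[..., :, 1]                     # across-edge eigenvector
    v2 = evecs[..., :, 0]                     # along-edge eigenvector
    # D = g * v1 v1^T + 1 * v2 v2^T
    return (g[..., None, None] * v1[..., :, None] * v1[..., None, :]
            + v2[..., :, None] * v2[..., None, :])

def denoise(u, steps=20, tau=0.1,
            scales=(0.5, 1.0, 2.0), weights=(0.5, 0.3, 0.2), lam=0.1):
    """Explicit anisotropic diffusion: u <- u + tau * div(D grad u)."""
    u = u.astype(float).copy()
    for _ in range(steps):
        D = diffusion_tensor(multiscale_structure_tensor(u, scales, weights), lam)
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        jx = D[..., 0, 0] * ux + D[..., 0, 1] * uy
        jy = D[..., 1, 0] * ux + D[..., 1, 1] * uy
        u += tau * (np.gradient(jx, axis=1) + np.gradient(jy, axis=0))
    return u

# Toy demo: a noisy vertical step edge
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
out = denoise(noisy)
```

On the step-edge example, diffusion along the edge removes noise while the damped across-edge diffusivity keeps the step sharp; the paper's actual model additionally learns (and then replaces with smooth functions) the scale weights and contrast parameter that are hard-coded here.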
Related papers
- Oscillation Inversion: Understand the structure of Large Flow Model through the Lens of Inversion Method [60.88467353578118]
We show that a fixed-point-inspired iterative approach to invert real-world images does not achieve convergence, instead oscillating between distinct clusters.
We introduce a simple and fast distribution transfer technique that facilitates image enhancement, stroke-based recoloring, as well as visual prompt-guided image editing.
arXiv Detail & Related papers (2024-11-17T17:45:37Z)
- Rethinking Weight-Averaged Model-merging [15.2881959315021]
Weight-averaged model-merging has emerged as a powerful approach in deep learning, capable of enhancing model performance without fine-tuning or retraining.
We investigate this technique from three novel perspectives to provide deeper insights into how and why weight-averaged model-merging works.
Our findings shed light on the "black box" of weight-averaged model-merging, offering valuable insights and practical recommendations.
arXiv Detail & Related papers (2024-11-14T08:02:14Z) - The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Memory-Efficient Fine-Tuning for Quantized Diffusion Model [12.875837358532422]
We introduce TuneQDM, a memory-efficient fine-tuning method for quantized diffusion models.
Our method consistently outperforms the baseline in both single-/multi-subject generations.
arXiv Detail & Related papers (2024-01-09T03:42:08Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework works with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.