Generalised Diffusion Probabilistic Scale-Spaces
- URL: http://arxiv.org/abs/2309.08511v2
- Date: Thu, 6 Jun 2024 14:56:47 GMT
- Title: Generalised Diffusion Probabilistic Scale-Spaces
- Authors: Pascal Peter
- Abstract summary: Diffusion probabilistic models excel at sampling new images from learned distributions.
We propose a scale-space theory for diffusion probabilistic models.
We show conceptual and empirical connections to diffusion and osmosis filters.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion probabilistic models excel at sampling new images from learned distributions. Originally motivated by drift-diffusion concepts from physics, they apply image perturbations such as noise and blur in a forward process that results in a tractable probability distribution. A corresponding learned reverse process generates images and can be conditioned on side information, which leads to a wide variety of practical applications. Most of the research focus currently lies on practice-oriented extensions. In contrast, the theoretical background remains largely unexplored, in particular the relations to drift-diffusion. In order to shed light on these connections to classical image filtering, we propose a generalised scale-space theory for diffusion probabilistic models. Moreover, we show conceptual and empirical connections to diffusion and osmosis filters.
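The forward process the abstract refers to can be illustrated with a minimal variance-preserving (DDPM-style) sketch, where Gaussian noise is injected under a fixed schedule so that the distribution at any step is tractable in closed form. The schedule, image shape, and function name below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style process:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]        # product of (1 - beta_s) up to step t
    eps = rng.standard_normal(x0.shape)      # Gaussian perturbation
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))             # toy "image"
betas = np.linspace(1e-4, 0.02, 1000)        # linear noise schedule (assumed)
xT = forward_diffusion(x0, 999, betas, rng)  # near-pure Gaussian noise at t = T
```

Because the cumulative product of (1 - beta) decays toward zero, the signal content of x0 is washed out and x_T approaches a standard Gaussian, which is what makes the terminal distribution tractable for sampling.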
Related papers
- Can Diffusion Models Disentangle? A Theoretical Perspective [52.360881354319986]
This paper presents a novel theoretical framework for understanding how diffusion models can learn disentangled representations.
We establish identifiability conditions for general disentangled latent variable models, analyze training dynamics, and derive sample complexity bounds for disentangled latent subspace models.
arXiv Detail & Related papers (2025-03-31T20:46:18Z)
- Zigzag Diffusion Sampling: Diffusion Models Can Self-Improve via Self-Reflection [28.82743020243849]
Existing text-to-image diffusion models often fail to maintain high image quality and high prompt-image alignment for challenging prompts.
We propose diffusion self-reflection that alternately performs denoising and inversion.
We derive Zigzag Diffusion Sampling (Z-Sampling), a novel self-reflection-based diffusion sampling method.
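The alternating scheme described above can be sketched schematically. The `denoise` and `invert` callables below are hypothetical placeholders for a pretrained model's reverse (denoising) and inversion (e.g. DDIM-style re-noising) steps; this is a sketch of the zigzag idea, not the authors' implementation:

```python
import numpy as np

def zigzag_sample(x, denoise, invert, steps):
    """Zigzag loop: at each timestep, take a denoising step, partially
    re-invert the result back to the noisier level, then denoise again.
    Revisiting the state lets the sampler 'self-reflect' before committing."""
    for t in reversed(range(steps)):
        x = denoise(x, t)   # reverse step: t -> t-1
        x = invert(x, t)    # inversion step: t-1 -> t
        x = denoise(x, t)   # denoise again with the reflected state
    return x

# Toy stand-ins so the loop runs end to end (not real model steps).
toy_denoise = lambda x, t: x * 0.9
toy_invert = lambda x, t: x * 1.05
out = zigzag_sample(np.ones(4), toy_denoise, toy_invert, 3)
```

Each zigzag iteration thus trades extra model evaluations for a chance to correct the trajectory, which is how the method improves prompt-image alignment on hard prompts.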
arXiv Detail & Related papers (2024-12-14T16:42:41Z)
- Merging and Splitting Diffusion Paths for Semantically Coherent Panoramas [33.334956022229846]
We propose the Merge-Attend-Diffuse operator, which can be plugged into different types of pretrained diffusion models used in a joint diffusion setting.
Specifically, we merge the diffusion paths, reprogramming self- and cross-attention to operate on the aggregated latent space.
Our method maintains compatibility with the input prompt and visual quality of the generated images while increasing their semantic coherence.
arXiv Detail & Related papers (2024-08-28T09:22:32Z)
- Toward a Diffusion-Based Generalist for Dense Vision Tasks [141.03236279493686]
Recent works have shown that the image itself can be used as a natural interface for general-purpose visual perception.
We propose to perform diffusion in pixel space and provide a recipe for finetuning pre-trained text-to-image diffusion models for dense vision tasks.
In experiments, we evaluate our method on four different types of tasks and show competitive performance to the other vision generalists.
arXiv Detail & Related papers (2024-06-29T17:57:22Z)
- Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement [58.9768112704998]
Disentangled representation learning strives to extract the intrinsic factors within observed data.
We introduce a new perspective and framework, demonstrating that diffusion models with cross-attention can serve as a powerful inductive bias.
This is the first work to reveal the potent disentanglement capability of diffusion models with cross-attention, requiring no complex designs.
arXiv Detail & Related papers (2024-02-15T05:07:54Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- Generalised Scale-Space Properties for Probabilistic Diffusion Models [1.52292571922932]
We show that probabilistic diffusion models fulfil generalised scale-space properties on evolving probability distributions.
We discuss similarities and differences between interpretations of the physical core concept of drift-diffusion in the deep learning and model-based world.
arXiv Detail & Related papers (2023-03-14T13:41:28Z)
- Diffusion Models Generate Images Like Painters: an Analytical Theory of Outline First, Details Later [1.8416014644193066]
We observe that the reverse diffusion process underlying image generation has the following property: individual trajectories tend to be low-dimensional and resemble 2D rotations.
We find that this solution accurately describes the initial phase of image generation for pretrained models.
arXiv Detail & Related papers (2023-03-04T20:08:57Z)
- Bi-Noising Diffusion: Towards Conditional Diffusion Models with Generative Restoration Priors [64.24948495708337]
We introduce a new method that brings predicted samples to the training data manifold using a pretrained unconditional diffusion model.
We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks.
arXiv Detail & Related papers (2022-12-14T17:26:35Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - Unifying Diffusion Models' Latent Space, with Applications to
CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.