Analyzing Diffusion as Serial Reproduction
- URL: http://arxiv.org/abs/2209.14821v1
- Date: Thu, 29 Sep 2022 14:35:28 GMT
- Title: Analyzing Diffusion as Serial Reproduction
- Authors: Raja Marjieh, Ilia Sucholutsky, Thomas A. Langlois, Nori Jacoby,
Thomas L. Griffiths
- Abstract summary: Diffusion models learn to synthesize samples by inverting a diffusion process that gradually maps data into noise.
Our work highlights how classic paradigms in cognitive science can shed light on state-of-the-art machine learning problems.
- Score: 12.389541192789167
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Diffusion models are a class of generative models that learn to synthesize
samples by inverting a diffusion process that gradually maps data into noise.
While these models have enjoyed great success recently, a full theoretical
understanding of their observed properties is still lacking, in particular,
their weak sensitivity to the choice of noise family and the role of adequate
scheduling of noise levels for good synthesis. By identifying a correspondence
between diffusion models and a well-known paradigm in cognitive science known
as serial reproduction, whereby human agents iteratively observe and reproduce
stimuli from memory, we show how the aforementioned properties of diffusion
models can be explained as a natural consequence of this correspondence. We
then complement our theoretical analysis with simulations that exhibit these
key features. Our work highlights how classic paradigms in cognitive science
can shed light on state-of-the-art machine learning problems.
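The serial-reproduction correspondence described above can be illustrated with a toy simulation. The following is a minimal sketch that is not taken from the paper: the 1-D Gaussian prior over stimuli, the Gaussian perceptual-noise model, and all names are assumptions chosen for illustration. Iterating observe-and-reproduce steps yields a Markov chain whose samples converge to the prior, much as reverse diffusion carries noise back toward the data distribution.
```python
# Minimal sketch of serial reproduction as a Markov chain (illustrative only).
# Assumed setup: a 1-D Gaussian prior over stimuli N(mu, tau^2) and Gaussian
# perceptual noise N(0, sigma^2). Each simulated agent observes a noisy
# stimulus and reproduces it by sampling from the Bayesian posterior.
import numpy as np

rng = np.random.default_rng(0)

mu, tau = 0.0, 1.0   # prior over stimuli (assumed)
sigma = 0.5          # perceptual/observation noise (assumed)

def reproduce(x):
    """One serial-reproduction step: observe x with noise, sample a reconstruction."""
    y = x + sigma * rng.normal()                         # noisy observation
    post_var = 1.0 / (1.0 / tau**2 + 1.0 / sigma**2)     # Gaussian posterior variance
    post_mean = post_var * (mu / tau**2 + y / sigma**2)  # Gaussian posterior mean
    return post_mean + np.sqrt(post_var) * rng.normal()

x, samples = 10.0, []  # start the chain far from the prior
for _ in range(5000):
    x = reproduce(x)
    samples.append(x)

# After burn-in, the chain's samples match the prior N(0, 1) regardless of
# where the chain started.
print(np.mean(samples[1000:]), np.std(samples[1000:]))
```
Under the correspondence drawn in the abstract, each reverse-diffusion step plays a role analogous to one such reproduce call.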
Related papers
- Can Diffusion Models Disentangle? A Theoretical Perspective [52.360881354319986]
This paper presents a novel theoretical framework for understanding how diffusion models can learn disentangled representations.
We establish identifiability conditions for general disentangled latent variable models, analyze training dynamics, and derive sample complexity bounds for disentangled latent subspace models.
arXiv Detail & Related papers (2025-03-31T20:46:18Z)
- How Diffusion Models Learn to Factorize and Compose [14.161975556325796]
Diffusion models are capable of generating photo-realistic images that combine elements which likely do not appear together in the training set.
We investigate whether and when diffusion models learn semantically meaningful and factorized representations of composable features.
arXiv Detail & Related papers (2024-08-23T17:59:03Z)
- An Overview of Diffusion Models: Applications, Guided Generation, Statistical Rates and Optimization [59.63880337156392]
Diffusion models have achieved tremendous success in computer vision, audio, reinforcement learning, and computational biology.
Despite the significant empirical success, theory of diffusion models is very limited.
This paper provides a well-rounded theoretical exposition to stimulate forward-looking theories and methods for diffusion models.
arXiv Detail & Related papers (2024-04-11T14:07:25Z)
- A Phase Transition in Diffusion Models Reveals the Hierarchical Nature of Data [55.748186000425996]
Recent advancements show that diffusion models can generate high-quality images.
We study this phenomenon in a hierarchical generative model of data.
Our analysis characterises the relationship between time and scale in diffusion models.
arXiv Detail & Related papers (2024-02-26T19:52:33Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
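The summary only names contrastive learning as the tool for reducing overlap between per-class distributions of synthetic images; a generic sketch of such a class-separating contrastive loss might look like the following (this is not the paper's actual objective, and the function name, temperature, and embedding inputs are illustrative assumptions):
```python
# Generic, illustrative sketch (not the paper's objective): a supervised
# contrastive-style loss that pushes embeddings of synthetic images from
# different classes apart, reducing overlap between per-class distributions.
import torch
import torch.nn.functional as F

def class_separation_loss(features: torch.Tensor, labels: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """features: (N, D) embeddings of synthetic images; labels: (N,) class ids."""
    z = F.normalize(features, dim=1)                     # unit-norm embeddings
    sim = z @ z.t() / temperature                        # pairwise similarities
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))   # (N, N) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos = same & ~eye                                    # positives: same class, not self
    logits = sim.masked_fill(eye, float('-inf'))         # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Minimizing the negative average log-probability of same-class pairs pulls
    # a class's samples together and pushes other classes' samples away.
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos).sum(dim=1) / pos_counts
    return loss.mean()
```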
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- On Memorization in Diffusion Models [46.656797890144105]
We show that memorization behaviors tend to occur on smaller-sized datasets.
We quantify the impact of the influential factors on these memorization behaviors in terms of effective model memorization (EMM).
Our study holds practical significance for diffusion model users and offers clues to theoretical research in deep generative models.
arXiv Detail & Related papers (2023-10-04T09:04:20Z)
- Directional diffusion models for graph representation learning [9.457273750874357]
We propose a new class of models called directional diffusion models.
These models incorporate data-dependent, anisotropic, and directional noises in the forward diffusion process.
We conduct extensive experiments on 12 publicly available datasets, focusing on two distinct graph representation learning tasks.
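The forward process these models modify can be sketched generically; the following is an illustrative guess at what a data-dependent, anisotropic noising step could look like (the batch-covariance construction and all names are assumptions, not the paper's construction):
```python
# Illustrative sketch of an anisotropic forward-diffusion step: instead of
# isotropic Gaussian noise, noise is drawn with a data-dependent covariance
# (here, the empirical feature covariance of the batch). This is an assumed
# construction for illustration only.
import numpy as np

def anisotropic_forward_step(x: np.ndarray, beta: float, rng) -> np.ndarray:
    """x: (N, D) batch of feature vectors; beta: noise level in (0, 1)."""
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # data-dependent covariance
    chol = np.linalg.cholesky(cov)                              # covariance square root
    eps = rng.standard_normal(x.shape) @ chol.T                 # correlated, directional noise
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((128, 16))       # toy features (e.g., node embeddings)
xt = x0
for beta in np.linspace(1e-3, 0.05, 50):  # simple increasing noise schedule (assumed)
    xt = anisotropic_forward_step(xt, beta, rng)
```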
arXiv Detail & Related papers (2023-06-22T21:27:48Z)
- Diffusion Models in Vision: A Survey [80.82832715884597]
A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage.
Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens.
arXiv Detail & Related papers (2022-09-10T22:00:30Z)
- How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
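The role of a large diffusion time T can be made concrete with a generic variance-preserving forward process (a sketch under an assumed linear beta schedule, not the paper's setup): the marginal q(x_T | x_0) approaches the standard-normal prior only as T grows.
```python
# Generic sketch (not the paper's setup): for a variance-preserving forward
# process, q(x_T | x_0) = N(sqrt(abar_T) * x_0, 1 - abar_T). The KL divergence
# to the standard-normal prior shrinks as T grows and abar_T -> 0, which is
# why common practice picks a large T.
import numpy as np

def kl_to_std_normal(mean: float, var: float) -> float:
    """KL( N(mean, var) || N(0, 1) ) in nats."""
    return 0.5 * (var + mean**2 - 1.0 - np.log(var))

x0 = 2.0  # a fixed data point
for T in (10, 100, 1000):
    betas = np.linspace(1e-4, 0.02, T)  # simple linear schedule (an assumption)
    abar_T = np.prod(1.0 - betas)       # cumulative product of (1 - beta_t)
    mean, var = np.sqrt(abar_T) * x0, 1.0 - abar_T
    print(T, kl_to_std_normal(mean, var))
```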
arXiv Detail & Related papers (2022-06-10T15:09:46Z)
- Conditional Diffusion Probabilistic Model for Speech Enhancement [101.4893074984667]
We propose a novel speech enhancement algorithm that incorporates characteristics of the observed noisy speech signal into the diffusion and reverse processes.
In our experiments, we demonstrate strong performance of the proposed approach compared to representative generative models.
arXiv Detail & Related papers (2022-02-10T18:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.