Understanding Hallucinations in Diffusion Models through Mode Interpolation
- URL: http://arxiv.org/abs/2406.09358v2
- Date: Sun, 25 Aug 2024 13:41:50 GMT
- Title: Understanding Hallucinations in Diffusion Models through Mode Interpolation
- Authors: Sumukh K Aithal, Pratyush Maini, Zachary C. Lipton, J. Zico Kolter
- Abstract summary: We study a particular failure mode in diffusion models, which we term mode interpolation.
We find that diffusion models smoothly "interpolate" between nearby data modes in the training set, to generate samples that are completely outside the support of the original training distribution.
We show how hallucination leads to the generation of combinations of shapes that never existed.
- Score: 89.10226585746848
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Colloquially speaking, image generation models based upon diffusion processes are frequently said to exhibit "hallucinations," samples that could never occur in the training data. But where do such hallucinations come from? In this paper, we study a particular failure mode in diffusion models, which we term mode interpolation. Specifically, we find that diffusion models smoothly "interpolate" between nearby data modes in the training set, to generate samples that are completely outside the support of the original training distribution; this phenomenon leads diffusion models to generate artifacts that never existed in real data (i.e., hallucinations). We systematically study the reasons for, and the manifestation of, this phenomenon. Through experiments on 1D and 2D Gaussians, we show how a discontinuous loss landscape in the diffusion model's decoder leads to a region where any smooth approximation will cause such hallucinations. Through experiments on artificial datasets with various shapes, we show how hallucination leads to the generation of combinations of shapes that never existed. Finally, we show that diffusion models in fact know when they go out of support and hallucinate. This is captured by the high variance in the trajectory of the generated sample during the final few steps of the backward sampling process. Using a simple metric to capture this variance, we can remove over 95% of hallucinations at generation time while retaining 96% of in-support samples. We conclude our exploration by showing the implications of such hallucination (and its removal) on the collapse (and stabilization) of recursive training on synthetic data with experiments on MNIST and 2D Gaussian datasets. We release our code at https://github.com/locuslab/diffusion-model-hallucination.
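The variance-based filter mentioned at the end of the abstract lends itself to a short sketch. The snippet below is a minimal illustration, not the released implementation: it assumes a trained 1D noise-prediction network `eps_model` (e.g. fit on a mixture of Gaussians), a linear beta schedule, a `last_k` window, and a quantile cutoff, all of which are stand-ins; the exact trajectory statistic and threshold used by the authors may differ.

```python
# Sketch: flag likely hallucinations by the variance of the sample trajectory
# over the final reverse-diffusion steps. Schedule, window, and threshold are
# illustrative assumptions, not the paper's exact settings.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def reverse_sample(eps_model, n_samples=512, last_k=50):
    """Run DDPM ancestral sampling; also return the per-sample variance of the
    trajectory over its last `last_k` steps (the hallucination signal)."""
    x = torch.randn(n_samples, 1)
    tail = []                                 # trajectory tail for the variance metric
    for t in reversed(range(T)):
        eps = eps_model(x, torch.full((n_samples,), t))
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
        if t < last_k:
            tail.append(x.clone())
    traj_var = torch.stack(tail).var(dim=0).squeeze(-1)
    return x.squeeze(-1), traj_var

def filter_hallucinations(eps_model):
    """Keep only samples whose trajectory variance falls below a cutoff.
    The 0.95 quantile is a placeholder for a tuned threshold."""
    samples, traj_var = reverse_sample(eps_model)
    threshold = torch.quantile(traj_var, 0.95)
    return samples[traj_var <= threshold]
```

In this toy setup, in-support samples settle into a mode early and their trajectories barely move over the last steps, while interpolated (hallucinated) samples keep drifting between modes, which is what the variance cutoff exploits.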
Related papers
- Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias [76.85949078144098]
This paper focuses on textual hallucinations, where diffusion models correctly generate individual symbols but assemble them in a nonsensical manner.
We observe that this phenomenon is attributable to the network's local generation bias.
We also theoretically analyze the training dynamics for a specific case involving a two-layer network learning parity points on a hypercube.
arXiv Detail & Related papers (2025-03-05T15:28:50Z) - Enhancing Hallucination Detection through Noise Injection [9.582929634879932]
Large Language Models (LLMs) are prone to generating plausible yet incorrect responses, known as hallucinations.
We show that detection can be improved significantly by taking into account model uncertainty in the Bayesian sense.
We propose a very simple and efficient approach that perturbs an appropriate subset of model parameters, or equivalently hidden unit activations, during sampling.
arXiv Detail & Related papers (2025-02-06T06:02:20Z) - On the Relation Between Linear Diffusion and Power Iteration [42.158089783398616]
We study the generation process as a "correlation machine".
We show that low frequencies emerge earlier in the generation process, where the denoising basis vectors are more aligned to the true data with a rate depending on their eigenvalues.
This model allows us to show that the linear diffusion model converges in mean to the leading eigenvector of the underlying data, similarly to the prevalent power iteration method.
arXiv Detail & Related papers (2024-10-16T07:33:12Z) - Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are fed back their own produced content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial temperature leads the model to degenerate.
arXiv Detail & Related papers (2024-04-02T21:51:39Z) - Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z) - Particle Denoising Diffusion Sampler [32.310922004771776]
Particle Denoising Diffusion Sampler (PDDS) provides consistent estimates under mild assumptions.
We demonstrate PDDS on multimodal and high dimensional sampling tasks.
arXiv Detail & Related papers (2024-02-09T11:01:35Z) - Lecture Notes in Probabilistic Diffusion Models [0.5361320134021585]
Diffusion models are loosely modelled based on non-equilibrium thermodynamics.
The diffusion model learns the data manifold to which the original and thus the reconstructed data samples belong.
Diffusion models have -- unlike variational autoencoder and flow models -- latent variables with the same dimensionality as the original data.
arXiv Detail & Related papers (2023-12-16T09:36:54Z) - HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data [102.56792377624927]
Hallucinations inherent in machine-generated data remain under-explored.
We present a novel hallucination detection and elimination framework, HalluciDoctor, based on the cross-checking paradigm.
Our method successfully mitigates 44.6% of hallucinations in relative terms and maintains competitive performance compared to LLaVA.
arXiv Detail & Related papers (2023-11-22T04:52:58Z) - Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision [76.32860119056964]
We propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed.
We demonstrate the effectiveness of our method on three challenging computer vision tasks.
arXiv Detail & Related papers (2023-06-20T17:53:00Z) - To smooth a cloud or to pin it down: Guarantees and Insights on Score Matching in Denoising Diffusion Models [20.315727650065007]
Denoising diffusion models are a class of generative models which have recently achieved state-of-the-art results across many domains.
We leverage known connections to control akin to the Föllmer drift to extend established neural network approximation results for the Föllmer drift to denoising diffusion models and samplers.
arXiv Detail & Related papers (2023-05-16T16:56:19Z) - Denoising Diffusion Samplers [41.796349001299156]
Denoising diffusion models are a popular class of generative models providing state-of-the-art results in many domains.
We explore a similar idea to sample approximately from unnormalized probability density functions and estimate their normalizing constants.
While score matching is not applicable in this context, we can leverage many of the ideas introduced in generative modeling for Monte Carlo sampling.
arXiv Detail & Related papers (2023-02-27T14:37:16Z) - Diffusion Models in Vision: A Survey [80.82832715884597]
A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage.
Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burdens.
arXiv Detail & Related papers (2022-09-10T22:00:30Z)
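Several of the entries above (the lecture notes and the vision survey) describe the same two-stage structure: a fixed forward noising process and a learned reverse denoising process. The sketch below is a generic, assumption-laden illustration of that structure in a DDPM-style parameterization on 2D toy data; the network size, schedule, and time embedding are arbitrary choices and the code is not taken from any of the listed papers.

```python
# Generic two-stage DDPM sketch: closed-form forward noising plus the
# denoising objective that trains the reverse stage. All shapes, schedules,
# and the tiny MLP are illustrative assumptions.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def forward_diffuse(x0, t):
    """Forward stage: sample q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0)
    xt = torch.sqrt(alpha_bars[t]) * x0 + torch.sqrt(1.0 - alpha_bars[t]) * noise
    return xt, noise

class EpsMLP(nn.Module):
    """Tiny noise-prediction network for 2D toy data (e.g. a mixture of Gaussians)."""
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, xt, t):
        t_feat = (t.float() / T).unsqueeze(-1)   # crude scalar time embedding
        return self.net(torch.cat([xt, t_feat], dim=-1))

def training_step(model, x0, opt):
    """Reverse stage: the network learns to predict the injected noise at a random t."""
    t = torch.randint(0, T, (x0.shape[0],))
    xt, noise = forward_diffuse(x0, t.unsqueeze(-1))
    loss = ((model(xt, t) - noise) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At generation time the trained network is plugged into the reverse chain, exactly as in the trajectory-variance sketch after the main abstract above.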