Laplacian Score Sharpening for Mitigating Hallucination in Diffusion Models
- URL: http://arxiv.org/abs/2511.07496v1
- Date: Wed, 12 Nov 2025 01:01:41 GMT
- Title: Laplacian Score Sharpening for Mitigating Hallucination in Diffusion Models
- Authors: Barath Chandran C., Srinivas Anumasa, Dianbo Liu
- Abstract summary: We propose a post-hoc adjustment to the score function during inference that leverages the Laplacian (or sharpness) of the score to reduce mode interpolation hallucination. We show that this correction significantly reduces the rate of hallucinated samples across toy 1D/2D distributions and a high-dimensional image dataset.
- Score: 4.878587790802629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models, though successful, are known to suffer from hallucinations that create incoherent or unrealistic samples. Recent works have attributed this to the phenomenon of mode interpolation and score smoothening, but they lack a method to prevent their generation during sampling. In this paper, we propose a post-hoc adjustment to the score function during inference that leverages the Laplacian (or sharpness) of the score to reduce mode interpolation hallucination in unconditional diffusion models across 1D, 2D, and high-dimensional image data. We derive an efficient Laplacian approximation for higher dimensions using a finite-difference variant of the Hutchinson trace estimator. We show that this correction significantly reduces the rate of hallucinated samples across toy 1D/2D distributions and a high-dimensional image dataset. Furthermore, our analysis explores the relationship between the Laplacian and uncertainty in the score.
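The finite-difference Hutchinson estimator mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_fn`, the step size `eps`, and the probe count are hypothetical placeholders. The idea is to estimate the trace of the score's Jacobian (its divergence) by averaging directional central differences along random Rademacher probes, avoiding an explicit O(d^2) Jacobian.

```python
import numpy as np

def hutchinson_fd_trace(score_fn, x, eps=1e-3, n_probes=8, seed=None):
    """Estimate tr(J_s(x)), i.e. the divergence of the score field s at x,
    using finite-difference Hutchinson probes.

    score_fn : maps a point of shape (d,) to a score vector of shape (d,)
    x        : query point, shape (d,)
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    est = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=d)  # Rademacher probe vector
        # Central difference approximates v^T J_s(x) v, whose expectation
        # over Rademacher v equals tr(J_s(x)).
        est += v @ (score_fn(x + eps * v) - score_fn(x - eps * v)) / (2 * eps)
    return est / n_probes

# Sanity check on a standard Gaussian score s(x) = -x, whose divergence is -d.
gaussian_score = lambda x: -x
x0 = np.ones(3)
print(hutchinson_fd_trace(gaussian_score, x0, seed=0))  # ≈ -3
```

For a linear score field the central difference is exact, so the Gaussian check recovers -d up to floating-point error; for nonlinear scores the estimate carries both finite-difference bias (controlled by `eps`) and Monte Carlo variance (controlled by `n_probes`).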
Related papers
- Score-based diffusion models for diffuse optical tomography with uncertainty quantification [0.8443238959374133]
We introduce a novel regularization approach that prevents overfitting of the score function by constructing a mixed score composed of a learned and a model-based component. Experiments demonstrate that a data-driven prior distribution results in posterior samples with low variance, compared to classical model-based estimation.
arXiv Detail & Related papers (2026-02-03T12:14:07Z) - One-for-More: Continual Diffusion Model for Anomaly Detection [63.50488826645681]
Anomaly detection methods utilize diffusion models to generate or reconstruct normal samples when given arbitrary anomaly images. Our study found that the diffusion model suffers from severe "faithfulness hallucination" and "catastrophic forgetting". We propose a continual diffusion model that uses gradient projection to achieve stable continual learning.
arXiv Detail & Related papers (2025-02-27T07:47:27Z) - Enhancing Hallucination Detection through Noise Injection [9.582929634879932]
Large Language Models (LLMs) are prone to generating plausible yet incorrect responses, known as hallucinations. We show that detection can be improved significantly by taking into account model uncertainty in the Bayesian sense. We propose a very simple and efficient approach that perturbs an appropriate subset of model parameters, or equivalently hidden unit activations, during sampling.
arXiv Detail & Related papers (2025-02-06T06:02:20Z) - Understanding Hallucinations in Diffusion Models through Mode Interpolation [89.10226585746848]
We study a particular failure mode in diffusion models, which we term mode interpolation.
We find that diffusion models smoothly "interpolate" between nearby data modes in the training set, to generate samples that are completely outside the support of the original training distribution.
We show how hallucination leads to the generation of combinations of shapes that never existed.
arXiv Detail & Related papers (2024-06-13T17:43:41Z) - Projection Regret: Reducing Background Bias for Novelty Detection via Diffusion Models [72.07462371883501]
We propose Projection Regret (PR), an efficient novelty detection method that mitigates the bias of non-semantic information.
PR computes the perceptual distance between the test image and its diffusion-based projection to detect abnormality.
Extensive experiments demonstrate that PR outperforms the prior art of generative-model-based novelty detection methods by a significant margin.
arXiv Detail & Related papers (2023-12-05T09:44:47Z) - Interpreting and Improving Diffusion Models from an Optimization Perspective [4.5993996573872185]
We use this observation to interpret denoising diffusion models as approximate gradient descent applied to the Euclidean distance function.
We propose a new gradient-estimation sampler, generalizing DDIM using insights from our theoretical results.
arXiv Detail & Related papers (2023-06-08T00:56:33Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - A Variational Perspective on Diffusion-Based Generative Models and Score Matching [8.93483643820767]
We derive a variational framework for likelihood estimation for continuous-time generative diffusion.
We show that minimizing the score-matching loss is equivalent to maximizing a lower bound of the likelihood of the plug-in reverse SDE.
arXiv Detail & Related papers (2021-06-05T05:50:36Z) - Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.