Latent Target Score Matching, with an application to Simulation-Based Inference
- URL: http://arxiv.org/abs/2602.07189v1
- Date: Fri, 06 Feb 2026 20:45:29 GMT
- Title: Latent Target Score Matching, with an application to Simulation-Based Inference
- Authors: Joohwan Ko, Tomas Geffner,
- Abstract summary: Latent Target Score Matching (LTSM) is an extension of Target Score Matching (TSM). LTSM consistently improves variance, score accuracy, and sample quality across simulation-based inference tasks.
- Score: 13.204127482928143
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Denoising score matching (DSM) for training diffusion models may suffer from high variance at low noise levels. Target Score Matching (TSM) mitigates this when clean data scores are available, providing a low-variance objective. In many applications, however, clean scores are inaccessible due to the presence of latent variables, leaving only joint signals exposed. We propose Latent Target Score Matching (LTSM), an extension of TSM that leverages joint scores for low-variance supervision of the marginal score. While LTSM is effective at low noise levels, a mixture with DSM ensures robustness across noise scales. Across simulation-based inference tasks, LTSM consistently improves variance, score accuracy, and sample quality.
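To make the variance gap concrete, here is a minimal numerical sketch of the DSM-vs-TSM regression targets. The toy setup is ours, not code from the paper: a VP-style forward kernel x_t = α·x_0 + σ·ε and a standard-normal target, chosen because its clean score is analytic. At low noise the per-sample DSM target −ε/σ has variance on the order of 1/σ², while the TSM target ∇log p₀(x₀)/α stays bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: p0 = N(0, 1), so the clean score is analytic: grad log p0(x) = -x.
def clean_score(x0):
    return -x0

n = 100_000
x0 = rng.standard_normal(n)
eps = rng.standard_normal(n)

# VP-style forward kernel x_t = alpha*x0 + sigma*eps at a LOW noise level.
sigma = 1e-2
alpha = np.sqrt(1.0 - sigma**2)
xt = alpha * x0 + sigma * eps

# Per-sample regression targets for the marginal score grad log p_t(x_t):
dsm_target = -eps / sigma             # DSM target: variance ~ 1/sigma^2 as sigma -> 0
tsm_target = clean_score(x0) / alpha  # TSM target: bounded variance at low noise

print(f"DSM target variance: {dsm_target.var():.1f}")  # ~ 1/sigma^2 = 1e4
print(f"TSM target variance: {tsm_target.var():.2f}")  # ~ 1
```

Both quantities are unbiased per-sample estimates of the same conditional mean, the marginal score ∇log p_t(x_t); per the abstract, LTSM's contribution is extending the low-variance option to settings where only joint (latent-augmented) scores are available.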
Related papers
- Combating Noisy Labels through Fostering Self- and Neighbor-Consistency [120.4394402099635]
Label noise is pervasive in various real-world scenarios, posing challenges in supervised deep learning. We propose a noise-robust method named Jo-SNC (Joint sample selection and model regularization based on Self- and Neighbor-Consistency). We design a self-adaptive, data-driven thresholding scheme to adjust per-class selection thresholds.
arXiv Detail & Related papers (2026-01-19T07:55:29Z) - SEE: Signal Embedding Energy for Quantifying Noise Interference in Large Audio Language Models [49.313324100819955]
Signal Embedding Energy (SEE) is a method for quantifying the impact of noise intensity on LALM inputs. SEE exhibits a strong correlation with LALM performance, achieving a correlation of 0.98. This paper introduces a novel metric for noise quantification in LALMs, providing guidance for robustness improvements in real-world deployments.
arXiv Detail & Related papers (2026-01-12T08:57:55Z) - Variance-Reduced Diffusion Sampling via Target Score Identity [0.0]
We study variance reduction for score estimation and diffusion-based sampling in settings where the clean (target) score is available or can be approximated. We develop a plug-and-play nonparametric self-normalized importance sampling estimator compatible with standard reverse-time solvers. Experiments on synthetic targets and PDE-governed inverse problems demonstrate improved sample quality for a fixed simulation budget.
arXiv Detail & Related papers (2026-01-04T16:46:31Z) - Control Variate Score Matching for Diffusion Models [34.30408848157335]
We introduce the Control Variate Score Identity (CVSI), deriving an optimal, time-dependent control coefficient that theoretically guarantees variance minimization across the entire noise spectrum. We demonstrate that CVSI serves as a robust, low-variance plug-in estimator that significantly enhances sample efficiency in both data-free sampler learning and inference-time diffusion sampling.
arXiv Detail & Related papers (2025-12-23T02:55:14Z) - DynaNoise: Dynamic Probabilistic Noise Injection for Defending Against Membership Inference Attacks [6.610581923321801]
Membership Inference Attacks (MIAs) pose a significant risk to the privacy of training datasets. Traditional mitigation techniques rely on injecting a fixed amount of noise during training or inference. We present DynaNoise, an adaptive approach that dynamically modulates noise injection based on query sensitivity.
arXiv Detail & Related papers (2025-05-19T17:07:00Z) - Disentangled Noisy Correspondence Learning [56.06801962154915]
Cross-modal retrieval is crucial in understanding latent correspondences across modalities.
DisNCL is a novel information-theoretic framework for feature Disentanglement in Noisy Correspondence Learning.
arXiv Detail & Related papers (2024-08-10T09:49:55Z) - Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
arXiv Detail & Related papers (2024-06-10T21:44:52Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z) - FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the Underlying Score Fokker-Planck Equation [72.19198763459448]
We learn a family of noise-conditional score functions corresponding to the data density perturbed with increasingly large amounts of noise.
These perturbed data densities are linked together by the Fokker-Planck equation (FPE), a partial differential equation (PDE) governing the spatial-temporal evolution of a density.
We derive a corresponding equation called the score FPE that characterizes the noise-conditional scores of the perturbed data densities.
arXiv Detail & Related papers (2022-10-09T16:27:25Z) - Denoising Likelihood Score Matching for Conditional Score-based Data Generation [22.751924447125955]
We propose a novel training objective, the Denoising Likelihood Score Matching (DLSM) loss, to match the gradients of the true log-likelihood. Our experimental evidence shows that the proposed method noticeably outperforms previous methods on several key evaluation metrics.
arXiv Detail & Related papers (2022-03-27T04:37:54Z)
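The conditional score that DLSM-style conditional generation targets decomposes via Bayes' rule: ∇_x log p(x|y) = ∇_x log p(x) + ∇_x log p(y|x). A minimal sanity check of that identity, using a conjugate-Gaussian toy pair we chose for illustration (the setup is ours, not from the paper):

```python
# Bayes decomposition of the conditional score, exploited by DLSM-style
# methods: grad_x log p(x|y) = grad_x log p(x) + grad_x log p(y|x).
# Toy check with prior p(x) = N(0, 1) and likelihood p(y|x) = N(x, s2).
s2 = 0.5
x, y = 0.3, 1.1

prior_score = -x                 # grad_x log N(x; 0, 1)
likelihood_score = (y - x) / s2  # grad_x log N(y; x, s2)

# Exact conjugate posterior: p(x|y) = N(y / (1 + s2), s2 / (1 + s2)).
post_mean = y / (1.0 + s2)
post_var = s2 / (1.0 + s2)
posterior_score = -(x - post_mean) / post_var

lhs = prior_score + likelihood_score
print(abs(lhs - posterior_score) < 1e-12)  # True
```

DLSM's contribution, per the snippet above, is a denoising objective that makes the learned likelihood-score term match this decomposition in the perturbed (noisy) setting, where the exact posterior is no longer available in closed form.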
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.