Control Variate Score Matching for Diffusion Models
- URL: http://arxiv.org/abs/2512.20003v1
- Date: Tue, 23 Dec 2025 02:55:14 GMT
- Title: Control Variate Score Matching for Diffusion Models
- Authors: Khaled Kahouli, Romuald Elie, Klaus-Robert Müller, Quentin Berthet, Oliver T. Unke, Arnaud Doucet
- Abstract summary: We introduce the Control Variate Score Identity (CVSI), deriving an optimal, time-dependent control coefficient that theoretically guarantees variance minimization across the entire noise spectrum. We demonstrate that CVSI serves as a robust, low-variance plug-in estimator that significantly enhances sample efficiency in both data-free sampler learning and inference-time diffusion sampling.
- Score: 34.30408848157335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models offer a robust framework for sampling from unnormalized probability densities, which requires accurately estimating the score of the noise-perturbed target distribution. While the standard Denoising Score Identity (DSI) relies on data samples, access to the target energy function enables an alternative formulation via the Target Score Identity (TSI). However, these estimators face a fundamental variance trade-off: DSI exhibits high variance in low-noise regimes, whereas TSI suffers from high variance at high noise levels. In this work, we reconcile these approaches by unifying both estimators within the principled framework of control variates. We introduce the Control Variate Score Identity (CVSI), deriving an optimal, time-dependent control coefficient that theoretically guarantees variance minimization across the entire noise spectrum. We demonstrate that CVSI serves as a robust, low-variance plug-in estimator that significantly enhances sample efficiency in both data-free sampler learning and inference-time diffusion sampling.
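To make the control variate construction concrete: since DSI- and TSI-style estimates are both unbiased for the same noisy score, their difference has zero mean and can serve as a control variate, with the variance-minimizing coefficient given by a covariance ratio. The sketch below is this generic textbook construction under those assumptions; the function name and the per-dimension coefficient are illustrative, not the paper's exact CVSI formula.

```python
import numpy as np

def cv_combine(dsi, tsi, eps=1e-12):
    """Control variate combination of two unbiased score estimators.

    dsi, tsi: (n, d) arrays of per-sample DSI- and TSI-style estimates of
    the same noisy score. Their difference d = tsi - dsi is a zero-mean
    control variate, so s_hat = dsi + c * d stays unbiased for any c; the
    variance-minimizing choice is c* = -Cov(dsi, d) / Var(d).
    A generic sketch, not the paper's CVSI estimator.
    """
    d = tsi - dsi                                   # zero-mean control variate
    d_c = d - d.mean(axis=0, keepdims=True)
    a_c = dsi - dsi.mean(axis=0, keepdims=True)
    c = -(a_c * d_c).mean(axis=0) / ((d_c ** 2).mean(axis=0) + eps)
    return (dsi + c * d).mean(axis=0), c            # c=0 recovers DSI, c=1 TSI
```

Note that c = 0 recovers the DSI estimator and c = 1 the TSI estimator, so a time-dependent coefficient can move between the low-noise regime (where TSI is preferable) and the high-noise regime (where DSI is), matching the trade-off the abstract describes.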
Related papers
- Latent Target Score Matching, with an application to Simulation-Based Inference [13.204127482928143]
Latent Target Score Matching (LTSM) is an extension of Target Score Matching (TSM). LTSM consistently improves variance, score accuracy, and sample quality across simulation-based inference tasks.
arXiv Detail & Related papers (2026-02-06T20:45:29Z) - Combating Noisy Labels through Fostering Self- and Neighbor-Consistency [120.4394402099635]
Label noise is pervasive in various real-world scenarios, posing challenges in supervised deep learning. We propose a noise-robust method named Jo-SNC (Joint sample selection and model regularization based on Self- and Neighbor-Consistency). We design a self-adaptive, data-driven thresholding scheme to adjust per-class selection thresholds.
arXiv Detail & Related papers (2026-01-19T07:55:29Z) - Variance-Reduced Diffusion Sampling via Target Score Identity [0.0]
We study variance reduction for score estimation and diffusion-based sampling in settings where the clean (target) score is available or can be approximated. We develop a plug-and-play nonparametric self-normalized importance sampling estimator compatible with standard reverse-time solvers. Experiments on synthetic targets and PDE-governed inverse problems demonstrate improved sample quality for a fixed simulation budget.
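For context, the Target Score Identity writes the noisy score as a posterior expectation of the clean score, which a self-normalized importance sampling (SNIS) estimator can approximate when only an unnormalized target density is available. The sketch below assumes a corruption of the form x_t = alpha * x_0 + sigma * eps and a Gaussian proposal centered at x_t / alpha; it is a generic illustration under those assumptions, not this paper's estimator.

```python
import numpy as np

def tsi_score_snis(x_t, alpha, sigma, log_p0, grad_log_p0, n=1024, rng=None):
    """SNIS estimate of grad log p_t(x_t) via the Target Score Identity.

    TSI: grad log p_t(x_t) = (1/alpha) * E[ grad log p_0(x_0) | x_t ], where
    p(x_0 | x_t) is proportional to p_0(x_0) * N(x_t; alpha x_0, sigma^2 I).
    With the proposal q(x_0) = N(x_t / alpha, (sigma/alpha)^2 I), the Gaussian
    factors cancel and the self-normalized weight reduces to w_i ~ p_0(x_0_i).
    log_p0 and grad_log_p0 are user-supplied (unnormalized target density).
    """
    rng = rng or np.random.default_rng()
    x0 = x_t / alpha + (sigma / alpha) * rng.standard_normal((n, x_t.shape[-1]))
    logw = log_p0(x0)                        # log-target; constants cancel
    w = np.exp(logw - logw.max())
    w /= w.sum()                             # self-normalized weights
    return (w[:, None] * grad_log_p0(x0)).sum(axis=0) / alpha
```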
arXiv Detail & Related papers (2026-01-04T16:46:31Z) - Reliable Few-shot Learning under Dual Noises [166.53173694689693]
We propose DEnoised Task Adaptation (DETA++) for reliable few-shot learning. DETA++ employs a memory bank to store and refine clean regions for each inner-task class, based on which a Local Nearest Centroid Classifier (LocalNCC) is devised to yield noise-robust predictions on query samples. Extensive experiments demonstrate the effectiveness and flexibility of DETA++.
arXiv Detail & Related papers (2025-06-19T14:05:57Z) - Noise Conditional Variational Score Distillation [60.38982038894823]
Noise Conditional Variational Score Distillation (NCVSD) is a novel method for distilling pretrained diffusion models into generative denoisers. By integrating this insight into the Variational Score Distillation framework, we enable scalable learning of generative denoisers.
arXiv Detail & Related papers (2025-06-11T06:01:39Z) - TRUST: Test-time Resource Utilization for Superior Trustworthiness [15.031121920821109]
We propose a novel test-time optimization method that accounts for the impact of such noise to produce more reliable confidence estimates. This score defines a monotonic subset-selection function, where population accuracy consistently increases as samples with lower scores are removed.
arXiv Detail & Related papers (2025-06-06T12:52:32Z) - Diffusion Sampling Path Tells More: An Efficient Plug-and-Play Strategy for Sample Filtering [18.543769006014383]
Diffusion models often exhibit inconsistent sample quality due to variations inherent in their sampling trajectories. We introduce CFG-Rejection, an efficient, plug-and-play strategy that filters low-quality samples at an early stage of the denoising process. We validate the effectiveness of CFG-Rejection in image generation through extensive experiments.
arXiv Detail & Related papers (2025-05-29T11:08:24Z) - Dimension-free Score Matching and Time Bootstrapping for Diffusion Models [19.62665684173391]
Diffusion models generate samples by estimating the score function of the target distribution at various noise levels. We introduce a martingale-based error decomposition and sharp variance bounds, enabling efficient learning from dependent data. Building on these insights, we propose Bootstrapped Score Matching (BSM), a variance reduction technique that leverages previously learned scores to improve accuracy at higher noise levels.
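One way to see how a score learned at a lower noise level can supply targets at a higher one is the same change-of-measure argument behind the Target Score Identity, applied between two noise levels. The relation below is an illustrative identity under a Gaussian transition kernel, not necessarily BSM's exact construction.

```latex
% Let x_{t'} \mid x_t \sim \mathcal{N}(\alpha_{t'|t}\, x_t,\ \sigma_{t'|t}^2 I)
% for t' > t. Then the score at the higher noise level t' is a posterior
% average of the score at the lower level t:
\[
\nabla_{x_{t'}} \log p_{t'}(x_{t'})
  \;=\; \frac{1}{\alpha_{t'|t}}\,
        \mathbb{E}\!\left[\,\nabla_{x_t} \log p_t(x_t) \,\middle|\, x_{t'}\,\right],
\]
% so a previously learned score s_\theta(\cdot, t) can be plugged in on the
% right-hand side to build regression targets at t'.
```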
arXiv Detail & Related papers (2025-02-14T18:32:22Z) - An analysis of the noise schedule for score-based generative models [7.180235086275926]
Score-based generative models (SGMs) aim at estimating a target data distribution by learning score functions using only noise-perturbed samples from the target. Recent literature has focused extensively on assessing the error between the target and estimated distributions, gauging the generative quality through the Kullback-Leibler (KL) divergence and Wasserstein distances. We establish an upper bound for the KL divergence between the target and the estimated distributions, explicitly depending on any time-dependent noise schedule.
arXiv Detail & Related papers (2024-02-07T08:24:35Z) - Risk-Sensitive Diffusion: Robustly Optimizing Diffusion Models with Noisy Samples [58.68233326265417]
Non-image data are prevalent in real applications and tend to be noisy.
Risk-sensitive SDE is a type of stochastic differential equation (SDE) parameterized by the risk vector.
We conduct systematic studies for both Gaussian and non-Gaussian noise distributions.
arXiv Detail & Related papers (2024-02-03T08:41:51Z) - Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation, which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
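As background, the binary NCE objective trains an unnormalized (energy-based) model by classifying data against noise samples, using the model-to-noise log-density ratio as the logit. The minimal sketch below assumes equal numbers of data and noise samples and user-supplied log-density callables.

```python
import numpy as np

def nce_loss(log_model, log_noise, x_data, x_noise):
    """Binary Noise-Contrastive Estimation loss (Gutmann & Hyvarinen, 2010).

    log_model: unnormalized log-density of the energy-based model.
    log_noise: log-density of the (normalized) noise distribution.
    The logit is G(x) = log_model(x) - log_noise(x); data samples should be
    classified as 1 and noise samples as 0. Written stably via logaddexp.
    """
    g_data = log_model(x_data) - log_noise(x_data)
    g_noise = log_model(x_noise) - log_noise(x_noise)
    # -log sigmoid(g) = logaddexp(0, -g);  -log(1 - sigmoid(g)) = logaddexp(0, g)
    return np.mean(np.logaddexp(0.0, -g_data)) + np.mean(np.logaddexp(0.0, g_noise))
```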
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - FP-Diffusion: Improving Score-based Diffusion Models by Enforcing the Underlying Score Fokker-Planck Equation [72.19198763459448]
We learn a family of noise-conditional score functions corresponding to the data density perturbed with increasingly large amounts of noise.
These perturbed data densities are linked together by the Fokker-Planck equation (FPE), a partial differential equation (PDE) governing the spatial-temporal evolution of a density.
We derive a corresponding equation called the score FPE that characterizes the noise-conditional scores of the perturbed data densities.
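For reference, the density-level Fokker-Planck equation and the score-level equation it induces can be sketched as follows, with f and g the drift and diffusion coefficients of the forward SDE; this follows the standard derivation (write the PDE for log p_t, then differentiate in x).

```latex
% Forward SDE: dx = f(x,t)\,dt + g(t)\,dw. The marginal density obeys the FPE
%   \partial_t p_t = -\nabla\cdot(f\,p_t) + \tfrac{g^2}{2}\,\Delta p_t,
% and differentiating the log-density PDE gives a "score FPE" for
% s_t(x) = \nabla_x \log p_t(x):
\[
\partial_t\, s_t(x)
  \;=\; \nabla_x\!\left[\tfrac{g(t)^2}{2}\Big(\nabla\!\cdot s_t(x)
         + \lVert s_t(x)\rVert^2\Big)
         \;-\; f(x,t)\cdot s_t(x) \;-\; \nabla\!\cdot f(x,t)\right].
\]
```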
arXiv Detail & Related papers (2022-10-09T16:27:25Z) - RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial networks (GANs) have attracted increasing attention recently owing to their impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied on sensitive or private training examples, such as medical or financial records, it is still probable to divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
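As a rough illustration of the loss-perturbation idea, the Gaussian-mechanism sketch below adds calibrated noise to a clipped loss value. The clipping bound and noise multiplier are hypothetical knobs, and the calibration of the multiplier to a target Rényi-DP budget is omitted.

```python
import numpy as np

def perturb_loss(loss, clip_bound, noise_multiplier, rng=None):
    """Gaussian-mechanism sketch: privatize a scalar loss value.

    Clipping bounds the sensitivity of the loss to any single record;
    Gaussian noise with std = noise_multiplier * clip_bound then yields a
    (Renyi) DP guarantee whose budget depends on the multiplier.
    Illustrative only, not RDP-GAN's exact mechanism.
    """
    rng = rng or np.random.default_rng()
    clipped = float(np.clip(loss, -clip_bound, clip_bound))
    return clipped + rng.normal(0.0, noise_multiplier * clip_bound)
```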
arXiv Detail & Related papers (2020-07-04T09:51:02Z)