Diffusion-DFL: Decision-focused Diffusion Models for Stochastic Optimization
- URL: http://arxiv.org/abs/2510.11590v1
- Date: Mon, 13 Oct 2025 16:31:17 GMT
- Title: Diffusion-DFL: Decision-focused Diffusion Models for Stochastic Optimization
- Authors: Zihao Zhao, Christopher Yeh, Lingkai Kong, Kai Wang
- Abstract summary: Decision-focused learning (DFL) integrates predictive modeling and optimization by training predictors to optimize the downstream decision target. We propose the first diffusion-based DFL approach, which trains a diffusion model to represent the distribution of uncertain parameters. We show that our diffusion DFL approach consistently outperforms strong baselines in decision quality.
- Score: 15.091370734240234
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision-focused learning (DFL) integrates predictive modeling and optimization by training predictors to optimize the downstream decision target rather than merely minimizing prediction error. To date, existing DFL methods typically rely on deterministic point predictions, which are often insufficient to capture the intrinsic stochasticity of real-world environments. To address this challenge, we propose the first diffusion-based DFL approach, which trains a diffusion model to represent the distribution of uncertain parameters and optimizes the decision by solving a stochastic optimization problem with samples drawn from the diffusion model. Our contributions are twofold. First, we formulate diffusion DFL using the reparameterization trick, enabling end-to-end training through diffusion sampling. While effective, this is memory- and compute-intensive because it must differentiate through the diffusion sampling process. Second, we propose a lightweight score function estimator that uses only a few forward diffusion passes and avoids backpropagation through sampling. This follows from our result that backpropagating through stochastic optimization can be approximated by a weighted score function formulation. We empirically show that our diffusion DFL approach consistently outperforms strong baselines in decision quality. The source code for all experiments is available at the project repository: https://github.com/GT-KOALA/Diffusion_DFL.
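The contrast between the two estimators in the abstract can be illustrated on a toy problem. The snippet below is a minimal sketch under stand-in assumptions, not the paper's algorithm: the diffusion model is replaced by a one-parameter Gaussian N(mu, 1), and the downstream decision cost by a simple quadratic, so both gradient estimators can be checked against a known analytic gradient.

```python
import numpy as np

# Two ways to differentiate an expected downstream cost
# E_{xi ~ p_mu}[cost(xi)] with respect to the distribution parameter mu,
# mirroring the reparameterization-based and score-function-based
# estimators contrasted in the abstract. Here p_mu = N(mu, 1) and the
# quadratic cost are stand-in assumptions for illustration only.

rng = np.random.default_rng(0)
mu, target, n = 1.5, 0.0, 200_000

def cost(xi):
    return (xi - target) ** 2

def cost_grad(xi):          # d cost / d xi, needed only for reparameterization
    return 2.0 * (xi - target)

eps = rng.standard_normal(n)
xi = mu + eps               # reparameterized samples (sigma = 1, d xi / d mu = 1)

# 1) Reparameterization: differentiate "through" the sampling path.
grad_reparam = cost_grad(xi).mean()

# 2) Score function: needs no gradient of the cost, only log-density scores;
#    grad_mu log N(xi; mu, 1) = xi - mu. A mean baseline reduces variance.
baseline = cost(xi).mean()
grad_score = ((cost(xi) - baseline) * (xi - mu)).mean()

# Exact gradient of E[(xi - target)^2] = (mu - target)^2 + 1 w.r.t. mu.
analytic = 2.0 * (mu - target)
print(grad_reparam, grad_score, analytic)
```

Both estimates converge to the same gradient, but the score-function version never touches `cost_grad`, which is the property that lets the paper's estimator avoid backpropagating through the diffusion sampling chain (the quadratic cost here stands in for the stochastic-optimization decision loss).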
Related papers
- Fast Sampling for Flows and Diffusions with Lazy and Point Mass Stochastic Interpolants [5.492889521988414]
We prove how to convert a sample path of a stochastic differential equation (SDE) with arbitrary diffusion coefficient under any schedule. We then extend the interpolant framework to admit a larger class of point mass schedules.
arXiv Detail & Related papers (2026-02-03T17:48:34Z) - DIVE: Inverting Conditional Diffusion Models for Discriminative Tasks [79.50756148780928]
This paper studies the problem of leveraging pretrained diffusion models for performing discriminative tasks. We extend the discriminative capability of pretrained frozen generative diffusion models from the classification task to the more complex object detection task, by "inverting" a pretrained layout-to-image diffusion model.
arXiv Detail & Related papers (2025-04-24T05:13:27Z) - Training-free Diffusion Model Alignment with Sampling Demons [15.400553977713914]
We propose an optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining. Our approach works by controlling the noise distribution in denoising steps to concentrate density on regions corresponding to high rewards through optimization. Our experiments show that the proposed approach significantly improves the average aesthetic scores of text-to-image generation.
arXiv Detail & Related papers (2024-10-08T07:33:49Z) - Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction [29.834614425056355]
We introduce Lotus, a diffusion-based visual foundation model with a simple yet effective adaptation protocol for dense prediction. Specifically, Lotus is trained to directly predict annotations instead of noise, thereby avoiding harmful variance. We introduce a novel tuning strategy called detail preserver, which achieves more accurate and fine-grained predictions.
arXiv Detail & Related papers (2024-09-26T17:58:55Z) - Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions. We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance. Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Variational Schrödinger Diffusion Models [14.480273869571468]
Schrödinger bridge (SB) has emerged as the go-to method for optimizing transportation plans in diffusion models. We leverage variational inference to linearize the forward score functions (variational scores) of SB. We propose the variational Schrödinger diffusion model (VSDM), where the forward process is a multivariate diffusion and the variational scores are adaptively optimized for efficient transport.
arXiv Detail & Related papers (2024-05-08T04:01:40Z) - Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints [55.39203337683045]
We propose to perform optimization within the data manifold using diffusion models. Depending on the differentiability of the objective function, we propose two different sampling methods. Our method achieves better or comparable performance with previous state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-28T03:09:12Z) - Diffusion Models with Deterministic Normalizing Flow Priors [21.24885597341643]
We propose DiNof (Diffusion with Normalizing flow priors), a technique that makes use of normalizing flows and diffusion models. Experiments on standard image generation datasets demonstrate the advantage of the proposed method over existing approaches.
arXiv Detail & Related papers (2023-09-03T21:26:56Z) - DF2: Distribution-Free Decision-Focused Learning [30.288876294435294]
Decision-focused learning (DFL) has emerged as a powerful approach for predict-then-optimize problems. DFL faces three bottlenecks: model mismatch error, sample average approximation error, and gradient approximation error. We present DF2, the first distribution-free decision-focused learning method designed to mitigate these three bottlenecks.
arXiv Detail & Related papers (2023-08-11T00:44:46Z) - Reconstructing Graph Diffusion History from a Single Snapshot [87.20550495678907]
We propose a novel barycenter formulation for reconstructing Diffusion history from A single SnapsHot (DASH).
We prove that estimation error of diffusion parameters is unavoidable due to NP-hardness of diffusion parameter estimation.
We also develop an effective solver named DIffusion hiTting Times with Optimal proposal (DITTO).
arXiv Detail & Related papers (2023-06-01T09:39:32Z) - Diffusion Models are Minimax Optimal Distribution Estimators [49.47503258639454]
We provide the first rigorous analysis on approximation and generalization abilities of diffusion modeling.
We show that when the true density function belongs to the Besov space and the empirical score matching loss is properly minimized, the generated data distribution achieves the nearly minimax optimal estimation rates.
arXiv Detail & Related papers (2023-03-03T11:31:55Z) - Where to Diffuse, How to Diffuse, and How to Get Back: Automated Learning for Multivariate Diffusions [22.04182099405728]
Diffusion-based generative models (DBGMs) perturb data to a target noise distribution and reverse this inference diffusion process to generate samples.
We show how to maximize a lower-bound on the likelihood for any number of auxiliary variables.
We then demonstrate how to parameterize the diffusion for a specified target noise distribution.
arXiv Detail & Related papers (2023-02-14T18:57:04Z)