Proximal Diffusion Neural Sampler
- URL: http://arxiv.org/abs/2510.03824v1
- Date: Sat, 04 Oct 2025 14:44:47 GMT
- Title: Proximal Diffusion Neural Sampler
- Authors: Wei Guo, Jaemoo Choi, Yuchen Zhu, Molei Tao, Yongxin Chen
- Abstract summary: We propose a framework named Proximal Diffusion Neural Sampler (PDNS) that tackles the optimal control problem via the proximal point method on the space of path measures. PDNS decomposes the learning process into a series of simpler subproblems that create a path gradually approaching the desired distribution. We demonstrate the effectiveness and robustness of PDNS through extensive experiments on both continuous and discrete sampling tasks.
- Score: 45.335824474545795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of learning a diffusion-based neural sampler for drawing samples from an unnormalized target distribution can be viewed as a stochastic optimal control problem on path measures. However, the training of neural samplers can be challenging when the target distribution is multimodal with significant barriers separating the modes, potentially leading to mode collapse. We propose a framework named Proximal Diffusion Neural Sampler (PDNS) that addresses these challenges by tackling the stochastic optimal control problem via the proximal point method on the space of path measures. PDNS decomposes the learning process into a series of simpler subproblems whose solutions trace a progressively refined path toward the desired distribution, promoting thorough exploration across modes. For a practical and efficient realization, we instantiate each proximal step with a proximal weighted denoising cross-entropy (WDCE) objective. We demonstrate the effectiveness and robustness of PDNS through extensive experiments on both continuous and discrete sampling tasks, including challenging scenarios in molecular dynamics and statistical physics.
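Schematically, and as a hedged reconstruction from the abstract (the paper's exact objective, regularization schedule, and notation may differ), the proximal point iteration on path measures reads

$$\mathbb{P}^{k+1} \;=\; \arg\min_{\mathbb{P}} \left\{ \mathcal{L}(\mathbb{P}) + \frac{1}{\eta_k}\, D_{\mathrm{KL}}\!\left(\mathbb{P} \,\|\, \mathbb{P}^{k}\right) \right\},$$

where $\mathcal{L}$ is the stochastic-optimal-control objective on path measures, $\mathbb{P}^{k}$ is the current iterate, and $\eta_k > 0$ is a step size. The KL proximal term keeps each subproblem close to the previous iterate, which is what makes the stages "simpler" and yields the progressively refined path described above; each stage is then solved approximately with the proximal WDCE objective.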
Related papers
- G$^2$RPO: Granular GRPO for Precise Reward in Flow Models [74.21206048155669]
We propose a novel Granular-GRPO (G$^2$RPO) framework that achieves precise and comprehensive reward assessment of sampling directions. We introduce a Multi-Granularity Advantage Integration module that aggregates advantages computed at multiple diffusion scales. Our G$^2$RPO significantly outperforms existing flow-based GRPO baselines.
arXiv Detail & Related papers (2025-10-02T12:57:12Z)
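To make the group-relative reward idea concrete, here is a minimal sketch of GRPO-style advantages standardized within a sample group and then averaged across diffusion scales; the weighted mean is a simplified stand-in for the paper's Multi-Granularity Advantage Integration, and all names are illustrative rather than the paper's API.

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style advantage: standardize rewards within one sample group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def multi_scale_advantages(rewards_per_scale, weights=None):
    """Aggregate per-scale advantages with a weighted mean (a simplified
    stand-in for the paper's Multi-Granularity Advantage Integration)."""
    adv = np.stack([group_relative_advantages(r) for r in rewards_per_scale])
    n = len(rewards_per_scale)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    return w @ adv  # (n_scales,) @ (n_scales, group_size) -> (group_size,)

# Example: a group of 4 samples scored at 3 diffusion granularities.
scales = [[1.0, 0.2, 0.5, 0.9],
          [0.8, 0.1, 0.6, 1.1],
          [0.9, 0.3, 0.4, 1.0]]
print(multi_scale_advantages(scales))
```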
- MDNS: Masked Diffusion Neural Sampler via Stochastic Optimal Control [28.6806212080778]
We study the problem of learning a neural sampler to generate samples from discrete state spaces. We propose Masked Diffusion Neural Sampler (MDNS), a novel framework for discrete neural samplers.
arXiv Detail & Related papers (2025-08-14T14:27:16Z)
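For context, a minimal sketch of the masked corruption process that masked discrete diffusion models build on, assuming a simple linear masking schedule; the paper's stochastic-optimal-control training of the reverse process is not shown.

```python
import numpy as np

MASK = -1  # sentinel id for the mask token

def mask_corrupt(tokens, t, rng):
    """Independently replace each token with MASK with probability t in [0, 1]."""
    x = np.asarray(tokens).copy()
    x[rng.random(x.shape) < t] = MASK
    return x

rng = np.random.default_rng(0)
print(mask_corrupt([3, 1, 4, 1, 5, 9, 2, 6], t=0.5, rng=rng))
```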
- Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics [7.873510219469276]
We introduce two novel training methods for discrete diffusion samplers. These methods yield memory-efficient training and achieve state-of-the-art results in unsupervised combinatorial optimization. We also introduce adaptations of SN-NIS and Neural Markov Chain Monte Carlo that enable, for the first time, the application of discrete diffusion models to this problem.
arXiv Detail & Related papers (2025-02-12T18:59:55Z)
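Since SN-NIS builds on self-normalized importance sampling, a minimal sketch of that classical estimator may help; this is the textbook construction, not the paper's neural variant.

```python
import numpy as np

def snis_estimate(f_vals, log_p_unnorm, log_q):
    """Estimate E_p[f] from samples x_i ~ q, with p known only up to a constant."""
    log_w = log_p_unnorm - log_q  # unnormalized log importance weights
    log_w -= log_w.max()          # shift for numerical stability
    w = np.exp(log_w)
    return np.sum(w * f_vals) / np.sum(w)

# Toy check: target p ∝ exp(-(x-1)^2/2) (mean 1), proposal q = N(0, 2^2).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=100_000)
log_q = -0.5 * (x / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
log_p = -0.5 * (x - 1.0) ** 2  # unnormalized target log-density
print(snis_estimate(x, log_p, log_q))  # ≈ 1.0
```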
- Inferring biological processes with intrinsic noise from cross-sectional data [0.8192907805418583]
Inferring dynamical models from data continues to be a significant challenge in computational biology. We show that probability flow inference (PFI) disentangles force from intrinsic stochasticity while retaining the algorithmic ease of ODE inference. In practical applications, we show that PFI enables accurate parameter and force estimation in high-dimensional reaction networks, and that it allows inference of cell differentiation dynamics with molecular noise.
arXiv Detail & Related papers (2024-10-10T00:33:25Z)
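As background for PFI (this is the standard probability-flow identity from the diffusion literature, stated here as context rather than as the paper's exact formulation): for an SDE $dX_t = f(X_t)\,dt + \sqrt{2D}\,dW_t$ with marginal densities $p_t$, the deterministic probability-flow ODE

$$\frac{dx}{dt} = f(x) - D\,\nabla \log p_t(x)$$

has the same marginals $p_t$. This correspondence is what lets the deterministic force $f$ be separated from intrinsic stochasticity while inference retains the machinery of ODE fitting.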
- Adaptive teachers for amortized samplers [76.88721198565861]
We propose an adaptive training distribution (the teacher) to guide the training of the primary amortized sampler (the student). We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge.
arXiv Detail & Related papers (2024-10-02T11:33:13Z)
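One plausible instantiation of a teacher, stated as an assumption rather than the paper's method: sample training points preferentially from regions where the student currently incurs high loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_sample(candidates, student_loss, n, temp=1.0):
    """Draw n training points, up-weighting regions of high student loss
    (a hypothetical teacher; the paper's construction may differ)."""
    losses = np.array([student_loss(c) for c in candidates])
    logits = losses / temp
    probs = np.exp(logits - logits.max())  # softmax, numerically stable
    probs /= probs.sum()
    idx = rng.choice(len(candidates), size=n, p=probs)
    return candidates[idx]

candidates = rng.normal(size=(100, 2))
batch = teacher_sample(candidates, lambda c: float(np.sum(c ** 2)), n=8)
print(batch.shape)  # (8, 2)
```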
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density by transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the solutions of the corresponding partial differential equations (PDEs).
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
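A minimal sketch of a PINN-style residual loss for a 1D continuity equation $\partial_t p + \partial_x (p v) = 0$, the kind of transport PDE at play here; finite differences stand in for the automatic differentiation a real PINN would use, and all names are illustrative.

```python
import numpy as np

def pde_residual(p, v, xs, ts, h=1e-4):
    """p(x, t): model density; v(x, t): model velocity field.
    Returns the mean squared continuity-equation residual at collocation points."""
    def flux(x, t):
        return p(x, t) * v(x, t)
    dp_dt = (p(xs, ts + h) - p(xs, ts - h)) / (2 * h)      # time derivative
    dflux_dx = (flux(xs + h, ts) - flux(xs - h, ts)) / (2 * h)  # space derivative
    return np.mean((dp_dt + dflux_dx) ** 2)

# Toy check: a Gaussian translating at unit speed solves the equation with v = 1.
p = lambda x, t: np.exp(-0.5 * (x - t) ** 2) / np.sqrt(2 * np.pi)
v = lambda x, t: np.ones_like(x)
xs, ts = np.linspace(-3, 3, 50), np.full(50, 0.5)
print(pde_residual(p, v, xs, ts))  # ≈ 0 for this exact solution
```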
- Diffusion-ES: Gradient-free Planning with Diffusion for Autonomous Driving and Zero-Shot Instruction Following [21.81411085058986]
Reward-gradient guided denoising generates trajectories that maximize both a differentiable reward function and the likelihood under the data distribution captured by a diffusion model.
We propose Diffusion-ES, a method that combines gradient-free optimization with trajectory denoising. We show that Diffusion-ES achieves state-of-the-art performance on nuPlan, an established closed-loop planning benchmark for autonomous driving.
arXiv Detail & Related papers (2024-02-09T17:18:33Z)
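A hedged sketch of an evolutionary-search loop in this spirit: candidates are mutated by noising and then "denoising", scored by a black-box reward, and the elites are resampled. The `denoise` stand-in below is a placeholder for a trained diffusion model, and none of this is the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, noise_level):
    # Placeholder: a real implementation would run reverse diffusion steps.
    return x * (1.0 - 0.5 * noise_level)

def diffusion_es(reward, x0, pop=32, iters=50, noise=0.3, elite_frac=0.25):
    X = np.tile(x0, (pop, 1))
    for _ in range(iters):
        noised = X + noise * rng.normal(size=X.shape)  # forward noising
        X_new = denoise(noised, noise)                 # diffusion-model mutation
        scores = np.array([reward(x) for x in X_new])  # black-box scoring
        elite = X_new[np.argsort(scores)[-int(pop * elite_frac):]]
        X = elite[rng.integers(0, len(elite), size=pop)]  # resample elites
    return X[np.argmax([reward(x) for x in X])]

best = diffusion_es(lambda x: -np.sum(x ** 2), x0=np.ones(4))
print(best)  # drifts toward the reward maximizer at the origin
```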
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
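Since DGFS draws on GFlowNet theory, a minimal sketch of a subtrajectory-balance-style objective shows how a learning signal can come from a partial segment rather than a full rollout; `log_F` denotes a learned log-flow estimate at intermediate states, and the names are illustrative.

```python
import numpy as np

def subtraj_balance_loss(log_F_start, log_F_end, log_pf_steps, log_pb_steps):
    """Squared log-ratio residual for one segment s_m -> ... -> s_n:
    log F(s_m) + sum log P_F should match log F(s_n) + sum log P_B."""
    resid = (log_F_start + np.sum(log_pf_steps)
             - log_F_end - np.sum(log_pb_steps))
    return resid ** 2

# Example: a 3-step segment with made-up log-probabilities and flow values.
print(subtraj_balance_loss(0.5, 0.2, [-1.0, -0.7, -1.2], [-0.9, -0.8, -0.9]))
```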