On the Collapse of Generative Paths: A Criterion and Correction for Diffusion Steering
- URL: http://arxiv.org/abs/2512.10339v1
- Date: Thu, 11 Dec 2025 06:44:08 GMT
- Title: On the Collapse of Generative Paths: A Criterion and Correction for Diffusion Steering
- Authors: Ziseok Lee, Minyeong Hwang, Sanghyun Jo, Wooyeol Lee, Jihyung Ko, Young Bin Park, Jae-Mun Choi, Eunho Yang, Kyungsu Kim
- Abstract summary: Inference-time steering enables pretrained diffusion/flow models to be adapted to new tasks without retraining. This construction harbors a critical and previously unformalized failure mode: Marginal Path Collapse. We introduce Adaptive path Correction with Exponents (ACE), which extends Feynman-Kac steering to time-varying exponents.
- Score: 29.633206995806542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inference-time steering enables pretrained diffusion/flow models to be adapted to new tasks without retraining. A widely used approach is the ratio-of-densities method, which defines a time-indexed target path by reweighting probability-density trajectories from multiple models with positive, or in some cases, negative exponents. This construction, however, harbors a critical and previously unformalized failure mode: Marginal Path Collapse, where intermediate densities become non-normalizable even though endpoints remain valid. Collapse arises systematically when composing heterogeneous models trained on different noise schedules or datasets, including a common setting in molecular design where de-novo, conformer, and pocket-conditioned models must be combined for tasks such as flexible-pose scaffold decoration. We provide a novel and complete solution for the problem. First, we derive a simple path existence criterion that predicts exactly when collapse occurs from noise schedules and exponents alone. Second, we introduce Adaptive path Correction with Exponents (ACE), which extends Feynman-Kac steering to time-varying exponents and guarantees a valid probability path. On a synthetic 2D benchmark and on flexible-pose scaffold decoration, ACE eliminates collapse and enables high-guidance compositional generation, improving distributional and docking metrics over constant-exponent baselines and even specialized task-specific scaffold decoration models. Our work turns ratio-of-densities steering with heterogeneous experts from an unstable heuristic into a reliable tool for controllable generation.
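The path existence criterion can be illustrated in a toy Gaussian setting. The sketch below is a minimal analogue under assumed schedules and exponents, not the authors' exact construction; the names `combined_precision`, `path_exists`, `sig_a`, and `sig_b` are all hypothetical. For marginals p_i(x, t) = N(0, sigma_i(t)^2), the ratio-of-densities path q_t(x) proportional to the product of p_i(x, t)^lambda_i is Gaussian with precision sum_i lambda_i / sigma_i(t)^2, so q_t stays normalizable only while that sum is positive:

```python
import math

# Toy analogue of the path existence criterion (illustrative only):
# for Gaussian marginals p_i(x, t) = N(0, sigma_i(t)^2), the composed path
#   q_t(x)  ∝  prod_i p_i(x, t)^lambda_i
# is Gaussian with precision kappa(t) = sum_i lambda_i / sigma_i(t)^2,
# and is normalizable iff kappa(t) > 0 at every time step.

def combined_precision(sigmas, lambdas):
    """Precision of prod_i N(0, s_i^2)^{l_i} at one time step."""
    return sum(l / s**2 for s, l in zip(sigmas, lambdas))

def path_exists(schedules, lambdas, ts):
    """True iff the composed marginal stays normalizable along the path."""
    return all(
        combined_precision([sig(t) for sig in schedules], lambdas) > 0
        for t in ts
    )

# Two heterogeneous (assumed) noise schedules: one linear, one sine-shaped.
sig_a = lambda t: 0.1 + 0.9 * t
sig_b = lambda t: 0.1 + 0.9 * math.sin(0.5 * math.pi * t)
ts = [i / 200 for i in range(1, 201)]

# Positive exponents compose safely; a large negative exponent on the
# sharper density drives the precision negative, so the intermediate
# marginal becomes non-normalizable: Marginal Path Collapse.
print(path_exists([sig_a, sig_b], [1.0, 0.5], ts))   # → True
print(path_exists([sig_a, sig_b], [1.0, -2.0], ts))  # → False
```

In this toy setting the criterion depends only on the schedules and exponents, not on the data, which mirrors the paper's claim that collapse can be predicted from noise schedules and exponents alone.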
Related papers
- Bridge Matching Sampler: Scalable Sampling via Generalized Fixed-Point Diffusion Matching [38.70740405520393]
Bridge Matching Sampler (BMS) enables learning a transport map between arbitrary prior and target distributions with a single, scalable, and stable objective. We demonstrate that our method enables sampling at unprecedented scales while preserving mode diversity, achieving state-of-the-art results on complex synthetic densities and high-dimensional molecular benchmarks.
arXiv Detail & Related papers (2026-02-28T08:00:38Z) - GenPANIS: A Latent-Variable Generative Framework for Forward and Inverse PDE Problems in Multiphase Media [0.8594140167290095]
Inverse problems and inverse design in multiphase media require operating on discrete-valued material fields. We propose GenPANIS, a unified generative framework that preserves exact discrete microstructures. A physics-aware decoder incorporating a differentiable coarse-grained PDE solver preserves governing equation structure.
arXiv Detail & Related papers (2026-02-16T11:08:30Z) - The Procrustean Bed of Time Series: The Optimization Bias of Point-wise Loss [53.542743390809356]
This paper aims to provide a first-principles analysis of the Expectation of Optimization Bias (EOB). Our analysis reveals a fundamental paradox: the more deterministic and structured the time series, the more severe the bias induced by point-wise loss functions. We present a concrete solution that simultaneously achieves both principles via DFT or DWT.
arXiv Detail & Related papers (2025-12-21T06:08:22Z) - Entropy-Reservoir Bregman Projection: An Information-Geometric Unification of Model Collapse [3.533187668612022]
We present Entropy-Reservoir Bregman Projection (ERBP), an information-geometric framework that unifies these phenomena. Our theory yields (i) a necessary condition for collapse, (ii) a sufficient condition that guarantees an entropy floor, and (iii) closed-form rates that depend on sample size.
arXiv Detail & Related papers (2025-12-16T19:50:03Z) - Worst-case generation via minimax optimization in Wasserstein space [19.645939141861543]
Worst-case generation plays a critical role in evaluating robustness and stress-testing systems under distribution shifts. We develop a generative modeling framework for worst-case generation for a pre-specified risk.
arXiv Detail & Related papers (2025-12-09T02:11:08Z) - FlowPath: Learning Data-Driven Manifolds with Invertible Flows for Robust Irregularly-sampled Time Series Classification [14.643457217551484]
We propose FlowPath, a novel approach that learns the geometry of the control path via an invertible neural flow. We show that FlowPath consistently achieves statistically significant improvements in classification accuracy over baselines using fixed interpolants or non-invertible architectures.
arXiv Detail & Related papers (2025-11-13T22:59:26Z) - ResAD: Normalized Residual Trajectory Modeling for End-to-End Autonomous Driving [64.42138266293202]
ResAD is a Normalized Residual Trajectory Modeling framework. It reframes the learning task to predict the residual deviation from an inertial reference. On the NAVSIM benchmark, ResAD achieves a state-of-the-art PDMS of 88.6 using a vanilla diffusion policy.
arXiv Detail & Related papers (2025-10-09T17:59:36Z) - Latent Iterative Refinement Flow: A Geometric-Constrained Approach for Few-Shot Generation [5.062604189239418]
We introduce Latent Iterative Refinement Flow (LIRF), a novel approach to few-shot generation. LIRF establishes a stable latent space using an autoencoder trained with our novel manifold-preservation loss. Within this cycle, candidate samples are refined by a geometric correction operator, a provably contractive mapping.
arXiv Detail & Related papers (2025-09-24T08:57:21Z) - Solving Inverse Problems with FLAIR [68.87167940623318]
We present FLAIR, a training-free variational framework that leverages flow-based generative models as a prior for inverse problems. Results on standard imaging benchmarks demonstrate that FLAIR consistently outperforms existing diffusion- and flow-based methods in terms of reconstruction quality and sample diversity.
arXiv Detail & Related papers (2025-06-03T09:29:47Z) - Towards Continual Learning Desiderata via HSIC-Bottleneck
Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy, even with zero exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z) - Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z) - Non-adversarial training of Neural SDEs with signature kernel scores [4.721845865189578]
State-of-the-art performance for irregular time series generation has been previously obtained by training these models adversarially as GANs.
In this paper, we introduce a novel class of scoring rules on pathspace based on signature kernels.
arXiv Detail & Related papers (2023-05-25T17:31:18Z) - Stochastic Interpolants: A Unifying Framework for Flows and Diffusions [18.299322342860517]
A class of generative models that unifies flow-based and diffusion-based methods is introduced. These models extend the framework proposed in Albergo and Vanden-Eijnden (2023), enabling the use of a broad class of continuous-time processes called interpolants.
arXiv Detail & Related papers (2023-03-15T17:43:42Z) - Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.