ProFlow: Zero-Shot Physics-Consistent Sampling via Proximal Flow Guidance
- URL: http://arxiv.org/abs/2601.20227v1
- Date: Wed, 28 Jan 2026 03:57:00 GMT
- Title: ProFlow: Zero-Shot Physics-Consistent Sampling via Proximal Flow Guidance
- Authors: Zichao Yu, Ming Li, Wenyi Zhang, Difan Zou, Weiguo Gao
- Abstract summary: ProFlow is a framework for zero-shot physics-consistent sampling. It reconciles strict physical consistency and observational fidelity with the statistical structure of the pre-trained prior. It achieves superior physical and observational consistency, as well as more accurate distributional statistics.
- Score: 35.08166384258028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferring physical fields from sparse observations while strictly satisfying partial differential equations (PDEs) is a fundamental challenge in computational physics. Deep generative models have recently offered powerful data-driven priors for such inverse problems, yet existing methods struggle to enforce hard physical constraints without costly retraining or disrupting the learned generative prior. Consequently, there is a critical need for a sampling mechanism that can reconcile strict physical consistency and observational fidelity with the statistical structure of the pre-trained prior. To this end, we present ProFlow, a proximal guidance framework for zero-shot physics-consistent sampling, defined as inferring solutions from sparse observations using a fixed generative prior without task-specific retraining. The algorithm employs a rigorous two-step scheme that alternates between: (i) a terminal optimization step, which projects the flow prediction onto the intersection of the physically and observationally consistent sets via proximal minimization; and (ii) an interpolation step, which maps the refined state back to the generative trajectory to maintain consistency with the learned flow probability path. This procedure admits a Bayesian interpretation as a sequence of local maximum a posteriori (MAP) updates. Comprehensive benchmarks on Poisson, Helmholtz, Darcy, and viscous Burgers' equations demonstrate that ProFlow achieves superior physical and observational consistency, as well as more accurate distributional statistics, compared to state-of-the-art diffusion- and flow-based baselines.
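The two-step scheme from the abstract can be sketched as follows. This is a minimal illustration under strong assumptions, not the authors' implementation: a rectified-flow convention (noise at t = 0, data at t = 1), a linear observation operator `M`, a linear PDE residual operator `L`, and plain gradient descent for the proximal subproblem are all choices made here for concreteness.

```python
import numpy as np

def proflow_sample(flow_model, y_obs, M, L, x_T,
                   n_steps=50, prox_iters=20, lam=1.0, mu=1.0, lr=0.1):
    """Schematic ProFlow-style sampler (illustrative, not the paper's code).

    flow_model(x, t) -> velocity field of a pre-trained flow prior.
    M: linear observation operator (M @ x should match y_obs).
    L: linear PDE residual operator (L @ x should be ~0 for physical states).
    """
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    x = x_T
    for i in range(n_steps):
        t, t_next = ts[i], ts[i + 1]
        v = flow_model(x, t)
        x1_hat = x + (1.0 - t) * v              # predicted terminal state
        # (i) terminal optimization: proximal step toward the intersection
        # of the observationally (M z = y) and physically (L z = 0)
        # consistent sets, anchored at the flow prediction x1_hat
        z = x1_hat.copy()
        for _ in range(prox_iters):
            grad = ((z - x1_hat)
                    + lam * M.T @ (M @ z - y_obs)
                    + mu * L.T @ (L @ z))
            z -= lr * grad
        # (ii) interpolation: map the refined state back onto the flow path
        x0_hat = (x - t * x1_hat) / (1.0 - t)   # implied source (noise) point
        x = (1.0 - t_next) * x0_hat + t_next * z
    return x
```

The quadratic penalties make each proximal subproblem strongly convex, so the inner loop behaves like a local MAP update, matching the Bayesian reading given in the abstract.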
Related papers
- Physics-informed neural particle flow for the Bayesian update step [0.8220217498103312]
We propose a physics-informed neural particle flow, which is an amortized inference framework. By embedding a governing partial differential equation (PDE) into the loss function, we train a neural network to approximate the transport velocity field. We demonstrate that the neural parameterization acts as an implicit regularizer, mitigating the stiffness inherent to analytic flows.
arXiv Detail & Related papers (2026-02-26T15:10:45Z) - Accelerated Sequential Flow Matching: A Bayesian Filtering Perspective [16.29333060724397]
We introduce Sequential Flow Matching, a principled framework grounded in Bayesian filtering. By treating streaming inference as learning a probability flow that transports the predictive distribution from one time step to the next, our approach naturally aligns with the structure of Bayesian belief updates. Our method achieves performance competitive with full-step diffusion while requiring only one or very few sampling steps, and therefore samples faster.
arXiv Detail & Related papers (2026-02-05T05:37:14Z) - Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity for flow-matching based generative models. We decompose the velocity field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field.
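The decomposition described above can be written schematically; the symbols here are illustrative notation introduced for this summary, not the paper's:

```latex
\mathbb{E}\,\bigl\|\hat{v} - v^{\star}\bigr\|^{2}
\;\lesssim\;
\underbrace{\varepsilon_{\mathrm{approx}}}_{\text{network class}}
\;+\;
\underbrace{\varepsilon_{\mathrm{stat}}(n)}_{\text{finite sample size }n}
\;+\;
\underbrace{\varepsilon_{\mathrm{opt}}(T)}_{\text{finite training steps }T}
```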
arXiv Detail & Related papers (2025-12-01T05:14:25Z) - DAISI: Data Assimilation with Inverse Sampling using Stochastic Interpolants [12.587156528707796]
We introduce DAISI, a scalable filtering algorithm built on flow-based generative models. We show that DAISI achieves accurate filtering results in regimes with sparse, noisy, and nonlinear observations.
arXiv Detail & Related papers (2025-11-29T00:02:45Z) - The Coverage Principle: How Pre-Training Enables Post-Training [70.25788947586297]
We study how pre-training shapes the success of the final model. We uncover a mechanism that explains the power of coverage in predicting downstream performance.
arXiv Detail & Related papers (2025-10-16T17:53:50Z) - Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z) - Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation [21.321570407292263]
We propose Physics-Based Flow Matching, a generative framework that embeds physical constraints, both PDE residuals and algebraic relations, into the flow matching objective. We show that our approach yields up to $8\times$ more accurate physical residuals compared to FM, while clearly outperforming existing algorithms in terms of distributional accuracy.
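The idea of embedding a PDE residual into the flow-matching objective can be sketched as a penalized loss. This is a hypothetical illustration of the general approach; the function names and weighting scheme are assumptions, not the paper's API.

```python
import numpy as np

def pbfm_loss(v_pred, v_target, x1_pred, pde_residual, weight=1.0):
    """Flow-matching regression loss augmented with a PDE residual penalty
    (schematic only; names and weighting are illustrative).

    v_pred, v_target: predicted and conditional target velocities.
    x1_pred: the model's predicted terminal (clean) sample.
    pde_residual: callable returning the PDE residual of a state.
    """
    fm = np.mean((v_pred - v_target) ** 2)       # standard FM objective
    phys = np.mean(pde_residual(x1_pred) ** 2)   # physics penalty
    return fm + weight * phys
```

In practice the residual term is evaluated on the model's terminal prediction so that the penalty shapes generated samples rather than intermediate noisy states.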
arXiv Detail & Related papers (2025-06-10T09:13:37Z) - FlowDAS: A Stochastic Interpolant-based Framework for Data Assimilation [15.64941169350615]
Data assimilation (DA) integrates observations with a dynamical model to estimate states of PDE-governed systems. FlowDAS is a generative DA framework that uses interpolants to learn state transition dynamics. We show that FlowDAS surpasses model-driven methods, neural operators, and score-based baselines in accuracy and physical plausibility.
arXiv Detail & Related papers (2025-01-13T05:03:41Z) - Physics-guided Active Sample Reweighting for Urban Flow Prediction [75.24539704456791]
Urban flow prediction is a spatio-temporal modeling task that estimates the throughput of transportation services like buses, taxis, and ride-sharing vehicles.
Some recent prediction solutions bring remedies with the notion of physics-guided machine learning (PGML).
We develop a physics-guided network (PN), and propose a data-aware framework, Physics-guided Active Sample Reweighting (P-GASR).
arXiv Detail & Related papers (2024-07-18T15:44:23Z) - Benchmarking Autoregressive Conditional Diffusion Models for Turbulent Flow Simulation [26.520247496906492]
In this work, we analyze whether fully data-driven fluid solvers that utilize an autoregressive rollout based on conditional diffusion models are a viable option to address this challenge. To quantitatively and qualitatively benchmark the performance of various flow prediction approaches, three challenging 2D scenarios including incompressible and transonic flows, as well as isotropic turbulence, are employed. We find that even simple diffusion-based approaches can outperform multiple established flow prediction methods in terms of accuracy and temporal stability, while being on par with state-of-the-art stabilization techniques like unrolling at training time.
arXiv Detail & Related papers (2023-09-04T18:01:42Z) - Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
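Solving for the flow as the fixed point of an implicit layer can be illustrated with a simple iteration. This is a schematic sketch only: real DEQ flow estimators use a learned update cell and implicit differentiation, and the names below are assumptions for this example.

```python
import numpy as np

def deq_fixed_point(f, x, z0, tol=1e-6, max_iter=1000):
    """Solve z = f(z, x) by naive fixed-point iteration (illustrative;
    DEQ models typically use accelerated solvers such as Anderson or
    Broyden iteration).

    f: update function, e.g. a recurrent flow-refinement cell.
    x: fixed input (e.g. image features); z0: initial flow estimate.
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next          # converged to the equilibrium
        z = z_next
    return z                       # best estimate after max_iter steps
```

The contrast with finite-step recurrent updates is that the output is defined by the equilibrium itself, not by a fixed number of unrolled iterations, so memory cost does not grow with solver depth.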
arXiv Detail & Related papers (2022-04-18T17:53:44Z) - Composing Normalizing Flows for Inverse Problems [89.06155049265641]
We propose a framework for approximate inference that estimates the target conditional as a composition of two flow models.
Our method is evaluated on a variety of inverse problems and is shown to produce high-quality samples with uncertainty.
arXiv Detail & Related papers (2020-02-26T19:01:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.