OneFlowSBI: One Model, Many Queries for Simulation-Based Inference
- URL: http://arxiv.org/abs/2601.22951v1
- Date: Fri, 30 Jan 2026 13:14:44 GMT
- Title: OneFlowSBI: One Model, Many Queries for Simulation-Based Inference
- Authors: Mayank Nautiyal, Li Ju, Melker Ernfors, Klara Hagland, Ville Holma, Maximilian Werkö Söderholm, Andreas Hellander, Prashant Singh
- Abstract summary: OneFlowSBI is a unified framework for simulation-based inference. It learns a single flow-matching generative model over the joint distribution of parameters and observations. It supports multiple inference tasks, including posterior sampling, likelihood estimation, and arbitrary conditional distributions.
- Score: 2.614875980890442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce OneFlowSBI, a unified framework for simulation-based inference that learns a single flow-matching generative model over the joint distribution of parameters and observations. Leveraging a query-aware masking distribution during training, the same model supports multiple inference tasks, including posterior sampling, likelihood estimation, and arbitrary conditional distributions, without task-specific retraining. We evaluate OneFlowSBI on ten benchmark inference problems and two high-dimensional real-world inverse problems across multiple simulation budgets. OneFlowSBI is shown to deliver competitive performance against state-of-the-art generalized inference solvers and specialized posterior estimators, while enabling efficient sampling with few ODE integration steps and remaining robust under noisy and partially observed data.
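The mechanism the abstract describes, a single velocity field over the joint (parameters, observations) with a query mask deciding which coordinates are conditioned on, can be sketched in a few lines. The sketch below is our reading under stated assumptions (network shape, mask distribution, and the linear interpolant are all guesses), not the paper's actual implementation:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: one velocity field over the joint z = (theta, x).
# A random mask m marks conditioned coordinates (kept fixed at their
# observed values); the flow-matching loss applies only to the free ones.
D_THETA, D_X = 3, 5
D = D_THETA + D_X

net = nn.Sequential(nn.Linear(2 * D + 1, 128), nn.ReLU(), nn.Linear(128, D))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def training_step(z):                      # z: batch of joint samples (theta, x)
    b = z.shape[0]
    m = (torch.rand(b, D) < 0.5).float()   # query mask: 1 = conditioned on
    t = torch.rand(b, 1)                   # flow time in [0, 1]
    z0 = torch.randn_like(z)               # noise endpoint of the linear path
    zt = (1 - t) * z0 + t * z              # interpolant z_t
    zt = m * z + (1 - m) * zt              # conditioned coords stay observed
    v = net(torch.cat([zt, m, t], dim=1))  # predicted velocity
    target = z - z0                        # straight-line target velocity
    loss = (((v - target) ** 2) * (1 - m)).mean()  # loss on free coords only
    loss.backward(); opt.step(); opt.zero_grad()
    return loss.item()
```

At query time the mask would encode the task: masking all of x yields posterior sampling over theta, masking theta yields likelihood simulation, and mixed masks yield arbitrary conditionals, each obtained by integrating the learned ODE dz/dt = v over the free coordinates only.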
Related papers
- UniT: Unified Multimodal Chain-of-Thought Test-time Scaling [85.590774707406]
Unified models can handle both multimodal understanding and generation within a single architecture, yet they typically operate in a single pass without iteratively refining their outputs. We introduce UniT, a framework for multimodal test-time scaling that enables a single unified model to reason, verify, and refine across multiple rounds.
arXiv Detail & Related papers (2026-02-12T18:59:49Z)
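The UniT summary describes a protocol rather than a model change: one unified model drafts, checks, and revises its own output over several rounds. A heavily hypothetical Python skeleton, with every callable a stand-in:

```python
# Hypothetical skeleton of multi-round test-time refinement.
# generate/verify/refine are stand-ins for calls into one unified model.
def generate(prompt):
    return f"draft({prompt})"

def verify(prompt, answer):       # self-check; returns (ok, critique)
    return len(answer) > 40, "answer lacks detail"

def refine(prompt, answer, critique):
    return answer + f" [revised to address: {critique}]"

def unit_loop(prompt, max_rounds=3):
    answer = generate(prompt)
    for _ in range(max_rounds):
        ok, critique = verify(prompt, answer)
        if ok:                    # stop once the model accepts its own output
            break
        answer = refine(prompt, answer, critique)
    return answer

print(unit_loop("describe the image"))
```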
- Diffusion Models in Simulation-Based Inference: A Tutorial Review [9.572470603492077]
Diffusion models have emerged as powerful learners for simulation-based inference (SBI). In this tutorial review, we synthesize recent developments on diffusion models for SBI. We highlight opportunities created by various concepts such as guidance, score composition, flow matching, consistency models, and joint modeling.
arXiv Detail & Related papers (2025-12-22T15:10:35Z)
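Of the concepts the review highlights, score composition is easy to make concrete: the posterior score splits into prior and likelihood scores, so separately specified (or learned) score functions can simply be summed inside a Langevin sampler. A toy NumPy sketch with analytic Gaussian scores standing in for learned networks:

```python
import numpy as np

# Toy illustration of score composition: grad log p(theta|x) =
# grad log p(theta) + grad log p(x|theta). Both terms are analytic
# Gaussians here, standing in for learned score networks.
def prior_score(theta):                     # p(theta) = N(0, 1)
    return -theta

def likelihood_score(theta, x, sigma=0.5):  # p(x|theta) = N(theta, sigma^2)
    return (x - theta) / sigma**2

def langevin_posterior_sample(x, n_steps=5000, step=1e-3):
    theta = np.random.randn()
    for _ in range(n_steps):
        score = prior_score(theta) + likelihood_score(theta, x)  # composition
        theta += step * score + np.sqrt(2 * step) * np.random.randn()
    return theta

x_obs = 1.0
samples = np.array([langevin_posterior_sample(x_obs) for _ in range(200)])
# Analytic posterior is N(0.8 * x_obs, 0.2), i.e. mean 0.8, std ~0.447.
print(samples.mean(), samples.std())
```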
- Flow Matching for Robust Simulation-Based Inference under Model Misspecification [11.172752919335394]
Flow Matching Corrected Posterior Estimation is a framework that refines simulation-trained posterior estimators using a small set of real calibration samples. We show that our proposal consistently mitigates the effects of misspecification, delivering improved inference accuracy and uncertainty calibration compared to standard SBI baselines.
arXiv Detail & Related papers (2025-09-27T16:10:53Z)
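The summary does not spell out the refinement step; one plausible shape, offered purely as an assumption-laden sketch, is a small flow-matching corrector that transports draws from the simulation-trained posterior toward the real calibration samples:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small flow-matching "corrector" trained to
# transport samples from a misspecified posterior estimate (source)
# toward a handful of real calibration samples (target).
D = 2
v_net = nn.Sequential(nn.Linear(D + 1, 64), nn.ReLU(), nn.Linear(64, D))
opt = torch.optim.Adam(v_net.parameters(), lr=1e-3)

def correction_step(theta_sim, theta_cal):
    # theta_sim: draws from the simulation-trained posterior
    # theta_cal: the small real calibration set (resampled with replacement)
    idx = torch.randint(len(theta_cal), (len(theta_sim),))
    src, tgt = theta_sim, theta_cal[idx]
    t = torch.rand(len(src), 1)
    zt = (1 - t) * src + t * tgt                  # linear interpolant
    v = v_net(torch.cat([zt, t], dim=1))
    loss = ((v - (tgt - src)) ** 2).mean()        # flow-matching loss
    loss.backward(); opt.step(); opt.zero_grad()
    return loss.item()
```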
- Learning Discrete Bayesian Networks with Hierarchical Dirichlet Shrinkage [52.914168158222765]
We detail a comprehensive Bayesian framework for learning discrete Bayesian networks (DBNs). We give a novel Markov chain Monte Carlo (MCMC) algorithm utilizing parallel Langevin proposals to generate exact posterior samples. We apply our methodology to uncover prognostic network structure from primary breast cancer samples.
arXiv Detail & Related papers (2025-09-16T17:24:35Z)
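For readers unfamiliar with Langevin proposals inside MCMC, here is a generic Metropolis-adjusted Langevin (MALA) kernel run over several chains at once; the paper's algorithm for DBN structure learning is certainly more specialized than this illustration:

```python
import numpy as np

# Generic Metropolis-adjusted Langevin (MALA) kernel, vectorized over
# parallel chains. The target here is a stand-in standard Gaussian.
rng = np.random.default_rng(0)

def log_post(x):
    return -0.5 * np.sum(x**2, axis=-1)

def grad_log_post(x):
    return -x

def mala_step(x, eps=0.1):
    prop = x + 0.5 * eps * grad_log_post(x) \
             + np.sqrt(eps) * rng.standard_normal(x.shape)
    def log_q(a, b):              # log density of proposing a from b
        diff = a - b - 0.5 * eps * grad_log_post(b)
        return -np.sum(diff**2, axis=-1) / (2 * eps)
    log_alpha = log_post(prop) + log_q(x, prop) - log_post(x) - log_q(prop, x)
    accept = np.log(rng.random(len(x))) < log_alpha
    return np.where(accept[:, None], prop, x)  # accept/reject per chain

chains = np.zeros((8, 3))         # 8 parallel chains in 3 dimensions
for _ in range(1000):
    chains = mala_step(chains)
```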
- ConDiSim: Conditional Diffusion Models for Simulation Based Inference [2.1493648495606354]
ConDiSim is a conditional diffusion model for simulation-based inference of complex systems with intractable likelihoods. It is evaluated across ten benchmark problems and two real-world test problems, where it demonstrates effective posterior approximation accuracy.
arXiv Detail & Related papers (2025-05-13T09:58:23Z)
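A conditional diffusion model for SBI typically reduces to the standard noise-prediction objective with the observation concatenated as conditioning input. A hedged sketch of that baseline objective (architecture and schedule are assumptions, not ConDiSim's specifics):

```python
import torch
import torch.nn as nn

# Standard conditional DDPM objective: predict the noise added to the
# parameters theta_t, conditioning the network on the observation x.
D_THETA, D_X, T = 3, 5, 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1 - betas, dim=0)

eps_net = nn.Sequential(nn.Linear(D_THETA + D_X + 1, 128), nn.ReLU(),
                        nn.Linear(128, D_THETA))
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-3)

def ddpm_step(theta, x):
    b = theta.shape[0]
    t = torch.randint(0, T, (b,))
    ab = alpha_bar[t].unsqueeze(1)
    eps = torch.randn_like(theta)
    theta_t = ab.sqrt() * theta + (1 - ab).sqrt() * eps   # forward noising
    t_in = (t.float() / T).unsqueeze(1)
    pred = eps_net(torch.cat([theta_t, x, t_in], dim=1))
    loss = ((pred - eps) ** 2).mean()                     # noise prediction
    loss.backward(); opt.step(); opt.zero_grad()
    return loss.item()
```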
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective. The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning. The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Unified Convergence Analysis for Score-Based Diffusion Models with Deterministic Samplers [49.1574468325115]
We introduce a unified convergence analysis framework for deterministic samplers.
Our framework achieves an iteration complexity of $\tilde{O}(d^2/\epsilon)$.
We also provide a detailed analysis of Denoising Diffusion Implicit Model (DDIM)-type samplers.
arXiv Detail & Related papers (2024-10-18T07:37:36Z)
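For reference, the deterministic (eta = 0) DDIM update that such analyses study, written in the community's usual notation rather than the paper's:

```latex
% Deterministic DDIM update (eta = 0); \bar\alpha_t is the cumulative
% noise schedule and \epsilon_\theta the learned noise predictor.
x_{t-1}
  = \sqrt{\bar\alpha_{t-1}}\,
    \underbrace{\frac{x_t - \sqrt{1-\bar\alpha_t}\,\epsilon_\theta(x_t,t)}
                     {\sqrt{\bar\alpha_t}}}_{\hat{x}_0\ \text{prediction}}
  + \sqrt{1-\bar\alpha_{t-1}}\,\epsilon_\theta(x_t,t)
```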
- Amortized Posterior Sampling with Diffusion Prior Distillation [55.03585818289934]
Amortized Posterior Sampling is a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains.
arXiv Detail & Related papers (2024-07-25T09:53:12Z)
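The summary's objective, minimizing a divergence between an amortized conditional sampler and a target posterior, can be illustrated with generic amortized variational inference. Here a tractable log-density stands in for the posterior implicitly defined by a diffusion prior, and a single conditional affine layer stands in for a full flow:

```python
import torch
import torch.nn as nn

# Generic amortized VI sketch: a conditional sampler q_phi(x|y) with
# tractable log-density is trained to minimize KL(q || p) against a
# stand-in target log-density (the paper uses a diffusion-defined one).
D_X, D_Y = 2, 2

class CondAffine(nn.Module):      # one affine layer; real flows stack many
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D_Y, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * D_X))
    def sample_and_logq(self, y):
        mu, log_s = self.net(y).chunk(2, dim=1)
        z = torch.randn_like(mu)
        x = mu + log_s.exp() * z  # reparameterized sample
        logq = (-0.5 * z**2 - log_s
                - 0.5 * torch.log(torch.tensor(2 * torch.pi))).sum(1)
        return x, logq

def target_logp(x, y):            # stand-in for the diffusion posterior
    return -0.5 * ((x - y) ** 2).sum(1)

flow = CondAffine()
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
y = torch.randn(64, D_Y)          # unpaired measurements only
x, logq = flow.sample_and_logq(y)
loss = (logq - target_logp(x, y)).mean()  # Monte Carlo KL(q || p)
loss.backward(); opt.step(); opt.zero_grad()
```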
- Sample Complexity Characterization for Linear Contextual MDPs [67.79455646673762]
Contextual Markov decision processes (CMDPs) describe a class of reinforcement learning problems in which the transition kernels and reward functions can change over time with different MDPs indexed by a context variable.
CMDPs serve as an important framework to model many real-world applications with time-varying environments.
We study CMDPs under two linear function approximation models: Model I with context-varying representations and common linear weights for all contexts; and Model II with common representations for all contexts and context-varying linear weights.
arXiv Detail & Related papers (2024-02-05T03:25:04Z)
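Written out, the two approximation models from the summary are (our notation, not necessarily the paper's):

```latex
% Model I: context-varying features \phi_c, weights w shared across contexts
Q^{(c)}(s,a) \;\approx\; \phi_c(s,a)^{\top} w
% Model II: shared features \phi, context-varying weights w_c
Q^{(c)}(s,a) \;\approx\; \phi(s,a)^{\top} w_c
```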
- Consistency Models for Scalable and Fast Simulation-Based Inference [9.27488642055461]
We present consistency models for posterior estimation (CMPE), a new conditional sampler for simulation-based inference (SBI).
CMPE essentially distills a continuous probability flow and enables rapid few-shot inference with an unconstrained architecture.
Our empirical evaluation demonstrates that CMPE not only outperforms current state-of-the-art algorithms on hard low-dimensional benchmarks, but also achieves competitive performance with much faster sampling speed.
arXiv Detail & Related papers (2023-12-09T02:14:12Z)
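The fast-sampling claim rests on how consistency models sample: the network jumps from any noise level straight to an estimate at t = 0, and multistep sampling simply re-noises and jumps again. A hedged sketch of that sampler (the network here is untrained, so outputs are only shape-correct):

```python
import torch
import torch.nn as nn

# Few-step consistency sampling with a conditional consistency model
# f(theta_t, t, x) -> estimate of theta_0. Architecture is a stand-in.
D_THETA, D_X = 3, 5
f = nn.Sequential(nn.Linear(D_THETA + D_X + 1, 128), nn.ReLU(),
                  nn.Linear(128, D_THETA))

@torch.no_grad()
def sample(x, sigmas=(10.0, 1.0, 0.1)):   # few-shot: 3 network evaluations
    theta = sigmas[0] * torch.randn(x.shape[0], D_THETA)
    for i, s in enumerate(sigmas):
        t_in = torch.full((x.shape[0], 1), s)
        theta0 = f(torch.cat([theta, x, t_in], dim=1))  # jump to t = 0
        if i + 1 < len(sigmas):           # re-noise to the next, smaller level
            theta = theta0 + sigmas[i + 1] * torch.randn_like(theta0)
        else:
            theta = theta0
    return theta

x_obs = torch.randn(16, D_X)
print(sample(x_obs).shape)                # torch.Size([16, 3])
```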
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
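The likelihood-to-evidence ratio trick the summary refers to is standard and concrete: a classifier trained to distinguish dependent pairs (theta, x) from shuffled pairs recovers log p(x|theta)/p(x) in its logit, and its objective connects directly to the mutual information between parameters and data. A minimal sketch:

```python
import torch
import torch.nn as nn

# Classic likelihood-to-evidence ratio estimation: a binary classifier
# separating joint pairs (theta, x) ~ p(theta, x) from shuffled pairs
# (theta, x) ~ p(theta)p(x). Its logit converges to log p(x|theta)/p(x).
D_THETA, D_X = 2, 4
clf = nn.Sequential(nn.Linear(D_THETA + D_X, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def ratio_step(theta, x):
    joint = torch.cat([theta, x], dim=1)                  # dependent pairs
    marg = torch.cat([theta[torch.randperm(len(theta))], x], dim=1)
    logits = clf(torch.cat([joint, marg], dim=0)).squeeze(1)
    labels = torch.cat([torch.ones(len(theta)), torch.zeros(len(theta))])
    loss = bce(logits, labels)    # density-ratio objective, tied to the MI
    loss.backward(); opt.step(); opt.zero_grad()
    return loss.item()

# After training, clf(cat([theta, x])) approximates log r(x|theta), so
# log posterior = log prior + logit (up to a constant) for amortized use.
```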
- Sequential Likelihood-Free Inference with Implicit Surrogate Proposal [24.20924279100816]
This paper introduces the Implicit Surrogate Proposal (ISP) to generate a cumulated dataset with improved sample efficiency.
ISP constructs the cumulative dataset in a maximally diverse way by drawing i.i.d. samples in a feed-forward fashion.
We demonstrate that ISP outperforms the baseline inference algorithms on simulations with multi-modal posteriors.
arXiv Detail & Related papers (2020-10-15T08:59:23Z)
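Our hedged reading of "drawing i.i.d. samples in a feed-forward fashion": an implicit proposal network maps noise to parameter proposals in one forward pass per round, and the resulting simulations are accumulated across rounds. Everything below is an assumption-laden skeleton:

```python
import torch
import torch.nn as nn

# Hypothetical skeleton: a feed-forward implicit proposal generates
# i.i.d. parameter draws in one pass; simulations are cumulated by round.
D_THETA, D_Z = 2, 8
proposal = nn.Sequential(nn.Linear(D_Z, 64), nn.ReLU(),
                         nn.Linear(64, D_THETA))

def simulator(theta):             # stand-in simulator
    return theta + 0.1 * torch.randn_like(theta)

dataset = []
for round_ in range(3):           # sequential rounds
    with torch.no_grad():
        theta = proposal(torch.randn(256, D_Z))   # i.i.d. feed-forward draws
    dataset.append((theta, simulator(theta)))     # cumulate the dataset
    # ... here one would refit the posterior on the cumulated dataset and
    # update `proposal` toward it (procedure omitted; paper-specific).
```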
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose a finite mixture regression (FMR) model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
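As a point of reference for the model class, here is generic EM for a two-component mixture of linear regressions, a much-simplified stand-in for the paper's FMR (which additionally handles incomplete, mixed-type targets):

```python
import numpy as np

# Generic EM for a two-component mixture of linear regressions.
rng = np.random.default_rng(0)
n, d, K = 500, 3, 2
X = rng.standard_normal((n, d))
true_W = rng.standard_normal((K, d))
z = rng.integers(K, size=n)
y = np.einsum('nd,nd->n', X, true_W[z]) + 0.1 * rng.standard_normal(n)

W = rng.standard_normal((K, d))
pi, sigma2 = np.full(K, 1 / K), np.ones(K)
for _ in range(50):
    # E-step: responsibilities of each component for each sample
    resid = y[None, :] - W @ X.T                          # shape (K, n)
    logp = np.log(pi)[:, None] - 0.5 * resid**2 / sigma2[:, None] \
           - 0.5 * np.log(2 * np.pi * sigma2)[:, None]
    r = np.exp(logp - logp.max(0)); r /= r.sum(0)
    # M-step: weighted least squares per component
    for k in range(K):
        Xw = X * r[k, :, None]
        W[k] = np.linalg.solve(Xw.T @ X + 1e-6 * np.eye(d), Xw.T @ y)
        sigma2[k] = (r[k] * (y - X @ W[k])**2).sum() / r[k].sum()
    pi = r.mean(1)
```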