Cost-aware simulation-based inference
- URL: http://arxiv.org/abs/2410.07930v2
- Date: Mon, 17 Feb 2025 13:45:03 GMT
- Title: Cost-aware simulation-based inference
- Authors: Ayush Bharti, Daolang Huang, Samuel Kaski, François-Xavier Briol
- Abstract summary: We propose cost-aware SBI methods, which can significantly reduce the cost of existing sampling-based SBI methods. Our approach is studied extensively on models from epidemiology to telecommunications engineering, where we obtain significant reductions in the overall cost of inference.
- Score: 21.669083918105976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simulation-based inference (SBI) is the preferred framework for estimating parameters of intractable models in science and engineering. A significant challenge in this context is the large computational cost of simulating data from complex models, and the fact that this cost often depends on parameter values. We therefore propose \textit{cost-aware SBI methods} which can significantly reduce the cost of existing sampling-based SBI methods, such as neural SBI and approximate Bayesian computation. This is achieved through a combination of rejection and self-normalised importance sampling, which significantly reduces the number of expensive simulations needed. Our approach is studied extensively on models from epidemiology to telecommunications engineering, where we obtain significant reductions in the overall cost of inference.
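The following is a minimal, illustrative sketch of the general recipe described in the abstract: thin out expensive parameter values with a cost-dependent rejection step before running the simulator, then correct the induced bias with self-normalised importance weights. The toy Gaussian simulator, the assumed cost model, the uniform prior, and the ABC tolerance are all hypothetical choices for illustration, not the authors' implementation.

```python
# Minimal sketch of the cost-aware idea: reject expensive parameter values
# before simulating, then undo the bias with self-normalised importance
# weights. Simulator, cost model and tolerance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    # Toy model: n draws from N(theta, 1). In a real application the
    # runtime of this call would depend on theta.
    return rng.normal(theta, 1.0, size=n)

def cost(theta):
    # Assumed (hypothetical) per-simulation cost, e.g. runtime growing with theta.
    return 1.0 + np.abs(theta)

x_obs = simulator(2.0)                    # synthetic "observed" data
theta_prop = rng.uniform(-5, 5, size=20_000)   # draws from a uniform prior
eps = 0.2                                 # ABC tolerance on the mean statistic

# Cost-aware rejection: keep cheap parameters with high probability.
keep_prob = cost(theta_prop).min() / cost(theta_prop)
kept = theta_prop[rng.uniform(size=theta_prop.size) < keep_prob]

# Run the expensive simulator only for the kept parameters (ABC step).
dist = np.array([abs(simulator(t).mean() - x_obs.mean()) for t in kept])
accepted = kept[dist < eps]

# Self-normalised importance weights undo the cost-based thinning:
# kept samples are distributed proportionally to prior(theta) / cost(theta),
# so weighting by cost(theta) recovers the usual ABC posterior.
w = cost(accepted)
w /= w.sum()
print("posterior mean estimate:", np.sum(w * accepted))
```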
Related papers
- Truth in the Few: High-Value Data Selection for Efficient Multi-Modal Reasoning [71.3533541927459]
We propose a novel data selection paradigm termed Reasoning Activation Potential (RAP). RAP identifies cognitive samples by estimating each sample's potential to stimulate genuine multi-modal reasoning. Our RAP method consistently achieves superior performance using only 9.3% of the training data, while reducing computational costs by over 43%.
arXiv Detail & Related papers (2025-06-05T08:40:24Z) - Transfer learning for multifidelity simulation-based inference in cosmology [0.0]
Pre-training on dark-matter-only $N$-body simulations reduces the required number of high-fidelity hydrodynamical simulations by a factor between $8$ and $15$. By leveraging cheaper simulations, our approach enables performant and accurate inference on high-fidelity models while substantially reducing computational costs.
arXiv Detail & Related papers (2025-05-27T14:04:30Z) - Modèles de Substitution pour les Modèles à base d'Agents : Enjeux, Méthodes et Applications (Surrogate Models for Agent-Based Models: Challenges, Methods, and Applications) [0.0]
Agent-based models (ABMs) are widely used to study emergent phenomena arising from local interactions. The complexity of ABMs limits their feasibility for real-time decision-making and large-scale scenario analysis. To address these limitations, surrogate models offer an efficient alternative by learning approximations from sparse simulation data.
arXiv Detail & Related papers (2025-05-17T08:55:33Z) - Effortless, Simulation-Efficient Bayesian Inference using Tabular Foundation Models [5.952993835541411]
We show how TabPFN can be used as a pre-trained autoregressive conditional density estimator for simulation-based inference.
NPE-PF eliminates the need for inference network selection, training, and hyperparameter tuning.
It exhibits superior robustness to model misspecification and can be scaled to simulation budgets that exceed the context size limit of TabPFN.
arXiv Detail & Related papers (2025-04-24T15:29:39Z) - Parallel simulation for sampling under isoperimetry and score-based diffusion models [56.39904484784127]
As data size grows, reducing the iteration cost becomes an important goal.
Inspired by the success of the parallel simulation of the initial value problem in scientific computation, we propose parallel Picard methods for sampling tasks.
Our work highlights the potential advantages of simulation methods in scientific computation for dynamics-based sampling and diffusion models.
arXiv Detail & Related papers (2024-12-10T11:50:46Z) - Active Sequential Posterior Estimation for Sample-Efficient Simulation-Based Inference [12.019504660711231]
We introduce active sequential neural posterior estimation (ASNPE).
ASNPE brings an active learning scheme into the inference loop to estimate the utility of candidate simulation parameters for the underlying probabilistic model.
Our method outperforms well-tuned benchmarks and state-of-the-art posterior estimation methods on a large-scale real-world traffic network.
arXiv Detail & Related papers (2024-12-07T08:57:26Z) - Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain privacy-sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - A Comprehensive Guide to Simulation-based Inference in Computational Biology [5.333122501732079]
This paper provides comprehensive guidelines for deciding between SBI approaches for complex biological models.
We apply the guidelines to two agent-based models that describe cellular dynamics using real-world data.
Our study unveils a critical insight: while neural SBI methods require significantly fewer simulations for inference, they tend to yield biased estimates.
arXiv Detail & Related papers (2024-09-29T12:04:03Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - How Much Data are Enough? Investigating Dataset Requirements for Patch-Based Brain MRI Segmentation Tasks [74.21484375019334]
Training deep neural networks reliably requires access to large-scale datasets.
To mitigate both the time and financial costs associated with model development, a clear understanding of the amount of data required to train a satisfactory model is crucial.
This paper proposes a strategic framework for estimating the amount of annotated data required to train patch-based segmentation networks.
arXiv Detail & Related papers (2024-04-04T13:55:06Z) - Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data, and then a minimal number of available labeled data points are assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z) - Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation [11.375835331641548]
We train a neural network to approximate the cost function, which we define as the expected distance between simulations produced by a parameter and observed data (amortized cost estimation, ACE).
We show that, on several benchmark tasks, ACE accurately predicts cost and provides predictive simulations that are closer to synthetic observations than other SBI methods.
arXiv Detail & Related papers (2023-05-24T14:45:03Z) - Optimally-Weighted Estimators of the Maximum Mean Discrepancy for Likelihood-Free Inference [12.157511906467146]
Likelihood-free inference methods typically make use of a distance between simulated and real data.
The maximum mean discrepancy (MMD) is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples.
We propose a novel estimator for the MMD with significantly improved sample complexity.
arXiv Detail & Related papers (2023-01-27T12:13:54Z) - Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z) - Black-box Bayesian inference for economic agent-based models [0.0]
We investigate the efficacy of two classes of black-box approximate Bayesian inference methods.
We demonstrate that neural-network-based black-box methods provide state-of-the-art parameter inference for economic simulation models.
arXiv Detail & Related papers (2022-02-01T18:16:12Z) - Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z) - Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z) - SBI -- A toolkit for simulation-based inference [0.0]
Simulation-based inference (SBI) seeks to identify parameter sets that a) are compatible with prior knowledge and b) match empirical observations.
We present sbi, a PyTorch-based package that implements SBI algorithms based on neural networks (a brief usage sketch appears after this list).
arXiv Detail & Related papers (2020-07-17T16:53:51Z) - Sample-Efficient Reinforcement Learning of Undercomplete POMDPs [91.40308354344505]
This work shows that these hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of partially observable Markov decision processes (POMDPs).
We present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works.
arXiv Detail & Related papers (2020-06-22T17:58:54Z)
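As referenced in the sbi toolkit entry above, the following is a minimal usage sketch of the PyTorch-based sbi package. It is based on the package's documented getting-started pattern, but the Gaussian toy simulator, prior bounds, and simulation budget are illustrative assumptions, and exact import paths may differ between package versions.

```python
# Minimal usage sketch for the PyTorch-based `sbi` package (toy example).
import torch
from sbi import utils
from sbi.inference import infer

# Uniform prior over three parameters.
prior = utils.BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Toy simulator: observation = parameters + Gaussian noise.
    return theta + 0.1 * torch.randn_like(theta)

# Train a neural posterior estimator from 1000 simulations.
posterior = infer(simulator, prior, method="SNPE", num_simulations=1000)

# Sample from the approximate posterior given an observation.
x_o = torch.zeros(3)
samples = posterior.sample((1000,), x=x_o)
print(samples.mean(dim=0))
```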
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all linked content) and is not responsible for any consequences arising from its use.