Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples
- URL: http://arxiv.org/abs/2302.04945v1
- Date: Thu, 9 Feb 2023 21:28:15 GMT
- Title: Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples
- Authors: Danial Khatamsaz, Vahid Attari, Raymundo Arroyave, and Douglas L.
Allaire
- Abstract summary: Uncertainty propagation is a technique to determine model output uncertainties based on the uncertainty in its input variables.
In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others.
We introduce a methodology to adaptively reorder MC samples and show how it reduces the computational expense of UP processes.
- Score: 0.7087237546722617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty analysis in the outcomes of model predictions is a key
element of decision-based material design, used to establish confidence in the
models and evaluate their fidelity. Uncertainty Propagation (UP) is a
technique to determine a model's output uncertainties based on the uncertainty
in its input variables. The simplest and most common approach to propagating
uncertainty from a model's inputs to its outputs is to feed a large number of
samples to the model, known as Monte Carlo (MC) simulation, which requires
exhaustive sampling from the input variable distributions. However, MC
simulations are
impractical when models are computationally expensive. In this work, we
investigate the hypothesis that while all samples are useful on average, some
samples must be more useful than others. Thus, reordering MC samples so that
the more useful ones are propagated first can lead to earlier convergence of
the statistics of interest and hence a lower computational burden for the UP
process. Here, we introduce a methodology to adaptively reorder MC samples and
show how it reduces the computational expense of UP processes.
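A minimal sketch of the idea, under assumptions of our own (a cheap stand-in model, a known input mean, and a simple greedy mean-matching rule rather than the authors' exact reordering criterion): pre-drawn MC samples are reordered using only cheap input-side information, then propagated in the new order with early stopping once the output statistic stabilizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # Hypothetical stand-in for a costly simulation.
    return np.sin(x) + 0.1 * x**2

# Pre-draw a pool of MC samples from the input distribution.
input_mean = 1.0
pool = rng.normal(loc=input_mean, scale=0.5, size=2000)

# Greedy reordering heuristic (our assumption, not the paper's exact
# criterion): pick next the unused sample that keeps the running input
# mean closest to the known input mean, so low-order statistics of the
# propagated subset stabilize early. Reordering touches only inputs,
# which are cheap to evaluate.
order, used, running_sum = [], np.zeros(pool.size, dtype=bool), 0.0
for k in range(pool.size):
    cand = np.flatnonzero(~used)
    new_means = (running_sum + pool[cand]) / (k + 1)
    pick = cand[np.argmin(np.abs(new_means - input_mean))]
    used[pick] = True
    running_sum += pool[pick]
    order.append(pick)

# Propagate in the new order, tracking the running output mean, and
# stop once it has stabilized -- each skipped sample saves a model run.
outputs, est = [], []
for i in order:
    outputs.append(expensive_model(pool[i]))
    est.append(np.mean(outputs))
    if len(est) >= 50 and abs(est[-1] - est[-50]) < 1e-4:
        break

print(f"stopped after {len(outputs)} of {pool.size} model runs, "
      f"output mean {est[-1]:.4f}")
```

The point of the sketch is that reordering costs only input-side arithmetic, while every avoided call to the expensive model is a real saving.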
Related papers
- Informed Correctors for Discrete Diffusion Models [32.87362154118195]
We propose a family of informed correctors that more reliably counteracts discretization error by leveraging information learned by the model.
We also propose $k$-Gillespie's, a sampling algorithm that better utilizes each model evaluation, while still enjoying the speed and flexibility of $\tau$-leaping.
Across several real and synthetic datasets, we show that $k$-Gillespie's with informed correctors reliably produces higher quality samples at lower computational cost.
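For context, $k$-Gillespie's builds on the classical Gillespie stochastic simulation algorithm; a minimal sketch of the classical algorithm on a toy birth-death process (our own example, not the paper's discrete-diffusion setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical Gillespie SSA on a birth-death chain: exact event-driven
# sampling of a continuous-time jump process.
def gillespie(x0, birth, death, t_max):
    t, x, traj = 0.0, x0, [(0.0, x0)]
    while t < t_max:
        rates = np.array([birth, death * x])    # event propensities
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)       # waiting time to next event
        event = rng.choice(2, p=rates / total)  # which event fires
        x += 1 if event == 0 else -1
        traj.append((t, x))
    return traj

traj = gillespie(x0=10, birth=2.0, death=0.1, t_max=50.0)
print(f"{len(traj)} events, final state x = {traj[-1][1]}")
```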
arXiv Detail & Related papers (2024-07-30T23:29:29Z)
- Provable Statistical Rates for Consistency Diffusion Models [87.28777947976573]
Despite the state-of-the-art performance, diffusion models are known for their slow sample generation due to the extensive number of steps involved.
This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem.
arXiv Detail & Related papers (2024-06-23T20:34:18Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Robust Output Analysis with Monte-Carlo Methodology [0.0]
In predictive modeling with simulation or machine learning, it is critical to accurately assess the quality of estimated values.
We propose a unified output analysis framework for simulation and machine learning outputs through the lens of Monte Carlo sampling.
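A minimal example of Monte Carlo output analysis in this spirit (a generic CLT-based error bound on a simulated metric, not the paper's unified framework; `model_output` is a hypothetical stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

def model_output(n):
    # Hypothetical stand-in for simulation or ML prediction outputs.
    return rng.lognormal(mean=0.0, sigma=0.5, size=n)

y = model_output(10_000)
est = y.mean()                        # point estimate of the metric
se = y.std(ddof=1) / np.sqrt(y.size)  # Monte Carlo standard error
print(f"estimate = {est:.4f} +/- {1.96 * se:.4f} (95% CI)")
```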
arXiv Detail & Related papers (2022-07-27T16:21:59Z)
- Correcting Model Bias with Sparse Implicit Processes [0.9187159782788579]
We show that Sparse Implicit Processes (SIP) is capable of correcting model bias when the data generating mechanism differs strongly from the one implied by the model.
We use synthetic datasets to show that SIP is capable of providing predictive distributions that reflect the data better than the exact predictions of the initial, but wrongly assumed model.
arXiv Detail & Related papers (2022-07-21T18:00:01Z)
- Convergence for score-based generative modeling with polynomial complexity [9.953088581242845]
We prove the first convergence guarantees for the core mechanic behind Score-based generative modeling.
Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality.
We show that a predictor-corrector gives better convergence than using either portion alone.
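The predictor-corrector scheme can be illustrated on a toy problem where the score is known in closed form: a standard normal target under a variance-preserving forward SDE has score exactly -x at every time (an assumption for illustration; real models learn this function):

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    # Exact score of N(0, 1); a learned network in practice.
    return -x

beta, T, n_steps, n_corr, eps = 1.0, 5.0, 500, 2, 0.05
dt = T / n_steps

x = rng.normal(size=10_000)  # start from the Gaussian prior
for _ in range(n_steps):
    # Predictor: one reverse-time Euler-Maruyama step of the VP SDE.
    x += (0.5 * beta * x + beta * score(x)) * dt \
         + np.sqrt(beta * dt) * rng.normal(size=x.shape)
    # Corrector: a few Langevin MCMC steps driven by the same score.
    for _ in range(n_corr):
        x += eps * score(x) + np.sqrt(2 * eps) * rng.normal(size=x.shape)

print(f"sample mean {x.mean():.3f}, sample variance {x.var():.3f}")
```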
arXiv Detail & Related papers (2022-06-13T14:57:35Z)
- Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo sampling [58.14878401145309]
We develop a novel approach to producing more sample-efficient estimators of expectations in the PL model.
We illustrate our findings both theoretically and empirically using real-world recommendation data from Amazon Music and the Yahoo learning-to-rank challenge.
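A toy analogue of the idea (our own construction, not the paper's estimator): Plackett-Luce rankings can be sampled via the Gumbel-max trick, and the pseudo-random uniforms can be swapped for scrambled Sobol' quasi-Monte Carlo points:

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)

log_w = np.log(np.array([4.0, 2.0, 1.0, 0.5]))  # toy PL item scores
n = 2**12

def prob_item0_first(u):
    # Uniforms -> Gumbel noise -> PL ranking via the Gumbel-max trick.
    g = -np.log(-np.log(u))
    return (np.argmax(log_w + g, axis=1) == 0).mean()

u_mc = rng.random((n, log_w.size))                 # pseudo-random uniforms
u_qmc = qmc.Sobol(d=log_w.size, seed=0).random(n)  # scrambled Sobol' points

print("MC   :", prob_item0_first(u_mc))
print("QMC  :", prob_item0_first(u_qmc))
print("exact:", 4.0 / (4.0 + 2.0 + 1.0 + 0.5))
```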
arXiv Detail & Related papers (2022-05-12T11:15:47Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
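The two-step recipe itself is easy to illustrate; here a Gaussian KDE stands in for the paper's PSD models:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

data = rng.standard_normal(500) ** 2         # evaluations of the target
model = gaussian_kde(data)                   # step 1: model the density
samples = model.resample(10_000, seed=1)[0]  # step 2: sample the model
print(f"model sample mean {samples.mean():.3f} (target mean 1.0)")
```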
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
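A minimal sketch of the underlying idea, with REINFORCE on a toy bit-string task standing in for the paper's molecule and Python-expression tasks:

```python
import numpy as np

rng = np.random.default_rng(0)

target = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # hypothetical goal structure
logits = np.zeros(target.size)               # policy parameters

for step in range(500):
    p = 1.0 / (1.0 + np.exp(-logits))             # Bernoulli probabilities
    x = (rng.random((64, target.size)) < p).astype(float)
    reward = (x == target).sum(axis=1)            # reward per sampled string
    baseline = reward.mean()                      # variance-reduction baseline
    # REINFORCE: the score of independent Bernoullis is (x - p).
    grad = ((reward - baseline)[:, None] * (x - p)).mean(axis=0)
    logits += 0.5 * grad                          # ascend the expected reward

print("learned P(bit = 1):", np.round(1.0 / (1.0 + np.exp(-logits)), 2))
```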
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Quantifying the Uncertainty in Model Parameters Using Gaussian Process-Based Markov Chain Monte Carlo: An Application to Cardiac Electrophysiological Models [7.8316005711996235]
Estimates of patient-specific model parameters are important for personalized modeling.
Standard Markov Chain Monte Carlo sampling requires repeated model simulations that are computationally infeasible.
A common solution is to replace the simulation model with a computationally-efficient surrogate for a faster sampling.
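A generic sketch of this surrogate-accelerated sampling pattern (toy one-parameter posterior of our own, not the paper's cardiac models):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_log_post(theta):
    # Hypothetical stand-in for a costly simulator-based log-posterior.
    return -0.5 * ((theta - 2.0) / 0.3) ** 2

# Step 1: fit a GP surrogate on a small design of expensive evaluations.
design = np.linspace(0.0, 4.0, 25)[:, None]
gp = GaussianProcessRegressor().fit(design, expensive_log_post(design[:, 0]))

# Step 2: Metropolis-Hastings on the cheap GP mean, not the simulator.
theta, chain = 0.0, []
lp = gp.predict(np.array([[theta]]))[0]
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()
    if 0.0 <= prop <= 4.0:  # stay where the surrogate is trained
        lp_prop = gp.predict(np.array([[prop]]))[0]
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
    chain.append(theta)

print(f"surrogate posterior mean {np.mean(chain[1000:]):.3f} (true 2.0)")
```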
arXiv Detail & Related papers (2020-06-02T23:48:15Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
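For contrast, the baseline that decoupled sample paths accelerate is exact joint sampling at the test locations, which scales cubically in the number of test points (a sketch of that baseline, not of the paper's method):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

X = rng.uniform(0.0, 5.0, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)

gp = GaussianProcessRegressor(alpha=0.01).fit(X, y)
X_test = np.linspace(0.0, 5.0, 200)[:, None]

# Joint posterior draws via a Cholesky of the full test covariance;
# cost grows cubically with the number of test points.
paths = gp.sample_y(X_test, n_samples=3, random_state=0)
print(paths.shape)  # (200, 3): three posterior function draws
```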
arXiv Detail & Related papers (2020-02-21T14:03:16Z)