Coping With Simulators That Don't Always Return
- URL: http://arxiv.org/abs/2003.12908v1
- Date: Sat, 28 Mar 2020 23:05:10 GMT
- Title: Coping With Simulators That Don't Always Return
- Authors: Andrew Warrington, Saeid Naderiparizi, Frank Wood
- Abstract summary: We investigate inefficiencies that arise from adding process noise to deterministic simulators that fail to return for certain inputs.
We show how to train a conditional normalizing flow to propose perturbations such that the simulator succeeds with high probability.
- Score: 15.980496707498535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deterministic models are approximations of reality that are easy to interpret
and often easier to build than stochastic alternatives. Unfortunately, as
nature is capricious, observational data can never be fully explained by
deterministic models in practice. Observation and process noise need to be
added to adapt deterministic models to behave stochastically, such that they
are capable of explaining and extrapolating from noisy data. We investigate and
address computational inefficiencies that arise from adding process noise to
deterministic simulators that fail to return for certain inputs; a property we
describe as "brittle." We show how to train a conditional normalizing flow to
propose perturbations such that the simulator succeeds with high probability,
increasing computational efficiency.
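For readers who want the gist of the approach in code, the sketch below illustrates the idea under stated assumptions: a toy "brittle" simulator that sometimes fails to return, a naive rejection phase that collects perturbations for which it does return, and a learned conditional proposal fitted to those successes. The abstract describes a conditional normalizing flow; here a conditional diagonal Gaussian (effectively a single affine flow layer) stands in for it, and the simulator, network sizes, and training loop are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): learn a conditional proposal over process-noise
# perturbations so that a "brittle" deterministic simulator returns successfully
# with high probability. A conditional diagonal Gaussian stands in for the
# conditional normalizing flow described in the abstract.
import torch
import torch.nn as nn

def brittle_simulator(x, z):
    """Toy deterministic step x -> x + z that 'fails to return' (None) when the
    perturbed state leaves an arbitrary valid region. Stand-in for a real simulator."""
    x_next = x + z
    return x_next if x_next.abs().max() < 2.0 else None

class ConditionalGaussianProposal(nn.Module):
    """q(z | x): predicts mean and log-std of the perturbation given the state."""
    def __init__(self, dim=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * dim))

    def distribution(self, x):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

prior = torch.distributions.Normal(0.0, 1.0)   # original process-noise distribution p(z)
proposal = ConditionalGaussianProposal()
optimizer = torch.optim.Adam(proposal.parameters(), lr=1e-2)

# Phase 1: collect (state, perturbation) pairs for which the simulator returned,
# by naive rejection against the prior, then fit q(z | x) by maximum likelihood.
for step in range(500):
    x = torch.empty(256, 1).uniform_(-2.0, 2.0)           # states to condition on
    z = prior.sample(x.shape)                              # perturbations from p(z)
    accepted = torch.tensor([brittle_simulator(xi, zi) is not None
                             for xi, zi in zip(x, z)])
    if accepted.sum() == 0:
        continue
    log_q = proposal.distribution(x[accepted]).log_prob(z[accepted]).sum(-1)
    loss = -log_q.mean()                                   # maximise likelihood of accepted z
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2 (use): propose from q(z | x) instead of p(z); the simulator now succeeds far
# more often, and each sample carries an importance weight p(z) / q(z | x) so that
# downstream inference still targets the original perturbed model.
x = torch.full((1000, 1), 1.8)                             # state near the failure boundary
z_prior = prior.sample(x.shape)
naive_ok = torch.tensor([brittle_simulator(xi, zi) is not None for xi, zi in zip(x, z_prior)])
q = proposal.distribution(x)
z = q.sample()
learned_ok = torch.tensor([brittle_simulator(xi, zi) is not None for xi, zi in zip(x, z)])
log_w = prior.log_prob(z).sum(-1) - q.log_prob(z).sum(-1)  # importance weights
w = torch.softmax(log_w, dim=0)
ess = 1.0 / (w ** 2).sum()
print(f"success rate: prior {naive_ok.float().mean():.2f}, "
      f"learned proposal {learned_ok.float().mean():.2f}, ESS {ess:.0f}/1000")
```

Reweighting the accepted samples by p(z) / q(z | x) is what preserves the original perturbed model while wasting far fewer simulator calls; the paper's contribution is learning the richer conditional-flow proposal that this toy Gaussian merely gestures at.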
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Differentiable Calibration of Inexact Stochastic Simulation Models via Kernel Score Minimization [11.955062839855334]
We propose to learn differentiable input parameters of simulation models using output-level data via kernel score minimization with gradient descent.
We quantify the uncertainties of the learned input parameters using a new normality result that accounts for model inexactness.
arXiv Detail & Related papers (2024-11-08T04:13:52Z) - Neural Likelihood Approximation for Integer Valued Time Series Data [0.0]
We construct a neural likelihood approximation that can be trained using unconditional simulation of the underlying model.
We demonstrate our method by performing inference on a number of ecological and epidemiological models.
arXiv Detail & Related papers (2023-10-19T07:51:39Z) - User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We can sample conditionally on nonlinear, user-defined events at inference time, and the samples match data statistics even when drawn from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z) - Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z) - Learning Summary Statistics for Bayesian Inference with Autoencoders [58.720142291102135]
We use the inner dimension of deep neural network based Autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that has been used to generate the training data.
arXiv Detail & Related papers (2022-01-28T12:00:31Z) - Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z) - Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Continuous Optimization Benchmarks by Simulation [0.0]
Benchmark experiments are required to test, compare, tune, and understand optimization algorithms.
Data from previous evaluations can be used to train surrogate models which are then used for benchmarking.
We show that the spectral simulation method enables simulation for continuous optimization problems.
arXiv Detail & Related papers (2020-08-14T08:50:57Z)
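As a pointer for the last entry above, the sketch below shows one common spectral-simulation construction under assumed settings: random Fourier features of a squared-exponential kernel used to draw cheap, GP-like surrogate test functions for benchmarking continuous optimizers. The dimension, length-scale, and feature count are arbitrary assumptions, and the paper's exact spectral method may differ.

```python
# Illustrative sketch (not the paper's code): spectrally simulate GP-like test
# functions via random Fourier features of a squared-exponential kernel.
import numpy as np

def simulate_test_function(dim=2, lengthscale=0.3, n_features=500, seed=0):
    """Return a callable f: R^dim -> R drawn (approximately) from a stationary GP
    with a squared-exponential kernel, using its spectral (random Fourier) representation."""
    rng = np.random.default_rng(seed)
    # Spectral density of the SE kernel is Gaussian: omega ~ N(0, 1/lengthscale^2 I).
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(n_features, dim))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    weights = rng.normal(0.0, 1.0, size=n_features)

    def f(x):
        x = np.atleast_2d(x)                                # (n_points, dim)
        features = np.sqrt(2.0 / n_features) * np.cos(x @ omega.T + phase)
        return features @ weights                           # (n_points,)

    return f

# Usage: draw one simulated benchmark function and evaluate a few candidate points.
f = simulate_test_function(dim=2)
x_grid = np.random.default_rng(1).uniform(-1.0, 1.0, size=(5, 2))
print(f(x_grid))
```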
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.