Inference in conditioned dynamics through causality restoration
- URL: http://arxiv.org/abs/2210.10179v2
- Date: Thu, 30 Mar 2023 11:52:39 GMT
- Title: Inference in conditioned dynamics through causality restoration
- Authors: Alfredo Braunstein, Giovanni Catania, Luca Dall'Asta, Matteo Mariani,
Anna Paola Muntoni
- Abstract summary: We propose an alternative method to produce independent samples from a conditioned distribution.
The method learns the parameters of a generalized dynamical model.
We discuss an important application of the method, namely the problem of epidemic risk assessment from (imperfect) clinical tests.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computing observables from conditioned dynamics is typically computationally
hard, because, although obtaining independent samples efficiently from the
unconditioned dynamics is usually feasible, generally most of the samples must
be discarded (in a form of importance sampling) because they do not satisfy the
imposed conditions. Sampling directly from the conditioned distribution is
non-trivial, as conditioning breaks the causal properties of the dynamics
that ultimately make the sampling procedure efficient. One standard way of
achieving this is through a Metropolis Monte Carlo procedure, but this procedure
is normally slow and a very large number of Monte-Carlo steps is needed to
obtain a small number of statistically independent samples. In this work, we
propose an alternative method to produce independent samples from a conditioned
distribution. The method learns the parameters of a generalized dynamical model
that optimally describes the conditioned distribution in a variational sense.
The outcome is an effective, unconditioned, dynamical model, from which one can
trivially obtain independent samples, effectively restoring the causality of the
conditioned distribution. The consequences are twofold: on the one hand, it
allows us to efficiently compute observables from the conditioned dynamics by
simply averaging over independent samples. On the other hand, the method gives
an effective unconditioned distribution which is easier to interpret. The
method is flexible and can be applied virtually to any dynamics. We discuss an
important application of the method, namely the problem of epidemic risk
assessment from (imperfect) clinical tests, for a large family of
time-continuous epidemic models endowed with a Gillespie-like sampler. We show
that the method compares favorably against the state of the art, including the
soft-margin approach and mean-field methods.
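The inefficiency of naive conditioned sampling described in the abstract can be illustrated with a minimal sketch (not the paper's method): we draw trajectories of a toy discrete-time epidemic on a fully connected population and reject every trajectory that violates an imposed observation. All model details here (population size, infection probability, the chosen condition) are hypothetical choices for illustration; the acceptance rate shows how many unconditioned samples must be discarded.

```python
import random

def simulate_si(n=10, beta=0.3, steps=5, rng=random):
    """Unconditioned toy dynamics: discrete-time SI model on a fully
    connected graph. Each infected node infects each susceptible node
    with probability beta per step. Returns the final infected set."""
    infected = {rng.randrange(n)}  # one random initial case
    for _ in range(steps):
        new = set()
        for _i in infected:
            for j in range(n):
                if j not in infected and rng.random() < beta:
                    new.add(j)
        infected |= new
    return infected

def rejection_sample(condition, n_samples=100, max_tries=100_000):
    """Sample the conditioned distribution by brute force: run the
    unconditioned dynamics and keep only trajectories satisfying the
    condition. The returned acceptance rate quantifies the waste."""
    accepted, tries = [], 0
    while len(accepted) < n_samples and tries < max_tries:
        tries += 1
        final_state = simulate_si()
        if condition(final_state):
            accepted.append(final_state)
    return accepted, len(accepted) / tries

# Condition: nodes 3 and 7 are both observed infected at the final time.
samples, acc_rate = rejection_sample(lambda s: 3 in s and 7 in s, n_samples=20)
```

The paper's approach sidesteps this rejection step entirely: instead of discarding samples, it fits a generalized unconditioned dynamical model whose plain forward simulation already reproduces the conditioned statistics.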
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Conditional Pseudo-Reversible Normalizing Flow for Surrogate Modeling in Quantifying Uncertainty Propagation [11.874729463016227]
We introduce a conditional pseudo-reversible normalizing flow for constructing surrogate models of a physical model polluted by additive noise.
The training process utilizes a dataset consisting of input-output pairs, without requiring prior knowledge about the noise or the function.
Our model, once trained, can generate samples from any conditional probability density functions whose high probability regions are covered by the training set.
arXiv Detail & Related papers (2024-03-31T00:09:58Z) - Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We introduce Iterated Denoising Energy Matching (iDEM), which alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in a matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z) - User-defined Event Sampling and Uncertainty Quantification in Diffusion Models for Physical Dynamical Systems [49.75149094527068]
We show that diffusion models can be adapted to make predictions and provide uncertainty quantification for chaotic dynamical systems.
We develop a probabilistic approximation scheme for the conditional score function which converges to the true distribution as the noise level decreases.
We are able to sample conditionally on nonlinear user-defined events at inference time, and the samples match data statistics even when sampling from the tails of the distribution.
arXiv Detail & Related papers (2023-06-13T03:42:03Z) - A Flow-Based Generative Model for Rare-Event Simulation [0.483420384410068]
We present a method in which a Normalizing Flow generative model is trained to simulate samples directly from a conditional distribution.
We illustrate that simulating directly from a rare-event distribution yields significant insight into the way rare events happen.
arXiv Detail & Related papers (2023-05-13T08:25:57Z) - Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples [0.7087237546722617]
Uncertainty propagation is a technique to determine model output uncertainties based on the uncertainty in its input variables.
In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others.
We introduce a methodology to adaptively reorder MC samples and show how it reduces the computational expense of UP processes.
arXiv Detail & Related papers (2023-02-09T21:28:15Z) - Bi-Noising Diffusion: Towards Conditional Diffusion Models with Generative Restoration Priors [64.24948495708337]
We introduce a new method that brings predicted samples to the training data manifold using a pretrained unconditional diffusion model.
We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks.
arXiv Detail & Related papers (2022-12-14T17:26:35Z) - Learning Multivariate CDFs and Copulas using Tensor Factorization [39.24470798045442]
Learning the multivariate distribution of data is a core challenge in statistics and machine learning.
In this work, we aim to learn multivariate cumulative distribution functions (CDFs), as they can handle mixed random variables.
We show that any grid sampled version of a joint CDF of mixed random variables admits a universal representation as a naive Bayes model.
We demonstrate the superior performance of the proposed model in several synthetic and real datasets and applications including regression, sampling and data imputation.
arXiv Detail & Related papers (2022-10-13T16:18:46Z) - Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z) - Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.