Hierarchical Neural Simulation-Based Inference Over Event Ensembles
- URL: http://arxiv.org/abs/2306.12584v2
- Date: Wed, 21 Feb 2024 13:35:38 GMT
- Title: Hierarchical Neural Simulation-Based Inference Over Event Ensembles
- Authors: Lukas Heinrich, Siddharth Mishra-Sharma, Chris Pollard, and Philipp
Windischhofer
- Abstract summary: We introduce approaches for dataset-wide probabilistic inference in cases where the likelihood is intractable.
We construct neural estimators for the likelihood(-ratio) or posterior and show that explicitly accounting for the model's hierarchical structure can lead to significantly tighter parameter constraints.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When analyzing real-world data it is common to work with event ensembles,
which comprise sets of observations that collectively constrain the parameters
of an underlying model of interest. Such models often have a hierarchical
structure, where "local" parameters impact individual events and "global"
parameters influence the entire dataset. We introduce practical approaches for
frequentist and Bayesian dataset-wide probabilistic inference in cases where
the likelihood is intractable, but simulations can be realized via a
hierarchical forward model. We construct neural estimators for the
likelihood(-ratio) or posterior and show that explicitly accounting for the
model's hierarchical structure can lead to significantly tighter parameter
constraints. We ground our discussion using case studies from the physical
sciences, focusing on examples from particle physics and cosmology.
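A minimal toy sketch of the idea in the abstract (hypothetical code, not the paper's implementation): for a hierarchical model with a global parameter, the dataset-wide log-likelihood ratio factorizes as a sum of per-event log-ratios. Here a tractable Gaussian per-event likelihood stands in for the neural estimator the paper would train on simulations; all names are illustrative.

```python
import numpy as np

# Toy illustration: with i.i.d. events under a global parameter g, the
# ensemble log-likelihood ratio is the sum of per-event log-ratios:
#   log r(D; g0, g1) = sum_i log [ p(x_i | g0) / p(x_i | g1) ].
# A unit-variance Gaussian N(g, 1) plays the role of the (normally
# intractable) per-event likelihood.

rng = np.random.default_rng(0)

def log_likelihood(x, g):
    # per-event Gaussian log-likelihood with unit variance
    return -0.5 * (x - g) ** 2 - 0.5 * np.log(2 * np.pi)

def ensemble_log_ratio(xs, g0, g1):
    # dataset-wide log-likelihood ratio: sum over events
    return np.sum(log_likelihood(xs, g0) - log_likelihood(xs, g1))

# simulate an event ensemble under the "true" global parameter g = 0.5
xs = rng.normal(0.5, 1.0, size=1000)

# the ensemble ratio favours the true global parameter over a wrong one
assert ensemble_log_ratio(xs, 0.5, 2.0) > 0
```

The per-event sum is what lets the constraint tighten as the ensemble grows, which is the effect the hierarchical treatment in the paper exploits.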
Related papers
- All-in-one simulation-based inference [19.41881319338419]
We present a new amortized inference method -- the Simformer -- which overcomes current limitations.
The Simformer outperforms current state-of-the-art amortized inference approaches on benchmark tasks.
It can be applied to models with function-valued parameters, it can handle inference scenarios with missing or unstructured data, and it can sample arbitrary conditionals of the joint distribution of parameters and data.
arXiv Detail & Related papers (2024-04-15T10:12:33Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
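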
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
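A hedged illustration of the "tall data" setting above (a conjugate-Gaussian toy, not the paper's diffusion-based method): combining N i.i.d. observations into one posterior tightens it, with the posterior width shrinking as 1/sqrt(N) in this simple case.

```python
import numpy as np

# Toy tall-data example: Gaussian likelihood N(theta, 1) with a flat
# prior gives the posterior theta | x_1..x_N ~ N(mean(x), 1/N), so the
# posterior standard deviation shrinks as 1/sqrt(N).

rng = np.random.default_rng(1)

def posterior_params(xs):
    # posterior mean and standard deviation for theta given observations xs
    n = len(xs)
    return float(np.mean(xs)), 1.0 / np.sqrt(n)

xs_small = rng.normal(0.0, 1.0, size=10)
xs_tall = rng.normal(0.0, 1.0, size=1000)

_, std_small = posterior_params(xs_small)
_, std_tall = posterior_params(xs_tall)

# more observations -> tighter posterior
assert std_tall < std_small
```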
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved into one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Bayesian Semi-structured Subspace Inference [0.0]
Semi-structured regression models enable the joint modeling of interpretable structured and complex unstructured feature effects.
We present a Bayesian approximation for semi-structured regression models using subspace inference.
Our approach exhibits competitive predictive performance across simulated and real-world datasets.
arXiv Detail & Related papers (2024-01-23T18:15:58Z)
- Bayesian Modeling of Language-Evoked Event-Related Potentials [0.0]
We present a Bayesian approach to analyzing event-related potentials, using as an example data from an experiment relating word surprisal and neural response.
Our model is able to estimate the effect of word surprisal on most components of the event-related potential and provides a richer description of the data.
arXiv Detail & Related papers (2022-07-07T15:58:17Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- A Topological-Framework to Improve Analysis of Machine Learning Model Performance [5.3893373617126565]
We propose a framework for evaluating machine learning models in which a dataset is treated as a "space" on which a model operates.
We describe a topological data structure, presheaves, which offer a convenient way to store and analyze model performance between different subpopulations.
arXiv Detail & Related papers (2021-07-09T23:11:13Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
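A hedged sketch of the general technique described above, amortized likelihood-to-evidence ratio estimation (the broad idea behind methods like MINIMALIST, not its actual code): a classifier trained to separate joint pairs (theta, x) from marginal pairs has an optimal logit equal to log p(x|theta)/p(x). A tiny logistic regression with hand-picked features stands in for a neural network; all names are illustrative.

```python
import numpy as np

# Classifier-based ratio estimation: label joint pairs (theta, x) as 1
# and mismatched pairs (theta, x') as 0; the trained logit approximates
# the log likelihood-to-evidence ratio log p(x|theta) - log p(x).

rng = np.random.default_rng(2)

def simulate(n):
    # simulator: theta ~ N(0, 1), x ~ N(theta, 0.5)
    theta = rng.normal(0.0, 1.0, size=n)
    x = rng.normal(theta, 0.5)
    return theta, x

def features(theta, x):
    # quadratic features suffice for this Gaussian toy problem
    return np.stack([theta, x, theta * x, theta**2, x**2,
                     np.ones_like(x)], axis=1)

# joint pairs (label 1) vs marginal pairs with shuffled x (label 0)
theta, x = simulate(5000)
x_shuf = rng.permutation(x)
X = np.vstack([features(theta, x), features(theta, x_shuf)])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# plain gradient descent on the binary cross-entropy loss
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def log_ratio(theta, x):
    # classifier logit approximates log p(x|theta) - log p(x)
    return features(theta, x) @ w

# matched pairs should score higher than mismatched ones on average
t_test, x_test = simulate(1000)
assert log_ratio(t_test, x_test).mean() > \
       log_ratio(t_test, rng.permutation(x_test)).mean()
```

The binary cross-entropy objective used here is one concrete route; the mutual-information formulation in the paper generalizes over such objectives.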
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on subpartitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for resolving such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Convex Parameter Recovery for Interacting Marked Processes [9.578874709168561]
The probability of an event occurring at a location may be influenced by past events at this and other locations.
We do not restrict interactions to be positive or decaying over time, as is commonly assumed.
In our modeling, prior knowledge is incorporated by allowing general convex constraints on model parameters.
arXiv Detail & Related papers (2020-03-29T03:23:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.