Novel and flexible parameter estimation methods for data-consistent inversion in mechanistic modeling
- URL: http://arxiv.org/abs/2009.08267v3
- Date: Wed, 27 Sep 2023 16:45:02 GMT
- Title: Novel and flexible parameter estimation methods for data-consistent inversion in mechanistic modeling
- Authors: Timothy Rumbell, Jaimit Parikh, James Kozloski, and Viatcheslav Gurev
- Abstract summary: We introduce new methods to solve stochastic inverse problems (SIP) based on rejection sampling, Markov chain Monte Carlo, and generative adversarial networks (GANs).
To overcome limitations of SIP, we reformulate SIP as a constrained optimization problem and present a novel GAN to solve it.
- Score: 0.13635858675752988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictions for physical systems often rely upon knowledge acquired from
ensembles of entities, e.g., ensembles of cells in biological sciences. For
qualitative and quantitative analysis, these ensembles are simulated with
parametric families of mechanistic models (MM). Two classes of methodologies,
based on Bayesian inference and Population of Models, currently prevail in
parameter estimation for physical systems. However, in Bayesian analysis,
uninformative priors for MM parameters introduce undesirable bias. Here, we
propose how to infer parameters within the framework of stochastic inverse
problems (SIP), also termed data-consistent inversion, wherein the prior
targets only uncertainties that arise due to MM non-invertibility. To
demonstrate, we introduce new methods to solve SIP based on rejection sampling,
Markov chain Monte Carlo, and generative adversarial networks (GANs). In
addition, to overcome limitations of SIP, we reformulate SIP based on
constrained optimization and present a novel GAN to solve the constrained
optimization problem.
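As a concrete illustration of the SIP update, the sketch below applies rejection sampling to a toy one-dimensional model Q(lambda) = lambda^2 (an assumption for illustration, not the paper's model): the updated density is pi_init(lambda) * pi_obs(Q(lambda)) / pi_pred(Q(lambda)), where pi_pred is the pushforward of the initial density under Q, estimated here with a kernel density estimate. All names are illustrative, not taken from the paper's code.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

# Toy mechanistic model mapping parameter lambda to an observable (illustrative).
Q = lambda lam: lam ** 2

init = norm(loc=0.0, scale=1.0)   # pi_init(lambda): initial density on parameters
obs = norm(loc=0.5, scale=0.1)    # pi_obs(q): observed density on model outputs

# Estimate the pushforward density pi_pred = Q_# pi_init with a KDE.
lam0 = init.rvs(size=20000, random_state=rng)
pred = gaussian_kde(Q(lam0))

# Accept lambda ~ pi_init with probability proportional to
# r(lambda) = pi_obs(Q(lambda)) / pi_pred(Q(lambda)).
r = obs.pdf(Q(lam0)) / pred(Q(lam0))
keep = rng.uniform(size=lam0.size) < r / r.max()
sip_samples = lam0[keep]          # samples from the data-consistent update
```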
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
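The summary gives no construction details, so the sketch below shows only one generic ingredient that low-rank-expert methods rely on: compressing the difference between a fine-tuned and a base weight matrix into a low-rank factor pair via truncated SVD. The function name, rank choice, and routing line are illustrative assumptions, not SMILE's actual procedure.

```python
import numpy as np

def low_rank_expert(w_base, w_finetuned, rank=8):
    """Compress a fine-tuning delta into a rank-`rank` factor pair (illustrative)."""
    delta = w_finetuned - w_base                      # task-specific update
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]       # delta ~= a @ b

# Toy usage: one layer; two fine-tuned "source models" become two low-rank experts.
rng = np.random.default_rng(0)
w0 = rng.normal(size=(64, 64))
experts = [low_rank_expert(w0, w0 + 0.01 * rng.normal(size=w0.shape)) for _ in range(2)]
a, b = experts[0]
x = rng.normal(size=64)
y = x @ w0.T + x @ (a @ b).T                          # base output plus routed expert 0
```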
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Proximal Interacting Particle Langevin Algorithms [0.0]
We introduce Proximal Interacting Particle Langevin Algorithms (PIPLA) for inference and learning in latent variable models.
We propose several variants within the novel proximal IPLA family, tailored to the problem of estimating parameters in a non-differentiable statistical model.
Our theory and experiments together show that the PIPLA family can be the de facto choice for parameter estimation problems in non-differentiable latent variable models.
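The exact PIPLA updates are not given in this summary; as a hedged stand-in, the following single-chain proximal gradient Langevin step shows how a proximal map (here soft-thresholding for an L1 penalty) handles the non-differentiable part while Langevin noise provides sampling. The interacting-particle structure of PIPLA is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=(200, 2))

def grad_smooth(theta):
    # Gradient of a (tempered) Gaussian negative log-likelihood in theta.
    return theta - data.mean(axis=0)

def prox_l1(theta, step, lam=0.1):
    # Proximal map of lam * ||theta||_1: soft-thresholding handles the kink at 0.
    return np.sign(theta) * np.maximum(np.abs(theta) - step * lam, 0.0)

theta, step = np.zeros(2), 1e-2
for _ in range(2000):
    noise = np.sqrt(2 * step) * rng.normal(size=theta.shape)
    theta = prox_l1(theta - step * grad_smooth(theta) + noise, step)
```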
arXiv Detail & Related papers (2024-06-20T13:16:41Z)
- Variational Inference of Parameters in Opinion Dynamics Models [9.51311391391997]
This work uses variational inference to estimate the parameters of an opinion dynamics ABM.
We transform the inference process into an optimization problem suitable for automatic differentiation.
Our approach estimates both macroscopic parameters (bounded confidence intervals and backfire thresholds) and microscopic parameters (200 categorical, agent-level roles) more accurately than simulation-based and MCMC methods.
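One common way (assumed here, not necessarily the paper's) to make a bounded-confidence ABM amenable to automatic differentiation is to replace the hard interaction threshold with a sigmoid gate so gradients flow to the threshold parameter:

```python
import torch

torch.manual_seed(0)
n_agents, steps, mu = 50, 30, 0.3

def simulate(eps, temperature=20.0):
    x = torch.rand(n_agents)                            # initial opinions in [0, 1]
    for _ in range(steps):
        i, j = torch.randint(n_agents, (2,))            # random interacting pair
        gate = torch.sigmoid(temperature * (eps - (x[i] - x[j]).abs()))  # soft |gap| < eps
        delta = mu * gate * (x[j] - x[i])
        x = x.clone()
        x[i] = x[i] + delta
        x[j] = x[j] - delta
    return x

target = simulate(torch.tensor(0.25)).detach()          # synthetic "observed" opinions
eps = torch.tensor(0.5, requires_grad=True)             # threshold to be estimated
opt = torch.optim.Adam([eps], lr=0.05)
for _ in range(200):
    loss = (simulate(eps).var() - target.var()) ** 2    # match a summary statistic
    opt.zero_grad(); loss.backward(); opt.step()
```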
arXiv Detail & Related papers (2024-03-08T14:45:18Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
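A hedged sketch of the on-the-fly idea for a toy linear-Gaussian state-space model: after each incoming observation, take one gradient step on the negative log mean importance weight with respect to the proposal parameters, then resample. Model-parameter estimation, which online VSMC also supports, is omitted for brevity; all names are illustrative.

```python
import torch

torch.manual_seed(0)
a, q_std, r_std, N = 0.9, 1.0, 0.5, 64            # toy model: x_t = a*x_{t-1}+eps, y_t = x_t+nu

def log_normal(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - torch.log(std) - 0.9189385332046727

phi = torch.tensor([0.5, 0.5, 0.0], requires_grad=True)   # proposal params [w1, w2, log s]
opt = torch.optim.SGD([phi], lr=1e-2)

observations = torch.randn(200)                   # stand-in for a streaming data source
x = torch.zeros(N)                                # particle cloud
for y in observations:
    mean, std = phi[0] * x + phi[1] * y, phi[2].exp()
    x_new = mean + std * torch.randn(N)           # reparameterized proposal draw
    logw = (log_normal(y, x_new, torch.tensor(r_std))        # likelihood
            + log_normal(x_new, a * x, torch.tensor(q_std))  # transition
            - log_normal(x_new, mean, std))                  # proposal correction
    loss = -(torch.logsumexp(logw, 0) - torch.log(torch.tensor(float(N))))
    opt.zero_grad(); loss.backward(); opt.step()  # one proposal update per observation
    idx = torch.multinomial(logw.detach().softmax(0), N, replacement=True)
    x = x_new.detach()[idx]                       # resample; detach truncates gradients
```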
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
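A minimal sketch of the two-stage pattern described above, with a cheap closed-form stand-in for the Hamiltonian simulator: train a surrogate network once on simulated pairs, then freeze it and recover parameters from "experimental" data by gradient descent through the surrogate.

```python
import torch

torch.manual_seed(0)

def simulator(theta):                             # stand-in for the expensive simulator
    grid = torch.linspace(0, 1, 100)              # theta -> 100-point "spectrum"
    return torch.exp(-(grid - theta[..., :1]) ** 2 / (0.01 + theta[..., 1:] ** 2))

# Stage 1: train a surrogate once to mimic simulated data.
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 100))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    theta = torch.rand(128, 2)
    loss = ((net(theta) - simulator(theta)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the surrogate; recover parameters from observed data by autodiff.
data = simulator(torch.tensor([0.3, 0.2]))        # pretend this came from an experiment
theta_hat = torch.tensor([0.5, 0.5], requires_grad=True)
opt2 = torch.optim.Adam([theta_hat], lr=1e-2)
for _ in range(500):
    loss = ((net(theta_hat) - data) ** 2).mean()
    opt2.zero_grad(); loss.backward(); opt2.step()
```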
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Simulation-based Inference for Model Parameterization on Analog Neuromorphic Hardware [1.843681725117436]
This study investigates the suitability of the sequential neural posterior estimation (SNPE) algorithm for parameterizing a multi-compartmental neuron model emulated on the BrainScaleS-2 system.
The SNPE algorithm belongs to the class of simulation-based inference methods and estimates the posterior distribution of the model parameters.
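Assuming the open-source `sbi` package's SNPE interface, the generic workflow looks as follows; the simulator here is a placeholder for the BrainScaleS-2 emulation, which would return summary features of the recorded traces.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Prior over (illustrative) neuron-model parameters, e.g. leak and coupling conductances.
prior = BoxUniform(low=torch.zeros(2), high=torch.ones(2))

def simulate(theta: torch.Tensor) -> torch.Tensor:
    # Placeholder simulator; in the paper's setting this is the hardware emulation.
    return theta + 0.05 * torch.randn_like(theta)

theta = prior.sample((1000,))
x = simulate(theta)

inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

x_o = torch.tensor([0.4, 0.6])                    # observed summary features
samples = posterior.sample((500,), x=x_o)         # posterior over model parameters
```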
arXiv Detail & Related papers (2023-03-28T15:37:30Z)
- Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks [25.051918587650636]
Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences.
We propose an unsupervised MTHU algorithm based on variational recurrent neural networks.
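The variational RNN itself is not detailed in the summary; as background (a standard assumption in unmixing, not taken from the paper), each pixel obeys a linear mixing model y = M a + noise with nonnegative abundances, which MTHU extends by letting the endmembers M vary over time. A single-pixel, single-time sketch:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

bands, n_endmembers = 50, 3
M = np.abs(rng.normal(size=(bands, n_endmembers)))     # endmember spectra (columns)
a_true = np.array([0.6, 0.3, 0.1])                     # abundances, sum to one
y = M @ a_true + 0.01 * rng.normal(size=bands)         # observed pixel spectrum

a_hat, _ = nnls(M, y)                                  # nonnegativity-constrained fit
a_hat /= a_hat.sum()                                   # crude projection onto the simplex
```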
arXiv Detail & Related papers (2023-03-19T04:51:34Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
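The paper's LFI machinery is not specified in this summary; the simplest member of that family, ABC rejection, conveys the idea with a stand-in forward model over a few b-shells (the parameterization and simulator are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    # Stand-in for the dMRI forward model over a few sparse b-shells (illustrative).
    b = np.array([1.0, 3.0, 5.0])
    return np.exp(-b * theta[0]) * theta[1] + 0.01 * rng.normal(size=b.size)

x_obs = simulate(np.array([0.5, 1.0]))

# ABC rejection: keep prior draws whose simulated signal lies close to the data.
theta = rng.uniform(low=[0.0, 0.5], high=[1.0, 1.5], size=(50000, 2))
dist = np.array([np.linalg.norm(simulate(t) - x_obs) for t in theta])
posterior = theta[dist < np.quantile(dist, 0.01)]      # accept the closest 1%
```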
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
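A common concrete realization (related to, but not necessarily identical with, the paper's mutual-information objective) trains a classifier to separate joint pairs (theta, x) from marginal pairs; its logit then approximates the log likelihood-to-evidence ratio:

```python
import torch

torch.manual_seed(0)

def simulate(theta):                              # stand-in simulator
    return theta + 0.5 * torch.randn_like(theta)

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = torch.nn.BCEWithLogitsLoss()

for _ in range(2000):
    theta = torch.randn(256, 1)                   # prior draws
    x = simulate(theta)                           # joint pairs (theta, x)
    x_marg = x[torch.randperm(256)]               # shuffled -> marginal pairs
    logits = net(torch.cat([torch.cat([theta, x], 1), torch.cat([theta, x_marg], 1)]))
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# The trained logit approximates log p(x | theta) / p(x), usable for amortized inference.
log_r = net(torch.tensor([[0.3, 0.4]]))
```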
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
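A hedged, generic sketch of the min-max training pattern on a toy instrumental-variable problem; the saddle objective and network sizes are illustrative assumptions, not the paper's exact operator equation:

```python
import torch

torch.manual_seed(0)

# Toy instrumental-variable setup: Z instrument, X endogenous, Y outcome.
n = 2000
Z = torch.randn(n, 1)
U = torch.randn(n, 1)                             # unobserved confounder
X = Z + U + 0.1 * torch.randn(n, 1)
Y = 2.0 * X + U                                   # structural effect is 2.0

f = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
u = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_u = torch.optim.Adam(u.parameters(), lr=1e-3)

for _ in range(3000):
    # Saddle objective: min_f max_u E[(Y - f(X)) u(Z)] - 0.5 E[u(Z)^2]
    game = ((Y - f(X)) * u(Z)).mean() - 0.5 * (u(Z) ** 2).mean()
    opt_u.zero_grad(); (-game).backward(); opt_u.step()     # ascent on the critic u
    game = ((Y - f(X)) * u(Z)).mean() - 0.5 * (u(Z) ** 2).mean()
    opt_f.zero_grad(); game.backward(); opt_f.step()        # descent on f; f -> ~2x
```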
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Bayesian System ID: Optimal management of parameter, model, and measurement uncertainty [0.0]
We evaluate the robustness of a probabilistic formulation of system identification (ID) to sparse, noisy, and indirect data.
We show that the log posterior has improved geometric properties compared with the objective function surfaces of traditional methods.
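A minimal sketch of the probabilistic formulation for a toy first-order system with sparse, noisy, indirect observations: the log posterior combines a Gaussian prior over the parameter with a Gaussian measurement likelihood, and can be examined directly (grid search here stands in for MAP optimization or MCMC; all names are illustrative).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy system ID: y_t = a*y_{t-1} + u_t with a known input, observed every 5th step.
T, a_true, meas_std = 50, 0.8, 0.05
u = np.sin(0.3 * np.arange(T))

def rollout(a):
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = a * y[t - 1] + u[t]
    return y

t_obs = np.arange(0, T, 5)                              # sparse, indirect sampling
data = rollout(a_true)[t_obs] + meas_std * rng.normal(size=t_obs.size)

def log_posterior(a):
    log_prior = norm(0.0, 1.0).logpdf(a)                # parameter uncertainty
    log_lik = norm(rollout(a)[t_obs], meas_std).logpdf(data).sum()
    return log_prior + log_lik

grid = np.linspace(0.5, 1.0, 201)
a_map = grid[np.argmax([log_posterior(a) for a in grid])]   # lands near a_true = 0.8
```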
arXiv Detail & Related papers (2020-03-04T22:48:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.