Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements
- URL: http://arxiv.org/abs/2111.08693v1
- Date: Mon, 15 Nov 2021 09:08:27 GMT
- Title: Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements
- Authors: Maëliss Jallais (PARIETAL), Pedro Rodrigues (PARIETAL), Alexandre
Gramfort (PARIETAL), Demian Wassermann (PARIETAL)
- Abstract summary: Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective characterisation of the brain grey matter cytoarchitecture with
quantitative sensitivity to soma density and volume remains an unsolved
challenge in diffusion MRI (dMRI). Solving the problem of relating the dMRI
signal with cytoarchitectural characteristics calls for the definition of a
mathematical model that describes brain tissue via a handful of
physiologically-relevant parameters and an algorithm for inverting the model.
To address this issue, we propose a new forward model, specifically a new
system of equations, requiring a few relatively sparse b-shells. We then apply
modern tools from Bayesian analysis known as likelihood-free inference (LFI) to
invert our proposed model. As opposed to other approaches from the literature,
our algorithm yields not only an estimation of the parameter vector $\theta$
that best describes a given observed data point $x_0$, but also a full
posterior distribution $p(\theta|x_0)$ over the parameter space. This enables a
richer description of the model inversion, providing indicators such as
credible intervals for the estimated parameters and a complete characterization
of the parameter regions where the model may present indeterminacies. We
approximate the posterior distribution using deep neural density estimators,
known as normalizing flows, and fit them using a set of repeated simulations
from the forward model. We validate our approach on simulations using dmipy and
then apply the whole pipeline on two publicly available datasets.
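The likelihood-free inversion described above can be illustrated with the simplest LFI algorithm, rejection ABC: draw parameters from the prior, run the forward model, and keep only the draws whose simulated summaries land close to the observed ones; the accepted draws approximate the posterior and yield credible intervals. The sketch below is a minimal, hypothetical toy with a one-parameter Gaussian simulator standing in for the paper's dMRI forward model (which would require dmipy), not the authors' normalizing-flow estimator.

```python
import random
import statistics

def simulator(theta, n=50, seed=None):
    # Hypothetical forward model: n noisy readings of a scalar parameter.
    rng = random.Random(seed)
    return [theta + rng.gauss(0.0, 1.0) for _ in range(n)]

def summary(x):
    # Reduce a simulated signal to a single summary statistic (its mean).
    return statistics.fmean(x)

def rejection_abc(x0, n_sims=20000, eps=0.05, seed=0):
    # Approximate p(theta | x0) without ever evaluating a likelihood:
    # sample theta from a uniform prior, simulate data, and accept theta
    # whenever the simulated summary falls within eps of the observed one.
    rng = random.Random(seed)
    s0 = summary(x0)
    accepted = []
    for _ in range(n_sims):
        theta = rng.uniform(-3.0, 3.0)  # uniform prior over the parameter
        x = simulator(theta, seed=rng.randrange(2**32))
        if abs(summary(x) - s0) < eps:
            accepted.append(theta)
    return accepted

# Observed data generated with a "true" theta of 1.0.
x0 = simulator(1.0, seed=42)
post = sorted(rejection_abc(x0))
lo, hi = post[int(0.025 * len(post))], post[int(0.975 * len(post))]
print(f"posterior mean ~ {statistics.fmean(post):.2f}, "
      f"95% credible interval [{lo:.2f}, {hi:.2f}]")
```

As in the paper, the output is a full set of posterior samples rather than a single point estimate, so indeterminacies would show up directly as multimodal or wide accepted-sample distributions; neural density estimators such as normalizing flows replace this crude accept/reject step with a learned amortized posterior.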
Related papers
- A variational neural Bayes framework for inference on intractable posterior distributions [1.0801976288811024]
Posterior distributions of model parameters are efficiently obtained by feeding observed data into a trained neural network.
We show theoretically that our posteriors converge to the true posteriors in Kullback-Leibler divergence.
arXiv Detail & Related papers (2024-04-16T20:40:15Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- $μ$GUIDE: a framework for quantitative imaging via generalized uncertainty-driven inference using deep learning [0.0]
$μ$GUIDE estimates posterior distributions of tissue microstructure parameters from any given biophysical model or MRI signal representation.
The obtained posterior distributions allow highlighting degeneracies present in the model definition and quantifying the uncertainty and ambiguity of the estimated parameters.
arXiv Detail & Related papers (2023-12-28T13:59:43Z)
- A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty [1.8416014644193066]
We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric model uncertainty.
A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior in order to identify regions in the problem domain where the parametric closure is insufficient.
arXiv Detail & Related papers (2023-07-05T16:53:31Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Generative models and Bayesian inversion using Laplace approximation [0.3670422696827525]
Recently, inverse problems have been solved using generative models as highly informative priors.
We show that derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model.
arXiv Detail & Related papers (2022-03-15T10:05:43Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Probabilistic Inference of Simulation Parameters via Parallel Differentiable Simulation [34.30381620584878]
To accurately reproduce measurements from the real world, simulators need to have an adequate model of the physical system.
We address the latter problem of estimating parameters through a Bayesian inference approach.
We leverage GPU code generation and differentiable simulation to evaluate the likelihood and its gradient for many particles in parallel.
arXiv Detail & Related papers (2021-09-18T03:05:44Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Expert concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.