Dynamical System Identification, Model Selection and Model Uncertainty Quantification by Bayesian Inference
- URL: http://arxiv.org/abs/2401.16943v2
- Date: Mon, 22 Jul 2024 07:51:59 GMT
- Title: Dynamical System Identification, Model Selection and Model Uncertainty Quantification by Bayesian Inference
- Authors: Robert K. Niven, Laurent Cordier, Ali Mohammad-Djafari, Markus Abel, Markus Quade
- Abstract summary: This study presents a Bayesian maximum \textit{a~posteriori} (MAP) framework for dynamical system identification from time-series data.
- Score: 0.8388591755871735
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study presents a Bayesian maximum \textit{a~posteriori} (MAP) framework for dynamical system identification from time-series data. This is shown to be equivalent to a generalized Tikhonov regularization, providing a rational justification for the choice of the residual and regularization terms, respectively, from the negative logarithms of the likelihood and prior distributions. In addition to the estimation of model coefficients, the Bayesian interpretation gives access to the full apparatus for Bayesian inference, including the ranking of models, the quantification of model uncertainties and the estimation of unknown (nuisance) hyperparameters. Two Bayesian algorithms, joint maximum \textit{a~posteriori} (JMAP) and variational Bayesian approximation (VBA), are compared to the LASSO, ridge regression and SINDy algorithms for sparse regression, by application to several dynamical systems with added Gaussian or Laplace noise. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives Gaussian posterior and evidence distributions, in which the numerator terms can be expressed in terms of the Mahalanobis distance or ``Gaussian norm'' $||\vec{y}-\hat{\vec{y}}||^2_{M^{-1}} = (\vec{y}-\hat{\vec{y}})^\top M^{-1} (\vec{y}-\hat{\vec{y}})$, where $\vec{y}$ is a vector variable, $\hat{\vec{y}}$ is its estimator and $M$ is the covariance matrix. The posterior Gaussian norm is shown to provide a robust metric for quantitative model selection for the different systems and noise models examined.
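Concretely, with a Gaussian likelihood and a Gaussian prior, the negative log-posterior is a generalized Tikhonov (ridge-type) objective, so the MAP coefficients have a closed form and the residual can be scored with the Gaussian norm. A minimal sketch, assuming a toy linear-in-parameters library `Theta` and coefficients `xi` (illustrative names, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy identification problem: y = Theta @ xi + Gaussian noise, where Theta
# stands in for a library of candidate dynamical terms (illustrative only).
n, p = 200, 10
Theta = rng.normal(size=(n, p))
xi_true = np.array([1.5, -2.0] + [0.0] * (p - 2))
sigma2, lam = 0.1, 1.0                      # noise variance, prior precision
y = Theta @ xi_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Gaussian likelihood + Gaussian prior: the negative log-posterior is
#   ||y - Theta xi||^2 / (2 sigma2) + lam ||xi||^2 / 2,
# a generalized Tikhonov objective, so the MAP estimate is a ridge solution.
xi_map = np.linalg.solve(Theta.T @ Theta / sigma2 + lam * np.eye(p),
                         Theta.T @ y / sigma2)

# Squared Mahalanobis distance ("Gaussian norm") of the residual,
# here with the simplest covariance choice M = sigma2 * I.
r = y - Theta @ xi_map
gaussian_norm = r @ (np.eye(n) / sigma2) @ r
print(xi_map.round(2), round(gaussian_norm, 1))
```

With a full covariance $M$ the same norm weights correlated residuals, which is what makes it usable as a model-selection metric across different noise models.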
Related papers
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
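For intuition, the textbook instance of closed-form Gaussian filtering is the Kalman filter; the Gaussian PSD models of this paper generalize the idea well beyond the linear case. A minimal 1-D sketch (illustrative, not the paper's construction):

```python
import numpy as np

def kalman_filter_1d(ys, a=0.9, q=0.1, c=1.0, r=0.5, m=0.0, p=1.0):
    """Closed-form filtering for a linear-Gaussian state-space model:
    x_t = a x_{t-1} + N(0, q),  y_t = c x_t + N(0, r)."""
    means = []
    for y in ys:
        m, p = a * m, a * a * p + q                    # predict step
        k = c * p / (c * c * p + r)                    # Kalman gain
        m, p = m + k * (y - c * m), (1.0 - k * c) * p  # update step
        means.append(m)
    return np.array(means)

ys = np.random.default_rng(1).normal(size=50)
print(kalman_filter_1d(ys)[:5].round(3))
```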
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Bayesian Approach to Linear Bayesian Networks [3.8711489380602804]
The proposed approach iteratively estimates each element of the topological ordering from the back, along with each node's parents, using the inverse of a partial covariance matrix.
The proposed method is demonstrated to outperform state-of-the-art frequentist approaches, such as the BHLSM, LISTEN, and TD algorithms, on synthetic data.
arXiv Detail & Related papers (2023-11-27T08:10:53Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
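A minimal sketch of the inverse-free ingredients, assuming a generic SPD covariance-like matrix `A` (a toy stand-in, not the paper's model): conjugate gradients replace explicit inversion for solves, and Hutchinson-style Monte Carlo probes estimate the trace term that appears in Gaussian log-likelihood gradients:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 100
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)      # SPD covariance-like matrix (toy)
dA = np.eye(n)                   # d(A)/d(parameter), a toy choice

# Inverse-free solve: x = A^{-1} b via conjugate gradients.
b = rng.normal(size=n)
x, _ = cg(A, b)

# Hutchinson estimate of tr(A^{-1} dA) using only CG solves:
# tr(B) ~ E[z^T B z] for Rademacher probes z.
probes = rng.choice([-1.0, 1.0], size=(8, n))
estimate = np.mean([z @ cg(A, dA @ z)[0] for z in probes])
print(round(estimate, 3), round(np.trace(np.linalg.solve(A, dA)), 3))
```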
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Characteristic Function of the Tsallis $q$-Gaussian and Its Applications in Measurement and Metrology [0.0]
The Tsallis $q$-Gaussian distribution is a powerful generalization of the standard Gaussian distribution.
This paper presents the characteristic function of a linear combination of independent $q$-Gaussian random variables.
It provides an alternative computational procedure to the Monte Carlo method for uncertainty analysis.
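For context, the Monte Carlo baseline that the characteristic-function route replaces can be sketched with the generalized Box-Muller sampler for $q$-Gaussian deviates (Thistleton et al.); a minimal sketch, not the paper's procedure:

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm; reduces to the natural log as q -> 1."""
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, size, seed=0):
    """Generalized Box-Mueller sampler for standard q-Gaussian deviates
    (valid for q < 3), usable in Monte Carlo uncertainty propagation."""
    rng = np.random.default_rng(seed)
    qp = (1.0 + q) / (3.0 - q)   # dual index used by the sampler
    u1, u2 = rng.uniform(size=size), rng.uniform(size=size)
    return np.sqrt(-2.0 * q_log(u1, qp)) * np.cos(2.0 * np.pi * u2)

samples = q_gaussian(q=1.5, size=100_000)  # heavy-tailed for q > 1
print(round(samples.mean(), 3), round(samples.std(), 3))
```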
arXiv Detail & Related papers (2023-03-15T13:42:35Z)
- General Gaussian Noise Mechanisms and Their Optimality for Unbiased Mean Estimation [58.03500081540042]
A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it.
We show that for every input dataset, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at least as much error as the Gaussian mechanism.
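A minimal sketch of that classical mechanism, calibrated here for rho-zCDP (concentrated differential privacy) via sigma = sensitivity / sqrt(2 rho); the clipping bounds `lo` and `hi` are assumed known, and the example is illustrative rather than the paper's analysis:

```python
import numpy as np

def private_mean(x, lo, hi, rho, rng=None):
    """Private mean via the Gaussian mechanism. For data in [lo, hi] the
    mean has L2 sensitivity (hi - lo)/n, and adding N(0, sigma^2) noise
    with sigma = sensitivity/sqrt(2*rho) satisfies rho-zCDP. The noise is
    zero-mean, so the estimator stays unbiased for in-range data."""
    rng = rng or np.random.default_rng()
    x = np.clip(x, lo, hi)
    sigma = (hi - lo) / (len(x) * np.sqrt(2.0 * rho))
    return x.mean() + rng.normal(scale=sigma)

data = np.random.default_rng(2).uniform(0.0, 1.0, size=1000)
print(round(private_mean(data, 0.0, 1.0, rho=0.5), 4), round(data.mean(), 4))
```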
arXiv Detail & Related papers (2023-01-31T18:47:42Z)
- Ensemble Multi-Quantiles: Adaptively Flexible Distribution Prediction for Uncertainty Quantification [4.728311759896569]
We propose a novel, succinct, and effective approach for distribution prediction to quantify uncertainty in machine learning.
It incorporates adaptively flexible distribution prediction of $\mathbb{P}(\mathbf{y}\mid\mathbf{X}=\mathbf{x})$ in regression tasks.
On extensive regression tasks from UCI datasets, we show that EMQ achieves state-of-the-art performance.
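The paper's EMQ construction is more elaborate, but a plain multi-quantile baseline conveys the underlying idea: fit one pinball-loss regressor per quantile level and read the stacked predictions as a sketch of the conditional distribution. A minimal scikit-learn sketch with illustrative hyperparameters:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3 + 0.1 * np.abs(X[:, 0]))  # heteroscedastic

# One gradient-boosted model per quantile level (pinball loss); the
# predicted quantiles together outline the conditional distribution.
taus = [0.05, 0.25, 0.50, 0.75, 0.95]
models = {t: GradientBoostingRegressor(loss="quantile", alpha=t).fit(X, y)
          for t in taus}
x0 = np.array([[1.0]])
print({t: round(m.predict(x0)[0], 3) for t, m in models.items()})
```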
arXiv Detail & Related papers (2022-11-26T11:45:32Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
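A generic sketch of such an alternating scheme, assuming conditionally Gaussian coefficients with per-component variances theta_i under a Gamma(alpha, beta) hyperprior (updates in the same spirit, not the paper's exact algorithm): the x-step is a weighted ridge solve, and the theta-step follows in closed form from the quadratic optimality condition:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 50
A = rng.normal(size=(n, p))
x_true = np.zeros(p); x_true[:3] = [3.0, -2.0, 1.5]   # sparse ground truth
b = A @ x_true + 0.1 * rng.normal(size=n)

sigma2, alpha, beta = 0.01, 1.0, 1.0   # noise variance; gamma shape/rate
theta = np.ones(p)                     # per-coefficient prior variances
for _ in range(50):
    # x-step: MAP of x given theta is a weighted ridge solution.
    x = np.linalg.solve(A.T @ A / sigma2 + np.diag(1.0 / theta),
                        A.T @ b / sigma2)
    # theta-step: positive root of beta*t^2 + (3/2 - alpha)*t - x^2/2 = 0.
    c = alpha - 1.5
    theta = np.maximum((c + np.sqrt(c * c + 2.0 * beta * x**2)) / (2.0 * beta),
                       1e-8)
print(x.round(2)[:6])
```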
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in diffusion MRI (dMRI).
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
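The core accept/reject logic of likelihood-free inference fits in a few lines of rejection ABC; the simulator below is a hypothetical stand-in rather than the dMRI forward model, and modern LFI swaps rejection for neural density estimation:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulator(theta, n=200):
    """Toy forward model standing in for one with an intractable likelihood."""
    return rng.normal(loc=theta, scale=1.0, size=n)

x_obs = simulator(theta=1.7)

# Rejection ABC: draw parameters from the prior, simulate, and keep the
# draws whose summary statistic lands near the observed one.
prior_draws = rng.uniform(-5.0, 5.0, size=20_000)
summaries = np.array([simulator(t).mean() for t in prior_draws])
accepted = prior_draws[np.abs(summaries - x_obs.mean()) < 0.05]
print(round(accepted.mean(), 3), round(accepted.std(), 3))  # approx. posterior
```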
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Probabilistic semi-nonnegative matrix factorization: a Skellam-based framework [0.7310043452300736]
We present a new probabilistic model to address semi-nonnegative matrix factorization (SNMF), called Skellam-SNMF.
It is a hierarchical generative model consisting of prior components, Skellam-distributed hidden variables and observed data.
Two inference algorithms are derived: an Expectation-Maximization (EM) algorithm for maximum \emph{a posteriori} estimation and a Variational Bayes EM (VBEM) algorithm for full Bayesian inference.
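A Skellam variable is the difference of two independent Poisson variables, which is what lets a factorization of nonnegative rates generate signed integer data. A toy generative sketch with hypothetical dimensions and gamma priors (not the paper's exact hierarchy):

```python
import numpy as np

rng = np.random.default_rng(6)

def skellam(lam_plus, lam_minus, rng):
    """Skellam draw: difference of two independent Poisson variables."""
    return rng.poisson(lam_plus) - rng.poisson(lam_minus)

# Semi-NMF-flavoured generative sketch: a shared nonnegative dictionary W
# with two nonnegative activation factors yields signed observations.
F, K, N = 20, 3, 50
W = rng.gamma(2.0, 1.0, size=(F, K))
H_plus = rng.gamma(2.0, 0.5, size=(K, N))
H_minus = rng.gamma(2.0, 0.5, size=(K, N))
V = skellam(W @ H_plus, W @ H_minus, rng)   # signed integer data matrix
print(V.shape, int(V.min()), int(V.max()))
```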
arXiv Detail & Related papers (2021-07-07T15:56:22Z)
- Estimating Stochastic Linear Combination of Non-linear Regressions Efficiently and Scalably [23.372021234032363]
We show that when the sub-sample sizes are large, the estimation errors will not be sacrificed by too much.
To the best of our knowledge, this is the first work that provides estimation guarantees for the stochastic linear combination of non-linear regressions model.
arXiv Detail & Related papers (2020-10-19T07:15:38Z)
- The Generalized Lasso with Nonlinear Observations and Generative Priors [63.541900026673055]
We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models.
We show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property.
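The 1-bit model y = sign(Ax) is the canonical nonlinear observation: running an ordinary lasso on it recovers the signal direction up to an unknown scale when the measurements are (sub-)Gaussian. A minimal sketch with a sparsity prior standing in for the paper's generative prior (illustrative regularization level):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, d = 400, 100
x_true = np.zeros(d); x_true[:5] = rng.normal(size=5)
x_true /= np.linalg.norm(x_true)

A = rng.normal(size=(n, d))   # sub-Gaussian (here Gaussian) measurements
y = np.sign(A @ x_true)       # unknown nonlinearity: 1-bit observations

# Generalized-lasso-style recovery: fit the nonlinear observations with a
# linear model; the minimizer aligns with x_true up to scale.
x_hat = Lasso(alpha=0.01).fit(A, y).coef_
cosine = x_hat @ x_true / (np.linalg.norm(x_hat) * np.linalg.norm(x_true) + 1e-12)
print(round(cosine, 3))
```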
arXiv Detail & Related papers (2020-06-22T16:43:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.