Low-rank statistical finite elements for scalable model-data synthesis
- URL: http://arxiv.org/abs/2109.04757v1
- Date: Fri, 10 Sep 2021 09:51:43 GMT
- Title: Low-rank statistical finite elements for scalable model-data synthesis
- Authors: Connor Duffin, Edward Cripps, Thomas Stemler, Mark Girolami
- Abstract summary: statFEM acknowledges a priori model misspecification by embedding stochastic forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
This article overcomes the computational scalability hurdle by embedding a low-rank approximation of the underlying dense covariance matrix.
- Score: 0.8602553195689513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical learning additions to physically derived mathematical models are
gaining traction in the literature. A recent approach has been to augment the
underlying physics of the governing equations with data driven Bayesian
statistical methodology. Coined statFEM, the method acknowledges a priori model
misspecification by embedding stochastic forcing within the governing
equations. Upon receipt of additional data, the posterior distribution of the
discretised finite element solution is updated using classical Bayesian
filtering techniques. The resultant posterior jointly quantifies uncertainty
associated with the ubiquitous problem of model misspecification and the data
intended to represent the true process of interest. Despite this appeal,
computational scalability is a challenge to statFEM's application to
high-dimensional problems typically experienced in physical and industrial
contexts. This article overcomes this hurdle by embedding a low-rank
approximation of the underlying dense covariance matrix, obtained from the
leading order modes of the full-rank alternative. Demonstrated on a series of
reaction-diffusion problems of increasing dimension, using experimental and
simulated data, the method reconstructs the sparsely observed data-generating
processes with minimal loss of information, in both posterior mean and the
variance, paving the way for further integration of physical and probabilistic
approaches to complex systems.
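As a rough sketch of the low-rank idea described in the abstract, the following example approximates a dense covariance matrix by its leading-order eigenmodes. The squared-exponential kernel, grid size, and rank are illustrative assumptions, not the paper's actual statFEM covariance:

```python
import numpy as np

# Dense covariance from a squared-exponential kernel on a 1-D grid
# (a stand-in for the dense statFEM covariance; the kernel is illustrative).
n = 200
x = np.linspace(0.0, 1.0, n)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1**2) + 1e-8 * np.eye(n)

# Leading-order modes: keep the k largest eigenpairs of the full-rank matrix.
k = 20
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
V = eigvecs[:, -k:]                    # top-k eigenvectors
L = eigvals[-k:]                       # top-k eigenvalues
C_lowrank = (V * L) @ V.T              # rank-k approximation of C

# Relative Frobenius-norm error shrinks as k grows.
rel_err = np.linalg.norm(C - C_lowrank) / np.linalg.norm(C)
print(f"rank-{k} relative error: {rel_err:.2e}")
```

Storing only the k modes reduces the covariance footprint from O(n^2) to O(nk), which is the kind of saving that makes filtering updates tractable in higher dimensions.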
Related papers
- Fragment size density estimator for shrinkage-induced fracture based on a physics-informed neural network [0.0]
This paper presents a neural network (NN)-based solver for an integro-differential equation that models shrinkage-induced fragmentation. The proposed method directly maps input parameters to the corresponding probability density function without numerically solving the governing equation. It enables efficient evaluation of the density function in Monte Carlo simulations while maintaining accuracy comparable to or even exceeding that of conventional finite difference schemes.
arXiv Detail & Related papers (2025-07-15T23:33:05Z) - Testing Hypotheses of Covariate Effects on Topics of Discourse [0.0]
We introduce an approach to topic modelling that remains tractable in the face of large text corpora. This is achieved by de-emphasizing the role of parameter estimation in an underlying probabilistic model. We argue that the simple, non-parametric approach advocated here is faster, more interpretable, and enjoys better inferential justification than said generative models.
arXiv Detail & Related papers (2025-06-05T20:28:49Z) - Are Statistical Methods Obsolete in the Era of Deep Learning? [0.8329456268842228]
In the era of AI, neural networks have become increasingly popular for modeling, inference, and prediction. With the proliferation of such deep learning models, a question arises: are leaner statistical methods still relevant? We show that statistical methods are far from obsolete, especially when working with sparse and noisy observations.
arXiv Detail & Related papers (2025-05-27T20:11:21Z) - Unifying and extending Diffusion Models through PDEs for solving Inverse Problems [3.1225172236361165]
Diffusion models have emerged as powerful generative tools with applications in computer vision and scientific machine learning (SciML).
Traditionally, these models have been derived using principles of variational inference, denoising, statistical signal processing, and differential equations.
In this study we derive diffusion models using ideas from linear partial differential equations and demonstrate that this approach has several benefits.
arXiv Detail & Related papers (2025-04-10T04:07:36Z) - Principled model selection for stochastic dynamics [0.0]
PASTIS is a principled method combining likelihood-estimation statistics with extreme value theory to suppress superfluous parameters.
It reliably identifies minimal models, even with low sampling rates or measurement error.
It extends to partial differential equations and applies to ecological networks and reaction-diffusion dynamics.
arXiv Detail & Related papers (2025-01-17T18:23:16Z) - Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression. We use the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend and provide a unifying perspective on earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Statistical Mechanics of Dynamical System Identification [3.1484174280822845]
We develop a statistical mechanical approach to analyze sparse equation discovery algorithms.
In this framework, statistical mechanics offers tools to analyze the interplay between complexity and fitness.
arXiv Detail & Related papers (2024-03-04T04:32:28Z) - Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z) - Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
arXiv Detail & Related papers (2022-02-23T11:23:59Z) - Statistical Finite Elements via Langevin Dynamics [0.8602553195689513]
We make use of Langevin dynamics to solve the statFEM forward problem, studying the utility of the unadjusted Langevin algorithm (ULA).
ULA, a Metropolis-free Markov chain Monte Carlo sampler, builds a sample-based characterisation of this otherwise intractable measure.
We provide theoretical guarantees on sampler performance, demonstrating convergence, for both the prior and posterior, in the Kullback-Leibler divergence, and, in Wasserstein-2, with further results on the effect of preconditioning.
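As a rough illustration of the unadjusted Langevin algorithm named in this entry, the sketch below samples a toy one-dimensional Gaussian target. The target distribution, step size, and iteration counts are illustrative assumptions, not the statFEM posterior studied in the paper:

```python
import numpy as np

# Unadjusted Langevin algorithm (ULA) on a toy Gaussian target N(mu, sigma^2).
# ULA iterates the Euler-Maruyama discretisation of the Langevin SDE with no
# Metropolis accept/reject step, so the chain targets the measure only
# approximately (with a step-size-dependent bias).
rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5

def grad_log_target(x):
    # Gradient of the log-density of N(mu, sigma^2).
    return -(x - mu) / sigma**2

gamma = 0.01  # step size; smaller gamma means less bias but slower mixing
x = 0.0
samples = []
for _ in range(20000):
    x = x + gamma * grad_log_target(x) + np.sqrt(2.0 * gamma) * rng.standard_normal()
    samples.append(x)

samples = np.array(samples[5000:])  # discard burn-in
print(f"mean ~ {samples.mean():.3f}, std ~ {samples.std():.3f}")
```

The empirical mean and standard deviation should land near mu = 2.0 and sigma = 0.5, up to Monte Carlo noise and the discretisation bias that the paper's preconditioning results address.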
arXiv Detail & Related papers (2021-10-21T13:30:41Z) - Nonparametric Functional Analysis of Generalized Linear Models Under Nonlinear Constraints [0.0]
This article introduces a novel nonparametric methodology for Generalized Linear Models.
It combines the strengths of the binary regression and latent variable formulations for categorical data.
It extends recently published parametric versions of the methodology and generalizes it.
arXiv Detail & Related papers (2021-10-11T04:49:59Z) - A Nonconvex Framework for Structured Dynamic Covariance Recovery [24.471814126358556]
We propose a flexible yet interpretable model for high-dimensional data with time-varying second order statistics.
Motivated by the literature, we quantify factorization and smooth temporal data.
We show that our approach outperforms existing baselines.
arXiv Detail & Related papers (2020-11-11T07:09:44Z) - Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.