Convergence of weak-SINDy Surrogate Models
- URL: http://arxiv.org/abs/2209.15573v3
- Date: Fri, 12 Jan 2024 02:23:02 GMT
- Title: Convergence of weak-SINDy Surrogate Models
- Authors: Benjamin Russo and M. Paul Laiu
- Abstract summary: We give an in-depth error analysis for surrogate models generated by a variant of the Sparse Identification of Nonlinear Dynamics (SINDy) method.
As an application, we discuss the use of a combination of weak-SINDy surrogate modeling and proper orthogonal decomposition (POD) to build a surrogate model for partial differential equations (PDEs).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we give an in-depth error analysis for surrogate models
generated by a variant of the Sparse Identification of Nonlinear Dynamics
(SINDy) method. We start with an overview of a variety of non-linear system
identification techniques, namely, SINDy, weak-SINDy, and the occupation kernel
method. Under the assumption that the dynamics are a finite linear combination
of a set of basis functions, these methods establish a matrix equation to
recover coefficients. We illuminate the structural similarities between these
techniques and establish a projection property for the weak-SINDy technique.
Following the overview, we analyze the error of surrogate models generated by a
simplified version of weak-SINDy. In particular, under the assumption of
boundedness of a composition operator given by the solution, we show that (i)
the surrogate dynamics converge to the true dynamics and (ii) the
solution of the surrogate model is reasonably close to the true solution.
Finally, as an application, we discuss the use of a combination of weak-SINDy
surrogate modeling and proper orthogonal decomposition (POD) to build a
surrogate model for partial differential equations (PDEs).
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- An Orthogonal Polynomial Kernel-Based Machine Learning Model for Differential-Algebraic Equations [0.24578723416255746]
We present a novel approach to solving general DAEs in an operator format by establishing connections between the LS-SVR machine learning model, weighted residual methods, and Legendre polynomials.
To assess the effectiveness of our proposed method, we conduct simulations involving various DAE scenarios, such as nonlinear systems, fractional-order derivatives, integro-differential, and partial DAEs.
arXiv Detail & Related papers (2024-01-25T18:37:17Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Non-intrusive surrogate modelling using sparse random features with applications in crashworthiness analysis [4.521832548328702]
A novel approach of using Sparse Random Features for surrogate modelling in combination with self-supervised dimensionality reduction is described.
The results show the superiority of the described approach over state-of-the-art surrogate modelling techniques, namely Polynomial Chaos Expansions and Neural Networks.
arXiv Detail & Related papers (2022-12-30T01:29:21Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Data-driven Control of Agent-based Models: an Equation/Variable-free Machine Learning Approach [0.0]
We present an Equation/Variable free machine learning (EVFML) framework for the control of the collective dynamics of complex/multiscale systems.
The proposed implementation consists of three steps, the first of which (A) applies machine learning, in particular non-linear manifold learning via Diffusion Maps (DMs), to high-dimensional agent-based simulations.
We exploit the Equation-free approach to perform numerical bifurcation analysis of the emergent dynamics.
We design data-driven embedded wash-out controllers that drive the agent-based simulators to their intrinsic, imprecisely known, emergent open-loop unstable steady-states.
arXiv Detail & Related papers (2022-07-12T18:16:22Z)
- Surrogate Modeling for Physical Systems with Preserved Properties and Adjustable Tradeoffs [0.0]
We present a model-based and a data-driven strategy to generate surrogate models.
The latter generates interpretable surrogate models by fitting artificial relations to a presupposed topological structure.
Our framework is compatible with various spatial discretization schemes for distributed parameter models.
arXiv Detail & Related papers (2022-02-02T17:07:02Z) - Hessian Eigenspectra of More Realistic Nonlinear Models [73.31363313577941]
We make a precise characterization of the Hessian eigenspectra for a broad family of nonlinear models.
Our analysis takes a step forward to identify the origin of many striking features observed in more complex machine learning models.
arXiv Detail & Related papers (2021-03-02T06:59:52Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Operator inference for non-intrusive model reduction of systems with non-polynomial nonlinear terms [6.806310449963198]
This work presents a non-intrusive model reduction method to learn low-dimensional models of dynamical systems with non-polynomial nonlinear terms that are spatially local.
The proposed approach requires only the non-polynomial terms in analytic form and learns the rest of the dynamics from snapshots computed with a potentially black-box full-model solver.
The proposed method is demonstrated on three problems governed by partial differential equations, namely the diffusion-reaction Chafee-Infante model, a tubular reactor model for reactive flows, and a batch-chromatography model that describes a chemical separation process.
arXiv Detail & Related papers (2020-02-22T16:27:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.