Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method
- URL: http://arxiv.org/abs/2203.00360v1
- Date: Tue, 1 Mar 2022 11:16:50 GMT
- Title: Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method
- Authors: Francesco Romor and Giovanni Stabile and Gianluigi Rozza
- Abstract summary: Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2d non-linear conservation law and a 2d shallow water model, and compare the results with those obtained by a purely data-driven method in which the dynamics are evolved in time with a long short-term memory network.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-affine parametric dependencies, nonlinearities and advection-dominated
regimes of the model of interest can result in a slow Kolmogorov n-width decay,
which precludes the realization of efficient reduced-order models based on
linear subspace approximations. Among the possible solutions, there are purely
data-driven methods that leverage autoencoders and their variants to learn a
latent representation of the dynamical system, and then evolve it in time with
another architecture. Despite their success in many applications where standard
linear techniques fail, more has to be done to increase the interpretability of
the results, especially outside the training range and in regimes not
characterized by an abundance of data. Moreover, none of the knowledge of the
physics of the model is exploited during the predictive phase.
In order to overcome these weaknesses, we implement the non-linear manifold
method introduced by Carlberg et al [37] with hyper-reduction achieved through
reduced over-collocation and teacher-student training of a reduced decoder. We
test the methodology on a 2d non-linear conservation law and a 2d shallow water
model, and compare the results with those obtained by a purely data-driven
method in which the dynamics are evolved in time with a long short-term memory network.
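The slow Kolmogorov n-width decay mentioned in the abstract can be seen in a minimal numpy experiment (an illustrative sketch, not taken from the paper): the snapshots of a pure advection problem are not well captured by any small linear subspace, so many POD modes are needed.

```python
import numpy as np

# Snapshot matrix of an advected Gaussian pulse: each column is
# u(x, t_k) = exp(-(x - 0.2 - t_k)^2 / 0.002). Advection translates the
# feature across the grid, so the singular values decay slowly and a
# linear reduced basis needs many modes.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 0.6, 50)
snapshots = np.stack(
    [np.exp(-((x - 0.2 - t) ** 2) / 0.002) for t in times], axis=1
)

s = np.linalg.svd(snapshots, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
# Number of POD modes needed to retain 99.9% of the snapshot energy.
n_modes = int(np.searchsorted(energy, 0.999) + 1)
```

A diffusion-dominated problem of the same size would typically need only a handful of modes; here the count is much larger, which is what motivates replacing the linear subspace with a nonlinear (autoencoder) manifold.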
Related papers
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
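The delay-coordinate encoding this entry refers to can be sketched in a few lines (a generic Hankel-style lifting, not the paper's specific model structure; the signal is hypothetical):

```python
import numpy as np

# Delay-coordinate encoding: stack m consecutive measurements into one
# augmented state, the standard lifting used in Koopman-style models.
def hankel_embed(y, m):
    """Rows are [y_k, y_{k+1}, ..., y_{k+m-1}] for each valid k."""
    y = np.asarray(y)
    n = len(y) - m + 1
    return np.stack([y[k:k + m] for k in range(n)])

y = np.sin(0.3 * np.arange(20))   # hypothetical scalar measurement stream
Z = hankel_embed(y, m=5)          # 16 delay vectors of dimension 5
```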
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
- Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical Systems using Constrained Autoencoders [0.0]
We introduce a class of nonlinear projections described by constrained autoencoder neural networks in which both the manifold and the projection fibers are learned from data.
Our architecture uses invertible activation functions and biorthogonal weight matrices to ensure that the encoder is a left inverse of the decoder.
We also introduce new dynamics-aware cost functions that promote learning of oblique projection fibers that account for fast dynamics and nonnormality.
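A linear caricature of the biorthogonality constraint described above (a sketch under simplifying assumptions, with randomly chosen matrices rather than learned networks): picking the encoder so that it is a left inverse of the decoder makes their composition an oblique projection.

```python
import numpy as np

# Decoder D maps latent -> full state; a (hypothetical) test basis W
# defines the projection fibers. Choosing E = (W^T D)^{-1} W^T gives
# E @ D = I, and P = D @ E is an oblique projection onto range(D).
rng = np.random.default_rng(0)
D = rng.standard_normal((10, 3))    # decoder weights
W = rng.standard_normal((10, 3))    # test basis defining the fibers
E = np.linalg.solve(W.T @ D, W.T)   # encoder, biorthogonal to D

P = D @ E
left_inverse_ok = np.allclose(E @ D, np.eye(3))
idempotent_ok = np.allclose(P @ P, P)   # P is a projection
```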
arXiv Detail & Related papers (2023-07-28T04:01:48Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
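The deterministic, tuning-free flavor of such Kalman-style online learning can be illustrated with sequential Bayesian linear regression (a simplified stand-in for the paper's low-rank EKF: the model is linear and the full covariance is stored explicitly, which only scales to small problems):

```python
import numpy as np

def kalman_step(mu, P, x, y, obs_var=0.1):
    """One sequential update of the posterior mean/covariance of the weights."""
    S = x @ P @ x + obs_var        # predictive variance of y
    K = P @ x / S                  # Kalman gain
    mu = mu + K * (y - x @ mu)     # correct the mean with the residual
    P = P - np.outer(K, x @ P)     # shrink the covariance
    return mu, P

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])          # hypothetical ground-truth weights
mu, P = np.zeros(2), 10.0 * np.eye(2)   # broad Gaussian prior
for _ in range(200):
    x = rng.standard_normal(2)
    y = x @ w_true + 0.1 * rng.standard_normal()
    mu, P = kalman_step(mu, P, x, y)
```

No step size is tuned anywhere: the gain is computed from the covariance, which is the property the entry highlights relative to variational methods.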
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Bayesian Spline Learning for Equation Discovery of Nonlinear Dynamics with Quantified Uncertainty [8.815974147041048]
We develop a novel framework to identify parsimonious governing equations of nonlinear (spatiotemporal) dynamics from sparse, noisy data with quantified uncertainty.
The proposed algorithm is evaluated on multiple nonlinear dynamical systems governed by canonical ordinary and partial differential equations.
arXiv Detail & Related papers (2022-10-14T20:37:36Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
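The local-versus-global distinction can be made concrete with two simple representatives (illustrative stand-ins chosen here, not the specific smoothers compared in the paper): a moving average uses only a neighborhood of each point, while a polynomial least-squares fit uses the entire record.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)

# Local: 9-point moving average (neighboring data subset around each point).
kernel = np.ones(9) / 9.0
local = np.convolve(noisy, kernel, mode="same")

# Global: degree-7 polynomial fitted to the entire measurement set at once.
coeffs = np.polyfit(t, noisy, deg=7)
global_fit = np.polyval(coeffs, t)

# RMSE away from the boundaries, where the moving average is ill-defined.
def rmse(u):
    return float(np.sqrt(np.mean((u[10:-10] - clean[10:-10]) ** 2)))
```

Both smoothers reduce the error of the raw signal; which one wins in general is exactly the question the comparative study addresses.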
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z)
- Convolutional Autoencoders for Reduced-Order Modeling [0.0]
We create and train convolutional autoencoders that perform nonlinear dimension reduction for the wave and Kuramoto-Sivashinsky equations.
We present training methods independent of full-order model samples and use the manifold least-squares Petrov-Galerkin projection method to define a reduced-order model.
arXiv Detail & Related papers (2021-08-27T18:37:23Z)
- DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction [2.572404739180802]
In this study, a physics-based recurrent neural network model is designed that is able to learn the dynamics of linear and nonlinear multiple degrees of freedom systems.
The model is able to estimate a complete set of responses, including displacement, velocity, acceleration, and internal forces.
arXiv Detail & Related papers (2020-07-03T17:05:35Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
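Why a flow on O(d) helps with vanishing/exploding gradients can be illustrated numerically (a generic sketch of orthogonal-group integration, not the paper's exact scheme): stepping a weight matrix by the exponential of a skew-symmetric generator keeps it orthogonal, so repeated multiplication neither shrinks nor blows up vectors.

```python
import numpy as np

def expm_skew(S, terms=20):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    E, T = np.eye(S.shape[0]), np.eye(S.shape[0])
    for k in range(1, terms):
        T = T @ S / k
        E = E + T
    return E

rng = np.random.default_rng(3)
d, h = 4, 0.1
W = np.eye(d)
for _ in range(50):
    A = rng.standard_normal((d, d))
    S = A - A.T                     # skew-symmetric generator, so exp(hS) in O(d)
    W = W @ expm_skew(h * S)

# After 50 steps W stays orthogonal, so ||W v|| = ||v|| for every v.
orthogonal = np.allclose(W.T @ W, np.eye(d), atol=1e-8)
```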
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.