Model Reduction for Nonlinear Systems by Balanced Truncation of State
and Gradient Covariance
- URL: http://arxiv.org/abs/2207.14387v4
- Date: Wed, 12 Apr 2023 22:50:04 GMT
- Title: Model Reduction for Nonlinear Systems by Balanced Truncation of State
and Gradient Covariance
- Authors: Samuel E. Otto, Alberto Padovan, Clarence W. Rowley
- Abstract summary: We find low-dimensional systems of coordinates for model reduction that balance adjoint-based information about the system's sensitivity with the variance of states along trajectories.
We demonstrate these techniques on a simple, yet challenging three-dimensional system and a nonlinear axisymmetric jet flow simulation with $10^5$ state variables.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Data-driven reduced-order models often fail to make accurate forecasts of
high-dimensional nonlinear dynamical systems that are sensitive along
coordinates with low variance because such coordinates are often truncated,
e.g., by proper orthogonal decomposition, kernel principal component analysis,
and autoencoders. Such systems are encountered frequently in shear-dominated
fluid flows where non-normality plays a significant role in the growth of
disturbances. In order to address these issues, we employ ideas from active
subspaces to find low-dimensional systems of coordinates for model reduction
that balance adjoint-based information about the system's sensitivity with the
variance of states along trajectories. The resulting method, which we refer to
as covariance balancing reduction using adjoint snapshots (CoBRAS), is
analogous to balanced truncation with state and adjoint-based gradient
covariance matrices replacing the system Gramians and obeying the same key
transformation laws. Here, the extracted coordinates are associated with an
oblique projection that can be used to construct Petrov-Galerkin reduced-order
models. We provide an efficient snapshot-based computational method analogous
to balanced proper orthogonal decomposition. This also leads to the observation
that the reduced coordinates can be computed using inner products of state
and gradient samples alone, allowing us to find rich nonlinear coordinates by
replacing the inner product with a kernel function. In these coordinates,
reduced-order models can be learned using regression. We demonstrate these
techniques, and compare them with a variety of other methods, on a simple yet
challenging three-dimensional system and a nonlinear axisymmetric jet flow
simulation with $10^5$ state variables.
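To make the snapshot procedure concrete, below is a minimal sketch of the balancing step, assuming state snapshots and adjoint gradient snapshots are already collected as columns of matrices `X` and `Y`; the names are illustrative, not taken from the paper's code. As in balanced POD, an SVD of $Y^\top X$ yields biorthogonal mode sets that define the oblique projection.

```python
# Minimal sketch of the snapshot-based balancing step behind CoBRAS,
# in direct analogy with balanced POD. X holds state snapshots and Y holds
# adjoint gradient snapshots as columns; both are assumptions for this demo.
import numpy as np

def cobras_modes(X, Y, r):
    """Balance the state and gradient covariances via an SVD of Y.T @ X."""
    U, s, Vt = np.linalg.svd(Y.T @ X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    Phi = X @ Vr / np.sqrt(sr)   # primal (reconstruction) modes
    Psi = Y @ Ur / np.sqrt(sr)   # adjoint (projection) modes
    return Phi, Psi              # Psi.T @ Phi = I, so Phi @ Psi.T is an oblique projector

# Toy usage with random snapshots standing in for trajectory and adjoint data.
rng = np.random.default_rng(0)
n, m, r = 50, 20, 4
X, Y = rng.standard_normal((n, m)), rng.standard_normal((n, m))
Phi, Psi = cobras_modes(X, Y, r)
assert np.allclose(Psi.T @ Phi, np.eye(r))  # biorthogonality of the mode sets
```

A Petrov-Galerkin reduced-order model would then evolve the reduced state $z = \Psi^\top x$ under $\dot{z} = \Psi^\top f(\Phi z)$; the kernel variant described above would replace the entries of $Y^\top X$ with kernel evaluations between samples.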
Related papers
- Proper Latent Decomposition [4.266376725904727]
We compute a reduced set of intrinsic coordinates (latent space) to accurately describe a flow with fewer degrees of freedom than the numerical discretization.
Within this numerical framework, we propose an algorithm to perform proper latent decomposition (PLD) on the manifold.
This work opens opportunities for analyzing autoencoders and latent spaces, for nonlinear reduced-order modeling, and for gaining scientific insight into the structure of high-dimensional data.
arXiv Detail & Related papers (2024-12-01T12:19:08Z) - On Learning Gaussian Multi-index Models with Gradient Flow [57.170617397894404]
We study gradient flow on the multi-index regression problem for high-dimensional Gaussian data.
We consider a two-timescale algorithm, whereby the low-dimensional link function is learnt with a non-parametric model infinitely faster than the subspace parametrizing the low-rank projection.
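As a hedged illustration only (a single-index special case, not the authors' exact algorithm), the two-timescale idea can be mimicked by refitting the link function from scratch before every slow gradient step on the direction:

```python
# Toy two-timescale scheme for y = g(<u, x>) with Gaussian data: the link is
# refit on the fast timescale (here, a polynomial fit) before each slow
# gradient step on the direction w. All names and choices are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 2000
u = np.zeros(d); u[0] = 1.0                     # ground-truth index direction
X = rng.standard_normal((n, d))                 # Gaussian covariates
y = np.tanh(X @ u) + 0.05 * rng.standard_normal(n)

w = rng.standard_normal(d); w /= np.linalg.norm(w)
for _ in range(300):
    z = X @ w
    g = np.poly1d(np.polyfit(z, y, deg=5))      # fast: refit the link completely
    resid, gp = g(z) - y, g.deriv()(z)
    w -= 0.5 * (X * (resid * gp)[:, None]).mean(axis=0)  # slow: one gradient step
    w /= np.linalg.norm(w)                      # stay on the unit sphere

print("alignment |<w, u>|:", abs(w @ u))        # ideally approaches 1
```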
arXiv Detail & Related papers (2023-10-30T17:55:28Z) - Accurate Data-Driven Surrogates of Dynamical Systems for Forward
Propagation of Uncertainty [0.0]
Stochastic collocation (SC) is a non-intrusive method of constructing surrogate models for forward propagation of uncertainty.
This work presents an alternative approach, where we apply the SC approximation over the dynamics of the model, rather than the solution.
We demonstrate that the SC-over-dynamics framework leads to smaller errors, both in terms of the approximated system trajectories as well as the model state distributions.
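A minimal sketch of the SC-over-dynamics idea for a single uncertain parameter, with an assumed toy right-hand side: the vector field, rather than the solution, is interpolated across collocation nodes, and the surrogate ODE is then integrated.

```python
# Collocate the dynamics f(x, theta), not the solution x(t; theta): build a
# Lagrange interpolant of f over Chebyshev nodes in theta and integrate the
# surrogate ODE at an out-of-sample parameter. The toy f is an assumption.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import BarycentricInterpolator

f = lambda x, theta: -x + np.sin(theta * x)       # assumed true dynamics
nodes = 1.0 + 0.25 * np.cos(np.pi * (np.arange(5) + 0.5) / 5)  # theta in [0.75, 1.25]

def f_surrogate(x, theta):
    # Rebuilt per call for clarity, not speed: interpolate f(x, .) in theta.
    return BarycentricInterpolator(nodes, [f(x, t) for t in nodes])(theta)

theta_star = 1.1                                   # out-of-sample parameter
true = solve_ivp(lambda t, x: f(x, theta_star), (0, 5), [1.0], rtol=1e-8)
surr = solve_ivp(lambda t, x: f_surrogate(x, theta_star), (0, 5), [1.0], rtol=1e-8)
print("final-state error:", abs(true.y[0, -1] - surr.y[0, -1]))
```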
arXiv Detail & Related papers (2023-10-16T21:07:54Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
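A minimal sketch of what without-replacement minibatching looks like inside SGLD, on a toy Bayesian linear regression; the model, step size, and burn-in are placeholders, not the paper's setup:

```python
# SGLD where each epoch shuffles the data once and sweeps disjoint minibatches
# (without replacement), instead of resampling a fresh batch every step.
import numpy as np

rng = np.random.default_rng(2)
N, d = 256, 5
X = rng.standard_normal((N, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(N)

w, eps, batch = np.zeros(d), 1e-4, 32
samples = []
for epoch in range(300):
    perm = rng.permutation(N)                   # one without-replacement pass
    for i in range(0, N, batch):
        idx = perm[i:i + batch]
        grad_U = (N / batch) * X[idx].T @ (X[idx] @ w - y[idx])  # unbiased estimate
        w = w - eps * grad_U + np.sqrt(2 * eps) * rng.standard_normal(d)
        samples.append(w)
print("posterior-mean error:", np.linalg.norm(np.mean(samples[1000:], axis=0) - w_true))
```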
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - A graph convolutional autoencoder approach to model order reduction for
parametrized PDEs [0.8192907805418583]
The present work proposes a framework for nonlinear model order reduction based on a Graph Convolutional Autoencoder (GCA-ROM).
We develop a non-intrusive and data-driven nonlinear reduction approach, exploiting GNNs to encode the reduced manifold and enable fast evaluations of parametrized PDEs.
arXiv Detail & Related papers (2023-05-15T12:01:22Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Numerically Stable Sparse Gaussian Processes via Minimum Separation
using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
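The minimum-separation condition itself is easy to illustrate with a greedy stand-in (the paper's automated method is built on cover trees; this sketch is not):

```python
# Keep a candidate inducing point only if it lies at least `sep` away from all
# points kept so far, guaranteeing pairwise minimum separation.
import numpy as np

def select_inducing(points, sep):
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) >= sep for q in kept):
            kept.append(p)
    return np.array(kept)

rng = np.random.default_rng(3)
sites = rng.uniform(0, 1, size=(500, 2))   # e.g. geospatial sensor locations
Z = select_inducing(sites, sep=0.1)
print(len(Z), "inducing points, pairwise separation >= 0.1")
```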
arXiv Detail & Related papers (2022-10-14T15:20:17Z) - A Priori Denoising Strategies for Sparse Identification of Nonlinear
Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques for denoising the state measurements a priori.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
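As an illustration of the local-versus-global distinction, the snippet below compares a Savitzky-Golay filter (local: each output uses only a sliding window of neighbors) with a smoothing spline (global: fit to the entire record); these are representative choices, not necessarily the exact methods studied.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 4 * np.pi, 400)
clean = np.sin(t)
noisy = clean + 0.2 * np.random.default_rng(4).standard_normal(t.size)

local = savgol_filter(noisy, window_length=21, polyorder=3)  # neighboring subset
spline = UnivariateSpline(t, noisy, s=t.size * 0.2**2)       # entire data set
print("local  RMSE:", np.sqrt(np.mean((local - clean) ** 2)))
print("global RMSE:", np.sqrt(np.mean((spline(t) - clean) ** 2)))
```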
arXiv Detail & Related papers (2022-01-29T23:31:25Z) - Error-Correcting Neural Networks for Two-Dimensional Curvature
Computation in the Level-Set Method [0.0]
We present an error-neural-modeling-based strategy for approximating two-dimensional curvature in the level-set method.
Our main contribution is a redesigned hybrid solver that relies on numerical schemes to enable machine-learning operations on demand.
arXiv Detail & Related papers (2022-01-22T05:14:40Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
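A minimal PyTorch sketch of the Galerkin-free architecture, with placeholder layer sizes rather than those used in the paper: an autoencoder compresses each snapshot to a latent state, and an LSTM advances that latent state in time.

```python
import torch
import torch.nn as nn

class LatentROM(nn.Module):
    """Autoencoder for compression + LSTM for latent dynamics (sizes are toy)."""
    def __init__(self, n_full=128, n_latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_full, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_full))
        self.lstm = nn.LSTM(n_latent, 16, batch_first=True)
        self.head = nn.Linear(16, n_latent)

    def forward(self, snaps):                    # snaps: (batch, time, n_full)
        z = self.enc(snaps)
        h, _ = self.lstm(z[:, :-1])              # predict z_{t+1} from z_{<=t}
        return self.dec(z), self.head(h), z[:, 1:]

model = LatentROM()
x = torch.randn(8, 20, 128)                      # fake snapshot sequences
recon, z_pred, z_next = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.mse_loss(z_pred, z_next)
loss.backward()                                  # gradients for one training step
print(float(loss))
```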
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning based on Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable is subject to a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.