Symplectic model reduction of Hamiltonian systems using data-driven
quadratic manifolds
- URL: http://arxiv.org/abs/2305.15490v2
- Date: Thu, 24 Aug 2023 15:08:50 GMT
- Title: Symplectic model reduction of Hamiltonian systems using data-driven
quadratic manifolds
- Authors: Harsh Sharma, Hongliang Mu, Patrick Buchfink, Rudy Geelen, Silke Glas,
Boris Kramer
- Abstract summary: We present two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems.
The addition of quadratic terms to the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality.
- Score: 0.559239450391449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents two novel approaches for the symplectic model reduction of
high-dimensional Hamiltonian systems using data-driven quadratic manifolds.
Classical symplectic model reduction approaches employ linear symplectic
subspaces for representing the high-dimensional system states in a
reduced-dimensional coordinate system. While these approximations respect the
symplectic nature of Hamiltonian systems, linear basis approximations can
suffer from slowly decaying Kolmogorov $N$-width, especially in wave-type
problems, which then requires a large basis size. We propose two different
model reduction methods based on recently developed quadratic manifolds, each
presenting its own advantages and limitations. The addition of quadratic terms
to the state approximation, which sits at the heart of the proposed
methodologies, enables us to better represent intrinsic low-dimensionality in
the problem at hand. Both approaches are effective for issuing predictions in
settings well outside the range of their training data while providing more
accurate solutions than the linear symplectic reduced-order models.
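The quadratic-manifold idea at the core of the abstract can be illustrated with a minimal sketch: a linear POD basis is fit first, and a quadratic correction term is then fit by least squares on the residual left by the linear approximation. This is a generic illustration on synthetic data, not the paper's symplectic construction; all variable names are made up for the example.

```python
import numpy as np

# Synthetic snapshot matrix: n states x m snapshots, rank 2r, purely illustrative.
rng = np.random.default_rng(0)
n, m, r = 50, 200, 4
S = rng.standard_normal((n, 2 * r)) @ rng.standard_normal((2 * r, m))

# Step 1: linear basis V from POD (truncated SVD of the snapshots).
U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                        # n x r linear basis
Q = V.T @ S                         # r x m reduced coordinates

# Step 2: quadratic features w(q) = unique entries of the outer product q q^T.
idx = np.triu_indices(r)
W_feats = np.stack([np.outer(q, q)[idx] for q in Q.T], axis=1)  # r(r+1)/2 x m

# Step 3: fit the quadratic correction coefficients by least squares on the
# residual that the linear approximation leaves behind.
R = S - V @ Q
Wbar = R @ np.linalg.pinv(W_feats)  # n x r(r+1)/2

# State approximation on the quadratic manifold: x ~ V q + Wbar w(q).
S_approx = V @ Q + Wbar @ W_feats
err_lin = np.linalg.norm(S - V @ Q) / np.linalg.norm(S)
err_quad = np.linalg.norm(S - S_approx) / np.linalg.norm(S)
```

Because the quadratic coefficients are fit by least squares on the linear residual, the quadratic approximation error can never exceed the linear one for the training snapshots, which is the sense in which the added quadratic terms "better represent intrinsic low-dimensionality."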
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used, but their training objectives are non-convex.
In this paper, we examine convex formulations of neural network training based on Lasso models.
We show that the stationary points of the non-convex training objective can be characterized as global optima of subsampled convex programs.
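The convex programs referenced above are Lasso-type problems. A minimal sketch of solving one such problem with ISTA (iterative soft-thresholding) is shown below; the data are synthetic and the code is a generic Lasso solver, not the paper's characterization procedure.

```python
import numpy as np

# Synthetic sparse-recovery instance: b = A x_true with a 2-sparse x_true.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true

# Lasso: minimize 0.5 * ||A x - b||^2 + lam * ||x||_1, solved by ISTA.
lam = 0.1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
x = np.zeros(10)
for _ in range(2000):
    z = x - (A.T @ (A @ x - b)) / L      # gradient step on the smooth part
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
```

With noiseless data and a small regularization weight, the recovered `x` is sparse with its support matching `x_true`, which is the qualitative behavior the Lasso characterization relies on.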
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al. [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2d non-linear conservation law and a 2d shallow water model, and compare the results with those of a purely data-driven method in which the dynamics are evolved in time by a long short-term memory network.
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z) - Model-Based Reinforcement Learning via Stochastic Hybrid Models [39.83837705993256]
This paper adopts a hybrid-system view of nonlinear modeling and control.
We consider a sequence modeling paradigm that captures the temporal structure of the data.
We show that these time-series models naturally admit a closed-loop extension that we use to extract local feedback controllers.
arXiv Detail & Related papers (2021-11-11T14:05:46Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Convolutional Autoencoders for Reduced-Order Modeling [0.0]
We create and train convolutional autoencoders that perform nonlinear dimension reduction for the wave and Kuramoto-Sivashinsky equations.
We present training methods independent of full-order model samples and use the manifold least-squares Petrov-Galerkin projection method to define a reduced-order model.
arXiv Detail & Related papers (2021-08-27T18:37:23Z) - Manifold learning-based polynomial chaos expansions for high-dimensional
surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in high-dimensional systems.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z) - Joint Dimensionality Reduction for Separable Embedding Estimation [43.22422640265388]
Low-dimensional embeddings for data from disparate sources play critical roles in machine learning, multimedia information retrieval, and bioinformatics.
We propose a supervised dimensionality reduction method that learns linear embeddings jointly for two feature vectors representing data of different modalities or data from distinct types of entities.
Our approach compares favorably against other dimensionality reduction methods, and against a state-of-the-art method of bilinear regression for predicting gene-disease associations.
arXiv Detail & Related papers (2021-01-14T08:48:37Z) - Efficient nonlinear manifold reduced order model [0.19116784879310023]
Nonlinear manifold ROMs (NM-ROMs) can better approximate high-fidelity model solutions with a smaller latent-space dimension than linear subspace ROMs (LS-ROMs).
Results show that neural networks can learn a more efficient latent space representation on advection-dominated data from 2D Burgers' equations with a high Reynolds number.
arXiv Detail & Related papers (2020-11-13T18:46:21Z) - Non-parametric Models for Non-negative Functions [48.7576911714538]
We provide the first model for non-negative functions that retains the favorable properties of linear models.
We prove that it admits a representer theorem and provide an efficient dual formulation for convex problems.
arXiv Detail & Related papers (2020-07-08T07:17:28Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.