Non-intrusive Nonlinear Model Reduction via Machine Learning
Approximations to Low-dimensional Operators
- URL: http://arxiv.org/abs/2106.09658v1
- Date: Thu, 17 Jun 2021 17:04:42 GMT
- Title: Non-intrusive Nonlinear Model Reduction via Machine Learning
Approximations to Low-dimensional Operators
- Authors: Zhe Bai, Liqian Peng
- Abstract summary: We propose a method that enables traditionally intrusive reduced-order models to be accurately approximated in a non-intrusive manner.
The approach approximates the low-dimensional operators associated with projection-based reduced-order models (ROMs) using modern machine-learning regression techniques.
In addition to enabling nonintrusivity, we demonstrate that the approach also leads to very low computational complexity, achieving up to $1000\times$ reduction in run time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although projection-based reduced-order models (ROMs) for parameterized
nonlinear dynamical systems have demonstrated exciting results across a range
of applications, their broad adoption has been limited by their intrusivity:
implementing such a reduced-order model typically requires significant
modifications to the underlying simulation code. To address this, we propose a
method that enables traditionally intrusive reduced-order models to be
accurately approximated in a non-intrusive manner. Specifically, the approach
approximates the low-dimensional operators associated with projection-based
reduced-order models (ROMs) using modern machine-learning regression
techniques. The only requirement of the simulation code is the ability to
export the velocity given the state and parameters, as this functionality is
used to train the approximated low-dimensional operators. In addition to
enabling nonintrusivity, we demonstrate that the approach also leads to very
low computational complexity, achieving up to $1000\times$ reduction in run
time. We demonstrate the effectiveness of the proposed technique on two types
of PDEs.
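A minimal sketch of the core idea, assuming only black-box access to the full-order velocity. The toy dynamics, the random orthonormal basis standing in for a POD basis, and the random-forest regressor are illustrative placeholders, not the paper's exact setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical full-order model velocity f(x; mu); in practice this is the
# only capability the simulation code must expose.
def full_order_velocity(x, mu):
    return -mu * x + 0.1 * x**2

n, r = 200, 3
rng = np.random.default_rng(0)
# Random orthonormal columns stand in for a POD basis V.
V, _ = np.linalg.qr(rng.standard_normal((n, r)))

# Training data: sample reduced states and parameters, query the exported
# velocity, and record its projection (the reduced velocity).
X, Y = [], []
for _ in range(2000):
    xhat = rng.uniform(-1, 1, r)
    mu = rng.uniform(0.5, 2.0)
    v = full_order_velocity(V @ xhat, mu)
    X.append(np.concatenate([xhat, [mu]]))
    Y.append(V.T @ v)
reg = RandomForestRegressor(n_estimators=100).fit(np.array(X), np.array(Y))

# Non-intrusive ROM: integrate xhat' = reg([xhat, mu]) with forward Euler.
xhat, mu, dt = V.T @ np.full(n, 0.5), 1.0, 1e-2
for _ in range(100):
    xhat = xhat + dt * reg.predict(np.concatenate([xhat, [mu]])[None, :])[0]
```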
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Training Deep Neural Network (DNN) models is a non-convex optimization problem.
In this paper we examine convex Lasso-based reformulations of neural network training.
We show that the stationary points of the non-convex training objective can be characterized as global optima of subsampled convex programs.
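As a heavily simplified illustration of the subsampled-convex-program idea, one can sample ReLU activation patterns, build gated feature blocks, and fit a Lasso over them; the paper's exact programs carry additional constraints per activation pattern that are omitted here:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.maximum(X @ rng.standard_normal(5), 0)  # data from one ReLU neuron

# Subsample activation patterns D_i = diag(1[X g_i >= 0]) at random gates g_i.
gates = [(X @ rng.standard_normal(5)) >= 0 for _ in range(20)]
# One masked copy of X per sampled pattern.
F = np.hstack([X * g[:, None] for g in gates])

# Lasso over the subsampled feature blocks: a convex surrogate of training.
model = Lasso(alpha=1e-3, max_iter=50000).fit(F, y)
```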
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical
Systems using Constrained Autoencoders [0.0]
We introduce a class of nonlinear projections described by constrained autoencoder neural networks in which both the manifold and the projection fibers are learned from data.
Our architecture uses invertible activation functions and biorthogonal weight matrices to ensure that the encoder is a left inverse of the decoder.
We also introduce new dynamics-aware cost functions that promote learning of oblique projection fibers that account for fast dynamics and nonnormality.
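A linear analogue of the biorthogonality constraint (the paper enforces it in a nonlinear autoencoder via invertible activations and structured weight matrices) can be sketched as follows; the random matrices are illustrative, with Psi^T Phi assumed invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2
Phi = rng.standard_normal((n, r))   # decoder columns (tangent directions)
Psi = rng.standard_normal((n, r))   # test directions defining the fibers

# Biorthogonal encoder E = (Psi^T Phi)^{-1} Psi^T, so that E @ Phi = I_r.
E = np.linalg.solve(Psi.T @ Phi, Psi.T)
assert np.allclose(E @ Phi, np.eye(r))   # encoder is a left inverse of decoder
P = Phi @ E
assert np.allclose(P @ P, P)             # oblique projection: idempotent
```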
arXiv Detail & Related papers (2023-07-28T04:01:48Z) - Active-Learning-Driven Surrogate Modeling for Efficient Simulation of
Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
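A generic sketch of the active-learning loop, using Gaussian-process predictive variance as a stand-in for the paper's optimality criterion; `rom_error_estimate` is a hypothetical placeholder for the actual error indicator:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def rom_error_estimate(mu):
    # Hypothetical stand-in for the (expensive) error indicator at parameter mu.
    return np.sin(3 * mu) ** 2 + 0.1 * mu

candidates = np.linspace(0, 2, 200)[:, None]
train_mu = [0.0, 2.0]
for _ in range(8):
    gp = GaussianProcessRegressor(RBF(0.3)).fit(
        np.array(train_mu)[:, None], [rom_error_estimate(m) for m in train_mu])
    _, std = gp.predict(candidates, return_std=True)
    # Greedily add the parameter where the surrogate is most uncertain.
    train_mu.append(float(candidates[np.argmax(std), 0]))
```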
arXiv Detail & Related papers (2023-06-09T18:01:14Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have been applied to sequential recommendation.
GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
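For reference, the generic conditional DDPM training objective such a model builds on looks roughly like the sketch below; the sequence encoder, cross-attentive decoder, and step-wise diffuser are the paper's contributions and are not reproduced here:

```python
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)
abar = torch.cumprod(1 - betas, dim=0)   # \bar{alpha}_t

def diffusion_loss(denoiser, x0, cond):
    """Standard DDPM objective: predict the noise added at a random step t."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    a = abar[t].sqrt()[:, None]
    s = (1 - abar[t]).sqrt()[:, None]
    x_t = a * x0 + s * eps                       # forward (noising) process
    return ((denoiser(x_t, t, cond) - eps) ** 2).mean()

denoiser = lambda x_t, t, cond: x_t              # placeholder network
loss = diffusion_loss(denoiser, torch.randn(8, 16), cond=None)
```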
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
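A crude sketch of the abstraction step: fit a network to the dynamics and bound its error, so that x' lies in net(x) plus a disturbance box. The actual method formally certifies a sound bound; here it is merely estimated on samples:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def f(x):                        # nonlinear dynamics to abstract (toy pendulum)
    return np.column_stack([x[:, 1], -np.sin(x[:, 0])])

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (4000, 2))
net = MLPRegressor((32, 32), max_iter=3000).fit(X, f(X))

# Empirical disturbance bound: x' in net(x) + [-d, d] componentwise on samples.
d = np.abs(net.predict(X) - f(X)).max(axis=0)
```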
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Continuous Methods : Adaptively intrusive reduced order model closure [0.0]
We propose a novel ROM correction approach based on a time-continuous memory formulation.
Our proposed method provides a high level of accuracy while retaining low computational cost.
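A minimal sketch of what a time-continuous memory closure can look like, using an illustrative scalar ROM and linear memory dynamics; the coefficients below are made up for illustration, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = -2.0, 1.0, 0.5          # illustrative closure coefficients

def rom_with_memory(t, y):
    z, m = y
    dz = -z + C * m               # ROM velocity plus memory correction
    dm = A * m + B * z            # linear memory dynamics (a convolution in time)
    return [dz, dm]

sol = solve_ivp(rom_with_memory, (0, 5), [1.0, 0.0], max_step=1e-2)
```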
arXiv Detail & Related papers (2022-11-30T13:55:34Z) - An Accelerated Doubly Stochastic Gradient Method with Faster Explicit
Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity-regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
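A stripped-down doubly stochastic sketch on a Lasso problem: each step samples both a mini-batch of examples and a block of coordinates, then takes a proximal (soft-thresholding) step. Acceleration and the paper's convergence machinery are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))  # sparse ground truth
x, lam, lr = np.zeros(100), 0.1, 1e-3

for _ in range(5000):
    i = rng.choice(500, 32)          # sample examples (first source of randomness)
    j = rng.choice(100, 10)          # sample coordinates (second source)
    g = A[np.ix_(i, j)].T @ (A[i] @ x - b[i]) / 32   # partial stochastic gradient
    step = x[j] - lr * g
    x[j] = np.sign(step) * np.maximum(np.abs(step) - lr * lam, 0.0)  # prox of l1
```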
arXiv Detail & Related papers (2022-08-11T22:27:22Z) - Non-linear manifold ROM with Convolutional Autoencoders and Reduced
Over-Collocation method [0.0]
Non-affine parametric dependencies, nonlinearities and advection-dominated regimes of the model of interest can result in a slow Kolmogorov n-width decay.
We implement the non-linear manifold method introduced by Carlberg et al. [37] with hyper-reduction achieved through reduced over-collocation and teacher-student training of a reduced decoder.
We test the methodology on a 2D nonlinear conservation law and a 2D shallow water model, and compare the results with a purely data-driven method in which the dynamics are evolved in time with a long short-term memory network.
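A toy sketch of the collocation idea: evolve the latent coordinates by minimizing the residual at a few collocation rows only. An analytic decoder and a finite-difference Jacobian stand in for the convolutional autoencoder and its gradients:

```python
import numpy as np

def g(z):                                   # toy nonlinear decoder (manifold)
    return np.sin(np.outer(np.linspace(0, 1, 50), z)).sum(axis=1)

def f(x):                                   # toy full-order velocity
    return -x + 0.5 * x ** 2

idx = np.arange(0, 50, 10)                  # collocation rows (hyper-reduction)
z, dt, eps = np.array([1.0, 0.5]), 1e-2, 1e-6

for _ in range(100):
    x = g(z)
    # Decoder Jacobian at the collocation rows only, by finite differences.
    J = np.column_stack([(g(z + eps * e) - x)[idx] / eps for e in np.eye(2)])
    dz, *_ = np.linalg.lstsq(J, f(x)[idx], rcond=None)  # residual LSQ at idx
    z = z + dt * dz
```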
arXiv Detail & Related papers (2022-03-01T11:16:50Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
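A minimal Galerkin-free sketch in PyTorch: encode snapshots to a latent space, learn the latent dynamics with an LSTM, and decode back. The dimensions and random "snapshots" are placeholders, not the paper's flow data:

```python
import torch
import torch.nn as nn

n, r = 64, 4
enc = nn.Sequential(nn.Linear(n, 32), nn.Tanh(), nn.Linear(32, r))
dec = nn.Sequential(nn.Linear(r, 32), nn.Tanh(), nn.Linear(32, n))
lstm = nn.LSTM(r, r, batch_first=True)

snapshots = torch.randn(8, 100, n)                 # (trajectories, time, state)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *lstm.parameters()])
for _ in range(50):
    z = enc(snapshots)                             # latent trajectories
    z_pred, _ = lstm(z[:, :-1])                    # one-step latent prediction
    # Reconstruction loss plus latent-dynamics prediction loss.
    loss = ((dec(z) - snapshots) ** 2).mean() + ((z_pred - z[:, 1:]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```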
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
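A sketch of the core mechanism, assuming a simple magnitude criterion: the forward pass uses pruned weights, but the gradient is applied to the dense weights via a straight-through trick, which is the feedback that lets pruned weights recover:

```python
import torch
import torch.nn as nn

model = nn.Linear(20, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(128, 20), torch.randn(128, 1)

for _ in range(200):
    w = model.weight
    # Recompute the magnitude mask every step (dynamic pruning, keep top 20%).
    thr = w.detach().abs().flatten().kthvalue(int(0.8 * w.numel())).values
    mask = (w.abs() >= thr).float()
    # Straight-through: forward uses pruned weights, gradient reaches dense ones.
    w_eff = w + (w * mask - w).detach()
    loss = ((nn.functional.linear(x, w_eff, model.bias) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```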
arXiv Detail & Related papers (2020-06-12T15:07:08Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
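An extragradient-style sketch of the extrapolation move on a toy quadratic; the paper's unified framework covers several such variants, and this shows only the basic step of evaluating the gradient at a look-ahead point:

```python
import numpy as np

grad = lambda v: 2.0 * (v - 1.0)             # gradient of ||v - 1||^2
w = np.random.default_rng(0).standard_normal(10)
prev, lr, beta = w.copy(), 0.1, 0.5
for _ in range(100):
    look = w + beta * (w - prev)             # extrapolated (look-ahead) point
    g = grad(look)                           # gradient evaluated there ...
    prev = w.copy()
    w = w - lr * g                           # ... applied at the current iterate
```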
arXiv Detail & Related papers (2020-06-10T08:22:41Z)