High-order Differentiable Autoencoder for Nonlinear Model Reduction
- URL: http://arxiv.org/abs/2102.11026v1
- Date: Fri, 19 Feb 2021 02:30:14 GMT
- Title: High-order Differentiable Autoencoder for Nonlinear Model Reduction
- Authors: Siyuan Shen, Yang Yin, Tianjia Shao, He Wang, Chenfanfu Jiang, Lei
Lan, Kun Zhou
- Abstract summary: This paper provides a new avenue for exploiting deep neural networks to improve physics-based simulation.
We integrate classical Lagrangian mechanics with a deep autoencoder to accelerate the elastic simulation of deformable solids.
- Score: 29.296661974901976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper provides a new avenue for exploiting deep neural networks to
improve physics-based simulation. Specifically, we integrate classical Lagrangian mechanics with a deep autoencoder to accelerate the elastic simulation of deformable solids. Due to the inertia effect, the dynamic equilibrium cannot
be established without evaluating the second-order derivatives of the deep
autoencoder network. This is beyond the capability of off-the-shelf automatic
differentiation packages and algorithms, which mainly focus on the gradient
evaluation. Solving the nonlinear force equilibrium is even more challenging if
the standard Newton's method is to be used. This is because we need to compute
a third-order derivative of the network to obtain the variational Hessian. We
attack those difficulties by exploiting complex-step finite difference, coupled
with reverse automatic differentiation. This strategy allows us to enjoy the convenience and accuracy of complex-step finite difference and, in the meantime, to deploy complex-valued perturbations as collectively as possible, saving excessive network passes. With a GPU-based implementation, we are able to wield deep autoencoders (e.g., $10+$ layers) with a relatively high-dimensional latent space in real time. Within this pipeline, we also design a sampling network and
a weighting network to enable \emph{weight-varying} Cubature integration in
order to incorporate nonlinearity in the model reduction. We believe this work
will inspire and benefit future research efforts in nonlinearly reduced
physical simulation problems.
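A minimal JAX sketch of the core trick (the toy energy f, the step size h, and all names here are illustrative assumptions, not the authors' code): layering a complex-step finite difference on top of the reverse-mode gradient yields second-order directional derivatives, i.e. Hessian-vector products, without subtractive cancellation.

```python
import jax
import jax.numpy as jnp

def f(x):
    # toy stand-in for a deep autoencoder energy: smooth and holomorphic,
    # so the complex-step trick applies
    return jnp.sum(jnp.tanh(x) ** 3)

def hvp_complex_step(fun, x, v, h=1e-20):
    # second-order derivative from a first-order tool:
    #   H(x) @ v  ~=  Im( grad fun (x + i*h*v) ) / h
    # the imaginary part carries the directional derivative of the gradient,
    # so there is no subtractive cancellation even for tiny h
    g = jax.grad(fun, holomorphic=True)
    return jnp.imag(g(x + 1j * h * v)) / h

x = jnp.array([0.3, -0.7, 1.1])
v = jnp.array([1.0, 0.5, -0.2])

print(hvp_complex_step(f, x, v))
# cross-check against JAX's exact forward-over-reverse Hessian-vector product
print(jax.jvp(jax.grad(f), (x,), (v,))[1])
```

In the paper's setting, the differentiated function is the decoder inside the reduced equations of motion, and the perturbation directions are batched so that one network pass serves many derivatives at once.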
Related papers
- Scaling physics-informed hard constraints with mixture-of-experts [0.0]
We develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE).
MoE imposes the constraint over smaller domains, each of which is solved by an "expert" through differentiable optimization.
Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting.
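A hedged sketch of the mechanism (the linear sum constraint, the toy subdomain values, and the closed-form projection are assumptions standing in for the paper's differentiable optimization): each expert independently projects its raw output onto its subdomain's constraint set, and the projection itself is differentiable, so training gradients pass through it.

```python
import jax.numpy as jnp

def project_onto_constraint(u, A, b):
    # differentiable orthogonal projection of a raw expert output u onto the
    # affine set {v : A v = b}; gradients flow through jnp.linalg.solve
    lam = jnp.linalg.solve(A @ A.T, A @ u - b)
    return u - A.T @ lam

# two "experts", each enforcing the constraint on its own small subdomain
A = jnp.array([[1.0, 1.0, 1.0]])        # toy constraint: subdomain values sum to b
b = jnp.array([0.0])
u_left = jnp.array([0.3, -0.1, 0.4])    # raw network output, left subdomain
u_right = jnp.array([0.2, 0.2, -0.1])   # raw network output, right subdomain

for u in (u_left, u_right):
    v = project_onto_constraint(u, A, b)
    print(v, A @ v)  # A @ v is ~0: the hard constraint holds exactly
```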
arXiv Detail & Related papers (2024-02-20T22:45:00Z)
- Learning Nonlinear Projections for Reduced-Order Modeling of Dynamical Systems using Constrained Autoencoders [0.0]
We introduce a class of nonlinear projections described by constrained autoencoder neural networks in which both the manifold and the projection fibers are learned from data.
Our architecture uses invertible activation functions and biorthogonal weight matrices to ensure that the encoder is a left inverse of the decoder.
We also introduce new dynamics-aware cost functions that promote learning of oblique projection fibers that account for fast dynamics and nonnormality.
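A minimal sketch of the left-inverse property under stated assumptions (toy dimensions, tanh as the invertible activation, a pseudoinverse as the biorthogonal partner; the paper's construction is more general):

```python
import jax.numpy as jnp
import numpy as np

rng = np.random.default_rng(0)
D = jnp.asarray(rng.standard_normal((8, 3)))  # decoder weights, latent 3 -> full 8
E = jnp.linalg.pinv(D)                        # biorthogonal partner: E @ D = I
b = jnp.asarray(rng.standard_normal(8))

def decode(z):
    # one decoder layer with an invertible activation
    return jnp.tanh(D @ z + b)

def encode(x):
    # undo the activation, then apply the left inverse of D
    return E @ (jnp.arctanh(x) - b)

z = jnp.array([0.2, -0.5, 0.1])
print(jnp.allclose(encode(decode(z)), z, atol=1e-4))  # True: exact left inverse
```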
arXiv Detail & Related papers (2023-07-28T04:01:48Z)
- Data-driven Nonlinear Parametric Model Order Reduction Framework using Deep Hierarchical Variational Autoencoder [5.521324490427243]
A data-driven parametric model order reduction (MOR) method using a deep artificial neural network is proposed.
LSH-VAE is capable of performing nonlinear MOR for the parametric interpolation of a nonlinear dynamic system with a significant number of degrees of freedom.
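The interpolation step is easy to sketch (the latent codes below are hypothetical and the encoder/decoder are omitted; spherical linear interpolation is a common choice for traversing VAE latent spaces, not necessarily the paper's exact scheme):

```python
import jax.numpy as jnp

def slerp(z0, z1, t):
    # spherical linear interpolation between two latent codes
    cos_omega = jnp.dot(z0, z1) / (jnp.linalg.norm(z0) * jnp.linalg.norm(z1))
    omega = jnp.arccos(jnp.clip(cos_omega, -1.0, 1.0))
    return (jnp.sin((1.0 - t) * omega) * z0 + jnp.sin(t * omega) * z1) / jnp.sin(omega)

# latent codes of two encoded parameter snapshots (hypothetical values)
z_a = jnp.array([0.8, -0.2, 0.1, 0.5])
z_b = jnp.array([-0.3, 0.6, 0.4, -0.1])
z_mid = slerp(z_a, z_b, 0.5)  # decode(z_mid) would give the interpolated state
print(z_mid)
```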
arXiv Detail & Related papers (2023-07-10T02:44:53Z)
- From NeurODEs to AutoencODEs: a mean-field control framework for width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
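The spatial half of the decomposition can be sketched in a few lines (2D field and factor-2 staggering assumed; the paper staggers in time as well): a fine grid splits losslessly into interleaved coarse grids, each cheap enough for a lightweight solver.

```python
import jax.numpy as jnp

def spatial_stagger(u, s=2):
    # split a fine 2D field into s*s interleaved coarse fields
    return [u[i::s, j::s] for i in range(s) for j in range(s)]

def spatial_merge(subfields, s=2):
    # interleave the coarse fields back into the full-resolution field
    h, w = subfields[0].shape
    u = jnp.zeros((h * s, w * s), dtype=subfields[0].dtype)
    for k, sub in enumerate(subfields):
        i, j = divmod(k, s)
        u = u.at[i::s, j::s].set(sub)
    return u

u = jnp.arange(64.0).reshape(8, 8)
parts = spatial_stagger(u)                    # four 4x4 subtasks, not one 8x8 task
print(jnp.allclose(spatial_merge(parts), u))  # True: the decomposition is lossless
```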
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
This paper shows that useful information can be extracted from the quantum-mechanical implementation of Harrow-Hassidim-Lloyd (HHL) and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing these pressures by controlling injection/extraction is challenging because of the complex heterogeneity of the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
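A toy sketch of the idea (single-cell reservoir, linear pressure response, and a quadratic pumping cost are all assumptions; the paper couples a full-physics model): because the simulator is differentiable, the extraction rate can be tuned by gradient descent on a loss that penalizes exceeding the pressure cap.

```python
import jax
import jax.numpy as jnp

P_MAX, INJECTION, DT, STEPS = 1.0, 0.5, 0.1, 50

def peak_pressure(extraction):
    # toy single-cell reservoir: pressure integrates injection minus extraction
    def step(p, _):
        p_next = p + DT * (INJECTION - extraction)
        return p_next, p_next
    _, trace = jax.lax.scan(step, 0.0, None, length=STEPS)
    return jnp.max(trace)

def loss(extraction):
    # penalize exceeding the pressure cap, plus a small pumping cost
    overshoot = jnp.maximum(peak_pressure(extraction) - P_MAX, 0.0)
    return overshoot ** 2 + 0.1 * extraction ** 2

rate = 0.0
for _ in range(200):
    rate = rate - 0.02 * jax.grad(loss)(rate)  # gradient flows through the solver
print(rate, peak_pressure(rate))  # rate ~0.3 keeps the peak pressure near P_MAX
```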
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
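A minimal sketch of the fixed-point view (random toy layer; practical DEQs use accelerated root-finders and implicit differentiation instead of naive unrolling):

```python
import jax.numpy as jnp
import numpy as np

rng = np.random.default_rng(0)
W = jnp.asarray(0.1 * rng.standard_normal((4, 4)))  # small norm => contraction
U = jnp.asarray(rng.standard_normal((4, 4)))

def layer(z, x):
    # the single nonlinear layer whose fixed point defines the network output
    return jnp.tanh(W @ z + U @ x)

def deq_forward(x, iters=100):
    # naive fixed-point iteration z_{k+1} = f(z_k, x)
    z = jnp.zeros(4)
    for _ in range(iters):
        z = layer(z, x)
    return z

x = jnp.asarray(rng.standard_normal(4))
z_star = deq_forward(x)
print(jnp.linalg.norm(layer(z_star, x) - z_star))  # ~0: z* is a fixed point
```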
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
A deep neural network (DNN) generally takes thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- Multi-fidelity Generative Deep Learning Turbulent Flows [0.0]
In computational fluid dynamics, there is an inevitable trade-off between accuracy and computational cost.
In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields.
The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost orders of magnitude lower than that of a high-fidelity simulation.
arXiv Detail & Related papers (2020-06-08T16:37:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.