Nonlinear energy-preserving model reduction with lifting transformations that quadratize the energy
- URL: http://arxiv.org/abs/2503.02273v1
- Date: Tue, 04 Mar 2025 04:42:37 GMT
- Title: Nonlinear energy-preserving model reduction with lifting transformations that quadratize the energy
- Authors: Harsh Sharma, Juan Diego Draxl Giannoni, Boris Kramer
- Abstract summary: This work presents a nonlinear model reduction method that employs lifting variable transformations to derive structure-preserving quadratic reduced-order models for conservative PDEs with general nonlinearities. The proposed lifting approach is competitive with the state-of-the-art structure-preserving hyper-reduction method in terms of both accuracy and computational efficiency in the online stage.
- Score: 0.6554326244334868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing model reduction techniques for high-dimensional models of conservative partial differential equations (PDEs) encounter computational bottlenecks when dealing with systems featuring non-polynomial nonlinearities. This work presents a nonlinear model reduction method that employs lifting variable transformations to derive structure-preserving quadratic reduced-order models for conservative PDEs with general nonlinearities. We present an energy-quadratization strategy that defines the auxiliary variable in terms of the nonlinear term in the energy expression to derive an equivalent quadratic lifted system with quadratic system energy. The proposed strategy combined with proper orthogonal decomposition model reduction yields quadratic reduced-order models that conserve the quadratized lifted energy exactly in high dimensions. We demonstrate the proposed model reduction approach on four nonlinear conservative PDEs: the one-dimensional wave equation with exponential nonlinearity, the two-dimensional sine-Gordon equation, the two-dimensional Klein-Gordon equation with parametric dependence, and the two-dimensional Klein-Gordon-Zakharov equations. The numerical results show that the proposed lifting approach is competitive with the state-of-the-art structure-preserving hyper-reduction method in terms of both accuracy and computational efficiency in the online stage while providing significant computational gains in the offline stage.
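To make the lifting idea concrete, here is a minimal worked sketch in the spirit of the abstract, assuming a standard invariant-energy-quadratization-style choice of auxiliary variable; the paper's exact definitions may differ. For a conservative PDE with state u and energy

    H(u, u_t) = \int \tfrac{1}{2} u_t^2 + \tfrac{1}{2} |\nabla u|^2 + g(u) \, dx,

where g is a non-quadratic energy density, one can introduce the auxiliary variable w = \sqrt{2 g(u)}, so that the lifted energy

    \tilde{H}(u, u_t, w) = \int \tfrac{1}{2} u_t^2 + \tfrac{1}{2} |\nabla u|^2 + \tfrac{1}{2} w^2 \, dx

is quadratic in the lifted state (u, u_t, w). For the sine-Gordon case, g(u) = 1 - \cos u gives w = \sqrt{2(1 - \cos u)} = 2\,|\sin(u/2)|. Depending on g, further lifted variables may be needed so that the lifted dynamics (not just the energy) are quadratic; projecting the lifted system onto a POD basis then yields a quadratic reduced-order model that conserves the quadratized energy.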
Related papers
- A Deep Learning approach for parametrized and time dependent Partial Differential Equations using Dimensionality Reduction and Neural ODEs [46.685771141109306]
We propose an autoregressive, data-driven method for time-dependent, parametric, and (typically) nonlinear PDEs, built on the analogy with classical numerical solvers. We show that by leveraging dimensionality reduction (DR) we can deliver not only more accurate predictions but also a considerably lighter and faster deep learning model.
arXiv Detail & Related papers (2025-02-12T11:16:15Z) - Two-Point Deterministic Equivalence for Stochastic Gradient Dynamics in Linear Models [76.52307406752556]
We derive a novel deterministic equivalence for the two-point function of a random resolvent. We give a unified derivation of the performance of a wide variety of high-dimensional linear models trained with gradient descent.
arXiv Detail & Related papers (2025-02-07T16:45:40Z) - On Disentangled Training for Nonlinear Transform in Learned Image Compression [59.66885464492666]
Learned image compression (LIC) has demonstrated superior rate-distortion (R-D) performance compared to traditional codecs.
Existing LIC methods overlook the slow convergence caused by compacting energy in learning nonlinear transforms.
We propose a linear auxiliary transform (AuxT) to disentangle energy compaction in training nonlinear transforms.
arXiv Detail & Related papers (2025-01-23T15:32:06Z) - FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
The numerical simulation and optimization of technical systems described by partial differential equations are expensive.
A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method.
In this paper, we extend this approach to saddle-point problems and nonlinear fluid dynamics problems.
arXiv Detail & Related papers (2024-09-06T07:17:01Z) - Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces novel deep dynamical models designed to represent continuous-time sequences. We train the model using maximum likelihood estimation with Markov chain Monte Carlo. Experimental results on oscillating systems, videos, and real-world state sequences (MuJoCo) demonstrate that our model with the learnable energy-based prior outperforms existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z) - Harmonizing SO(3)-Equivariance with Neural Expressiveness: a Hybrid Deep Learning Framework Oriented to the Prediction of Electronic Structure Hamiltonian [36.13416266854978]
HarmoSE is a two-stage cascaded deep learning regression framework.
The first stage predicts Hamiltonians with abundant extracted SO(3)-equivariant features.
The second stage refines the first stage's output into a fine-grained prediction of the Hamiltonians.
arXiv Detail & Related papers (2024-01-01T12:57:15Z) - Symplectic model reduction of Hamiltonian systems using data-driven quadratic manifolds [0.559239450391449]
We present two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems.
The addition of quadratic terms to the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality.
arXiv Detail & Related papers (2023-05-24T18:23:25Z) - Dynamical chaos in nonlinear Schrödinger models with subquadratic power nonlinearity [137.6408511310322]
We deal with a class of nonlinear Schrödinger lattices with random potential and subquadratic power nonlinearity.
We show that the spreading process is subdiffusive and has complex microscopic organization.
The limit of quadratic power nonlinearity is also discussed and shown to result in a delocalization border.
arXiv Detail & Related papers (2023-01-20T16:45:36Z) - Counting Phases and Faces Using Bayesian Thermodynamic Integration [77.34726150561087]
We introduce a new approach to reconstruction of the thermodynamic functions and phase boundaries in two-parametric statistical mechanics systems.
We use the proposed approach to accurately reconstruct the partition functions and phase diagrams of the Ising model and the exactly solvable non-equilibrium TASEP.
arXiv Detail & Related papers (2022-05-18T17:11:23Z) - Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution with a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Model Reduction of Swing Equations with Physics Informed PDE [3.3263205689999444]
This manuscript is the first step towards building a robust and efficient model reduction methodology to capture transient dynamics in a transmission level electric power system.
We show that, when properly coarse-grained, i.e., with the PDE coefficients and source terms extracted via a spatial convolution of the respective discrete coefficients in the swing equations, the resulting PDE reproduces the original swing dynamics faithfully and efficiently.
arXiv Detail & Related papers (2021-10-26T22:46:20Z) - Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z) - Reduced operator inference for nonlinear partial differential equations [2.389598109913753]
We present a new machine learning method that learns from data a surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE).
Our formulation generalizes the Operator Inference method previously developed in [B. Peherstorfer and K. Willcox, Data-driven operator inference for non-intrusive projection-based model reduction, Computer Methods in Applied Mechanics and Engineering, 306] for systems governed by ordinary differential equations; a minimal sketch of the operator inference regression appears after this list.
arXiv Detail & Related papers (2021-01-29T21:50:20Z) - Operator inference for non-intrusive model reduction of systems with non-polynomial nonlinear terms [6.806310449963198]
This work presents a non-intrusive model reduction method to learn low-dimensional models of dynamical systems with non-polynomial nonlinear terms that are spatially local.
The proposed approach requires only the non-polynomial terms in analytic form and learns the rest of the dynamics from snapshots computed with a potentially black-box full-model solver.
The proposed method is demonstrated on three problems governed by partial differential equations, namely the diffusion-reaction Chafee-Infante model, a tubular reactor model for reactive flows, and a batch-chromatography model that describes a chemical separation process.
arXiv Detail & Related papers (2020-02-22T16:27:05Z)
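As a rough illustration of the operator inference approach referenced in the last two items, the following sketch fits a plain quadratic reduced model to snapshot data, assuming no inputs and exactly known time derivatives; the function names, dimensions, and least-squares setup are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Sketch: learn a quadratic reduced model  d/dt z = A z + H (z kron z)
    # from full-state snapshots X (n x K) and time derivatives Xdot (n x K).

    def pod_basis(X, r):
        # Leading r left singular vectors of the snapshot matrix.
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, :r]

    def infer_operators(X, Xdot, r):
        V = pod_basis(X, r)
        Z = V.T @ X          # reduced states, r x K
        Zdot = V.T @ Xdot    # reduced time derivatives, r x K
        # Column-wise Kronecker products z_k (kron) z_k, stacked as r^2 x K.
        Z2 = np.column_stack([np.kron(z, z) for z in Z.T])
        D = np.vstack([Z, Z2])                               # (r + r^2) x K
        # Least-squares fit of the stacked operator block [A  H].
        AH = np.linalg.lstsq(D.T, Zdot.T, rcond=None)[0].T   # r x (r + r^2)
        return V, AH[:, :r], AH[:, r:]

In the non-polynomial setting of the last item, the analytically known non-polynomial terms would be evaluated directly in the reduced coordinates and only the remaining operators would be fit by such a regression.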