Non-intrusive reduced order modeling of natural convection in porous
media using convolutional autoencoders: comparison with linear subspace
techniques
- URL: http://arxiv.org/abs/2107.11460v1
- Date: Fri, 23 Jul 2021 20:58:15 GMT
- Title: Non-intrusive reduced order modeling of natural convection in porous
media using convolutional autoencoders: comparison with linear subspace
techniques
- Authors: T. Kadeethum, F. Ballarin, Y. Cho, D. O'Malley, H. Yoon, N. Bouklas
- Abstract summary: Natural convection in porous media is a highly nonlinear multiphysical problem relevant to many engineering applications.
We present a non-intrusive reduced order model of natural convection in porous media employing deep convolutional autoencoders.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural convection in porous media is a highly nonlinear multiphysical
problem relevant to many engineering applications (e.g., the process of
$\mathrm{CO_2}$ sequestration). Here, we present a non-intrusive reduced order
model of natural convection in porous media employing deep convolutional
autoencoders for the compression and reconstruction and either radial basis
function (RBF) interpolation or artificial neural networks (ANNs) for mapping
parameters of partial differential equations (PDEs) on the corresponding
nonlinear manifolds. To benchmark our approach, we also describe linear
compression and reconstruction processes relying on proper orthogonal
decomposition (POD) and ANNs. We present comprehensive comparisons among
different models through three benchmark problems. The reduced order models, both linear and nonlinear, are much faster than the finite element model, achieving a maximum speed-up of $7 \times 10^{6}$ because our framework is not bound by the Courant-Friedrichs-Lewy condition; hence, it can deliver quantities of interest at any given time, unlike the finite element model.
Our model's accuracy still lies within a mean squared error of 0.07 (two orders of magnitude lower than the maximum value of the finite element results) in the worst-case scenario. We illustrate that, in specific settings, the nonlinear approach outperforms its linear counterpart and vice versa. We hypothesize that a visual comparison using principal component analysis (PCA) or t-Distributed Stochastic Neighbor Embedding (t-SNE) could indicate which method will perform better prior to employing any specific compression strategy.
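The pipeline the abstract describes can be illustrated with a minimal, self-contained sketch: a convolutional autoencoder compresses snapshot fields to a low-dimensional latent space, and radial basis function (RBF) interpolation maps (parameter, time) pairs to latent codes, so online queries need no PDE solve. The synthetic 32x32 snapshots, network sizes, and training settings below are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch, assuming synthetic data in place of FEM snapshots and an
# illustrative network size. Offline: train the CAE, then fit an RBF map from
# (parameter, time) to latent codes. Online: interpolate and decode.
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
mu_t = rng.uniform(0.0, 1.0, size=(200, 2))          # (parameter, time) pairs
x = np.linspace(0, 1, 32)
X, Y = np.meshgrid(x, x)
# Synthetic stand-in for FEM snapshot fields, shape (200, 1, 32, 32).
snaps = np.stack([np.sin(np.pi * X * (1 + m)) * np.cos(np.pi * Y * t)
                  for m, t in mu_t])[:, None].astype(np.float32)

class CAE(nn.Module):
    """Convolutional autoencoder with a small latent bottleneck."""
    def __init__(self, latent=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(), nn.Linear(16 * 8 * 8, latent))
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (16, 8, 8)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
    def forward(self, z):
        return self.dec(self.enc(z))

model = CAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.from_numpy(snaps)
for epoch in range(200):               # offline stage 1: compression
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    opt.step()

with torch.no_grad():                  # offline stage 2: parameter-to-latent map
    codes = model.enc(data).numpy()
rbf = RBFInterpolator(mu_t, codes, kernel='thin_plate_spline')

# Online stage: query an unseen (mu, t) with no PDE solve and no CFL limit.
query = np.array([[0.5, 0.7]])
z = torch.from_numpy(rbf(query).astype(np.float32))
with torch.no_grad():
    field = model.dec(z).numpy()[0, 0]  # reconstructed 32x32 field
print(field.shape)
```

Swapping the encoder/decoder pair for a truncated POD basis, and optionally the RBF for an ANN, would give a linear benchmark along the lines the abstract compares against.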
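The closing hypothesis, that a PCA or t-SNE view of the data can hint at whether linear or nonlinear compression will work better, can likewise be sketched. This reuses the synthetic `snaps` and `mu_t` arrays from the sketch above; the judgment of which embedding organizes the data more cleanly remains a visual one.

```python
# A hedged sketch of the proposed diagnostic: embed the flattened snapshots
# with PCA and t-SNE, color by parameter, and inspect the two views before
# committing to a linear (POD) or nonlinear (CAE) compression strategy.
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

flat = snaps.reshape(len(snaps), -1)           # one row per snapshot
pca_2d = PCA(n_components=2).fit_transform(flat)
tsne_2d = TSNE(n_components=2, perplexity=30,
               random_state=0).fit_transform(flat)

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, emb, name in [(axes[0], pca_2d, 'PCA'), (axes[1], tsne_2d, 't-SNE')]:
    sc = ax.scatter(emb[:, 0], emb[:, 1], c=mu_t[:, 0], s=10)
    ax.set_title(name)
fig.colorbar(sc, ax=axes, label='parameter mu')
plt.show()
```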
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
arXiv Detail & Related papers (2024-11-26T15:35:44Z)
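A hedged sketch of the HIGGS recipe as summarized above: rotate a weight matrix with a randomized Hadamard transform so its entries become approximately Gaussian, then quantize on a uniform grid whose scale minimizes mean squared error. The brute-force scale search and the 4-bit setting below are illustrative stand-ins for the paper's analytic, MSE-optimal grids.

```python
# Sketch of data-free quantization via Hadamard rotation + MSE-scaled grid.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
d = 256                                        # power of two for Hadamard
W = rng.standard_normal((d, d))                # stand-in weight matrix

H = hadamard(d) / np.sqrt(d)                   # orthonormal Hadamard matrix
signs = rng.choice([-1.0, 1.0], size=d)        # random sign flips per column
R = H * signs                                  # randomized, still orthogonal
W_rot = W @ R                                  # entries now roughly Gaussian

def quantize(w, bits, scale):
    """Uniform symmetric quantization to 2**bits levels."""
    levels = 2 ** (bits - 1)
    q = np.clip(np.round(w / scale), -levels, levels - 1)
    return q * scale

# MSE-optimal scale found by brute force (the paper derives grids analytically).
scales = np.linspace(0.01, 1.0, 200) * np.abs(W_rot).max()
errs = [np.mean((quantize(W_rot, 4, s) - W_rot) ** 2) for s in scales]
best = scales[int(np.argmin(errs))]

W_hat = quantize(W_rot, 4, best) @ R.T         # dequantize, rotate back
print("relative error:", np.linalg.norm(W_hat - W) / np.linalg.norm(W))
```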
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
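Of the baselines named above, deep ensembling is simple enough to sketch: train several independently initialized surrogates and read the predictive uncertainty from the spread of their outputs. The toy regression task and ensemble size are assumptions; the paper's approximate Bayesian method is more elaborate.

```python
# Sketch of deep ensembling for surrogate uncertainty on a toy observable.
import torch
import torch.nn as nn

x = torch.linspace(-1, 1, 200)[:, None]
y = torch.sin(3 * x) + 0.05 * torch.randn(200, 1)   # toy "PDE observable"

def train_member(seed):
    """Train one surrogate from an independent random initialization."""
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()
    return net

ensemble = [train_member(s) for s in range(5)]
with torch.no_grad():
    preds = torch.stack([m(x) for m in ensemble])   # (members, 200, 1)
mean, std = preds.mean(0), preds.std(0)             # predictive mean, spread
print("max predictive std:", float(std.max()))
```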
- Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve pruning of 80% of the parameters while retaining 93.43% of the original performance without any calibration data.
arXiv Detail & Related papers (2024-02-26T05:51:47Z)
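A minimal sketch of a data-free rank-k approximation in the spirit of the summary above: keep the top-k singular triplets of a weight matrix, choosing k so the two factors store roughly 20% of the original parameters (mirroring the 80% figure). The low-rank-plus-noise test matrix is an illustrative assumption, and the paper's joint, denoising variant is more involved.

```python
# Sketch of rank-k weight compression via truncated SVD, no calibration data.
import numpy as np

rng = np.random.default_rng(0)
m, n = 512, 512
# Stand-in "weight matrix": low-rank signal plus small noise.
W = rng.standard_normal((m, 64)) @ rng.standard_normal((64, n)) / 64
W += 0.01 * rng.standard_normal((m, n))

# Choose k so the two factors hold ~20% of the original parameter count:
# k * (m + n) <= 0.2 * m * n.
k = int(0.2 * m * n / (m + n))

U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * S[:k]              # m x k factor, singular values folded in
B = Vt[:k]                        # k x n factor
W_hat = A @ B                     # rank-k reconstruction; truncation denoises

kept = k * (m + n) / (m * n)
err = np.linalg.norm(W_hat - W) / np.linalg.norm(W)
print(f"parameters kept: {kept:.0%}, relative error: {err:.3f}")
```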
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Due to the non-convex nature of training Deep Neural Network (DNN) models, their effectiveness relies on non-convex optimization heuristics.
In this paper, we examine convex formulations of neural network training based on Lasso models.
We show that all the stationary points of the non-convex training objective can be characterized as global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the vector autoregressive (VAR) coefficients in a parsimonious manner.
arXiv Detail & Related papers (2023-09-29T11:42:59Z)
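A hedged sketch of the support-identification task in the summary above: fit lag-1 VAR coefficients with a Lasso penalty so that irrelevant cross-series interactions shrink to zero. The paper's nonlinear interaction terms are omitted; the dimensions and true coefficient pattern are illustrative.

```python
# Sketch of sparse VAR support recovery with one Lasso per target series.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, n = 300, 5
A_true = np.zeros((n, n))
A_true[0, 1] = 0.8                              # sparse true interactions
A_true[2, 2] = 0.5
x = np.zeros((T, n))
for t in range(1, T):                           # simulate the VAR(1) process
    x[t] = x[t - 1] @ A_true.T + 0.1 * rng.standard_normal(n)

A_hat = np.zeros((n, n))
for i in range(n):                              # one sparse regression per series
    A_hat[i] = Lasso(alpha=0.01).fit(x[:-1], x[1:, i]).coef_
print("recovered support:\n", (np.abs(A_hat) > 0.05).astype(int))
```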
- A graph convolutional autoencoder approach to model order reduction for parametrized PDEs [0.8192907805418583]
The present work proposes a framework for nonlinear model order reduction based on a Graph Convolutional Autoencoder (GCA-ROM).
We develop a non-intrusive and data-driven nonlinear reduction approach, exploiting GNNs to encode the reduced manifold and enable fast evaluations of parametrized PDEs.
arXiv Detail & Related papers (2023-05-15T12:01:22Z)
- Nonlinear proper orthogonal decomposition for convection-dominated flows [0.0]
We propose an end-to-end Galerkin-free model combining autoencoders with long short-term memory networks for dynamics.
Our approach not only improves the accuracy, but also significantly reduces the computational cost of training and testing.
arXiv Detail & Related papers (2021-10-15T18:05:34Z)
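A minimal sketch of the Galerkin-free combination summarized above: an autoencoder compresses snapshots and an LSTM advances the latent state in time, so no projected equations are solved online. The dense (rather than convolutional) autoencoder, layer sizes, and joint one-step training scheme are illustrative assumptions.

```python
# Sketch of autoencoder + LSTM latent dynamics, trained jointly.
import torch
import torch.nn as nn

torch.manual_seed(0)
snaps = torch.randn(100, 64)         # stand-in: 100 time steps, 64-dof field

ae = nn.Sequential(nn.Linear(64, 16), nn.Tanh(), nn.Linear(16, 4))  # encoder
de = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 64))  # decoder
lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 4)              # maps LSTM state to the next latent code

params = (list(ae.parameters()) + list(de.parameters())
          + list(lstm.parameters()) + list(head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

for epoch in range(200):
    z = ae(snaps)                                     # latent trajectory (100, 4)
    recon_loss = nn.functional.mse_loss(de(z), snaps)
    out, _ = lstm(z[None, :-1])                       # predict z[t+1] from z[<=t]
    dyn_loss = nn.functional.mse_loss(head(out[0]), z[1:])
    loss = recon_loss + dyn_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                # online: roll forward in latent space, decode
    z0 = ae(snaps[:1])
    out, _ = lstm(z0[None])
    next_field = de(head(out[0]))    # predicted next snapshot, shape (1, 64)
print(next_field.shape)
```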
- Non-intrusive reduced order modeling of poroelasticity of heterogeneous media based on a discontinuous Galerkin approximation [0.0]
We present a non-intrusive model reduction framework for linear poroelasticity problems in heterogeneous porous media.
We utilize the interior penalty discontinuous Galerkin (DG) method as a full order solver to handle discontinuity.
We show that our framework provides reasonable approximations of the DG solution while being significantly faster.
arXiv Detail & Related papers (2021-01-28T04:21:06Z)
- Estimation of Switched Markov Polynomial NARX models [75.91002178647165]
We identify a class of models for hybrid dynamical systems characterized by nonlinear autoregressive (NARX) components.
The proposed approach is demonstrated on an SMNARX problem composed of three nonlinear sub-models with specific regressors.
arXiv Detail & Related papers (2020-09-29T15:00:47Z)
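A hedged sketch of a single polynomial NARX component like those composing the SMNARX models above: regress y[t] on polynomial features of lagged inputs and outputs by least squares. The Markov switching layer is omitted; the lags, degree, and toy system are illustrative.

```python
# Sketch of one polynomial NARX model fit by linear least squares.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T = 500
u = rng.uniform(-1, 1, T)                      # exogenous input
y = np.zeros(T)
for t in range(2, T):                          # simulate a toy NARX system
    y[t] = (0.5 * y[t-1] - 0.2 * y[t-2] ** 2 + 0.8 * u[t-1]
            + 0.01 * rng.standard_normal())

# Regressors: lagged outputs and inputs, expanded to degree-2 polynomials.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
phi = PolynomialFeatures(degree=2, include_bias=True).fit_transform(X)
model = LinearRegression().fit(phi, target)
print("one-step R^2:", model.score(phi, target))
```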
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
- A comprehensive deep learning-based approach to reduced order modeling of nonlinear time-dependent parametrized PDEs [0.0]
We show how to construct a DL-ROM for both linear and nonlinear time-dependent parametrized PDEs.
Numerical results indicate that DL-ROMs whose dimension is equal to the intrinsic dimensionality of the PDE solution manifold are able to approximate the solution of parametrized PDEs.
arXiv Detail & Related papers (2020-01-12T21:18:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.