Manifold learning-based polynomial chaos expansions for high-dimensional
surrogate models
- URL: http://arxiv.org/abs/2107.09814v1
- Date: Wed, 21 Jul 2021 00:24:15 GMT
- Title: Manifold learning-based polynomial chaos expansions for high-dimensional
surrogate models
- Authors: Katiana Kontolati, Dimitrios Loukrezis, Ketson R. M. dos Santos,
Dimitrios G. Giovanis, Michael D. Shields
- Abstract summary: We introduce a manifold learning-based method for uncertainty quantification (UQ) in systems describing complex spatiotemporal processes.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we introduce a manifold learning-based method for uncertainty
quantification (UQ) in systems describing complex spatiotemporal processes. Our
first objective is to identify the embedding of a set of high-dimensional data
representing quantities of interest of the computational or analytical model.
For this purpose, we employ Grassmannian diffusion maps, a two-step nonlinear
dimension reduction technique which allows us to reduce the dimensionality of
the data and identify meaningful geometric descriptions in a parsimonious and
inexpensive manner. Polynomial chaos expansion is then used to construct a
mapping between the stochastic input parameters and the diffusion coordinates
of the reduced space. An adaptive clustering technique is proposed to identify
an optimal number of clusters of points in the latent space. The similarity of
points allows us to construct a number of geometric harmonic emulators which
are finally utilized as a set of inexpensive pre-trained models to perform an
inverse map of realizations of latent features to the ambient space and thus
perform accurate out-of-sample predictions. In this way, the proposed method acts as
an encoder-decoder system which is able to automatically handle very
high-dimensional data while simultaneously operating successfully in the
small-data regime. The method is demonstrated on two benchmark problems and on
a system of advection-diffusion-reaction equations which model a first-order
chemical reaction between two species. In all test cases, the proposed method
is able to achieve highly accurate approximations which ultimately lead to the
significant acceleration of UQ tasks.
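To make the encoder-PCE-decoder workflow described in the abstract concrete, the following Python sketch mimics it on a toy problem. It is not the authors' implementation: the Grassmannian diffusion maps are replaced by a plain diffusion-map embedding, the polynomial chaos expansion by polynomial ridge regression, and the geometric harmonic emulators by a simple kernel (Nystrom-style) interpolant; the forward model and all names are illustrative assumptions.

```python
# Minimal sketch of the encoder -> polynomial map -> decoder idea from the abstract.
# Simplified stand-ins are used throughout; this is NOT the paper's algorithm.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy "expensive model": maps 2 stochastic inputs to a 500-dimensional field.
def forward_model(theta, n_grid=500):
    x = np.linspace(0.0, 1.0, n_grid)
    return np.sin(2 * np.pi * theta[0] * x) * np.exp(-theta[1] * x)

n_train = 60
Theta = rng.uniform([0.5, 0.1], [2.0, 3.0], size=(n_train, 2))   # stochastic inputs
Y = np.array([forward_model(t) for t in Theta])                  # high-dimensional QoIs

# Encoder: simple diffusion-map coordinates of the high-dimensional snapshots
# (stand-in for the two-step Grassmannian diffusion maps of the paper).
def diffusion_coords(Y, n_coords=3, eps=None):
    D2 = cdist(Y, Y, "sqeuclidean")
    eps = eps or np.median(D2)
    K = np.exp(-D2 / eps)
    P = K / K.sum(axis=1, keepdims=True)           # row-normalized Markov kernel
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    keep = order[1:n_coords + 1]                    # drop the trivial eigenvector
    return vecs.real[:, keep] * vals.real[keep]

Z = diffusion_coords(Y)                             # latent (diffusion) coordinates

# "PCE" step: total-degree polynomial map from inputs theta to latent coordinates z
# (polynomial ridge regression stands in for a true polynomial chaos expansion).
poly = PolynomialFeatures(degree=3)
pce = Ridge(alpha=1e-8).fit(poly.fit_transform(Theta), Z)

# Decoder: kernel interpolant from latent coordinates back to the ambient field
# (stands in for the geometric harmonic emulators of the paper).
def decode(z_new, Z, Y, eps_z=None):
    D2 = cdist(z_new, Z, "sqeuclidean")
    eps_z = eps_z or np.median(cdist(Z, Z, "sqeuclidean"))
    W = np.exp(-D2 / eps_z)
    W /= W.sum(axis=1, keepdims=True)
    return W @ Y

# Out-of-sample prediction: new input -> latent coordinates -> ambient field.
theta_new = np.array([[1.3, 1.0]])
z_new = pce.predict(poly.transform(theta_new))
y_pred = decode(z_new, Z, Y)
y_true = forward_model(theta_new[0])
print("relative L2 error:", np.linalg.norm(y_pred - y_true) / np.linalg.norm(y_true))
```

The sketch only illustrates the data flow (encode snapshots, learn input-to-latent map, decode latent predictions); the paper's adaptive clustering of the latent space and the Grassmannian geometry are omitted.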
Related papers
- Shape-informed surrogate models based on signed distance function domain encoding [8.052704959617207]
We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs).
Our approach is based on the combination of two neural networks (NNs).
arXiv Detail & Related papers (2024-09-19T01:47:04Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Polynomial Chaos Expansions on Principal Geodesic Grassmannian Submanifolds for Surrogate Modeling and Uncertainty Quantification [0.41709348827585524]
We introduce a manifold learning-based surrogate modeling framework for uncertainty quantification in high-dimensional systems.
We employ Principal Geodesic Analysis on the Grassmann manifold of the response to identify a set of disjoint principal geodesic submanifolds.
Polynomial chaos expansion is then used to construct a mapping between the random input parameters and the projection of the response.
arXiv Detail & Related papers (2024-01-30T02:13:02Z)
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- Diffeomorphic Mesh Deformation via Efficient Optimal Transport for Cortical Surface Reconstruction [40.73187749820041]
Mesh deformation plays a pivotal role in many 3D vision tasks including dynamic simulations, rendering, and reconstruction.
A prevalent approach in current deep learning is the set-based one, which measures the discrepancy between two surfaces by comparing two randomly sampled point clouds from the two meshes with the Chamfer pseudo-distance.
We propose a novel metric for learning mesh deformation, defined by the sliced Wasserstein distance on meshes represented as probability measures, which generalizes the set-based approach (a toy comparison of the two discrepancies is sketched after this list).
arXiv Detail & Related papers (2023-05-27T19:10:19Z)
- VI-DGP: A variational inference method with deep generative prior for solving high-dimensional inverse problems [0.7734726150561089]
We propose a novel approximation method for estimating the high-dimensional posterior distribution.
This approach leverages a deep generative model to learn a prior model capable of generating spatially-varying parameters.
The proposed method can be fully implemented in an automatic differentiation manner.
arXiv Detail & Related papers (2023-02-22T06:48:10Z)
- Gaussian process regression and conditional Karhunen-Loève models for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
- Deep-learning of Parametric Partial Differential Equations from Sparse and Noisy Data [2.4431531175170362]
In this work, a new framework combining neural networks, genetic algorithms, and adaptive methods is put forward to address all of these challenges simultaneously.
A trained neural network is utilized to calculate derivatives and generate a large amount of meta-data, which addresses the problem of sparse, noisy data.
Next, a genetic algorithm is utilized to discover the form of the PDEs and the corresponding coefficients from an incomplete candidate library.
A two-step adaptive method is introduced to discover parametric PDEs with spatially or temporally varying coefficients (a simplified numerical sketch of the derivative-plus-library idea follows this list).
arXiv Detail & Related papers (2020-05-16T09:09:57Z)
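For the mesh-deformation entry above, the following toy sketch contrasts the set-based Chamfer pseudo-distance with a Monte Carlo sliced Wasserstein distance between two equal-size point clouds. It is an assumption-laden simplification: the paper defines the metric on meshes represented as probability measures, which this sketch does not reproduce, and all names are illustrative.

```python
# Toy comparison of the two discrepancies mentioned in the mesh-deformation entry:
# the set-based Chamfer pseudo-distance and a sliced 1-Wasserstein distance
# estimated with random projections. Illustrative only.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

def chamfer(A, B):
    """Symmetric Chamfer pseudo-distance between point sets A and B."""
    D = cdist(A, B)
    return D.min(axis=1).mean() + D.min(axis=0).mean()

def sliced_wasserstein(A, B, n_proj=200):
    """Monte Carlo sliced 1-Wasserstein distance for equal-size point clouds:
    project both clouds onto random directions, sort, and average the 1D costs."""
    dirs = rng.normal(size=(n_proj, A.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pa = np.sort(A @ dirs.T, axis=0)     # (n_points, n_proj) sorted projections
    pb = np.sort(B @ dirs.T, axis=0)
    return np.abs(pa - pb).mean()

# Two noisy samplings of a unit sphere, the second slightly shifted.
A = rng.normal(size=(500, 3)); A /= np.linalg.norm(A, axis=1, keepdims=True)
B = rng.normal(size=(500, 3)); B /= np.linalg.norm(B, axis=1, keepdims=True)
B += 0.05
print("Chamfer:", chamfer(A, B))
print("Sliced Wasserstein:", sliced_wasserstein(A, B))
```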
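For the PDE-discovery entry above, the following simplified stand-in shows the derivative-plus-candidate-library idea on synthetic heat-equation data: derivatives are estimated with finite differences (the paper uses a trained neural network) and the PDE form is selected by thresholded least squares (the paper uses a genetic algorithm with an incomplete library). The setup and all names are illustrative assumptions.

```python
# Simplified stand-in for the PDE-discovery workflow: numerical derivatives plus a
# small candidate library fitted by least squares. NOT the paper's NN/GA pipeline.
import numpy as np

# Simulate u_t = nu * u_xx on a periodic grid with explicit Euler.
nu_true = 0.1
nx, nt, dx, dt = 128, 400, 1.0 / 128, 1e-4
x = np.arange(nx) * dx
u = np.exp(-100 * (x - 0.5) ** 2)
snapshots = [u.copy()]
for _ in range(nt):
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * nu_true * u_xx
    snapshots.append(u.copy())
U = np.array(snapshots)                               # (nt+1, nx)

# Numerical derivatives (stand-in for the NN-generated "meta-data").
U_t  = (U[2:] - U[:-2]) / (2 * dt)
U_m  = U[1:-1]
U_x  = (np.roll(U_m, -1, axis=1) - np.roll(U_m, 1, axis=1)) / (2 * dx)
U_xx = (np.roll(U_m, -1, axis=1) - 2 * U_m + np.roll(U_m, 1, axis=1)) / dx**2

# Candidate library and least-squares fit of u_t = Library @ coeffs.
Library = np.column_stack([c.ravel() for c in (U_m, U_x, U_xx, U_m * U_x)])
coeffs, *_ = np.linalg.lstsq(Library, U_t.ravel(), rcond=None)
coeffs[np.abs(coeffs) < 1e-3] = 0.0                   # crude sparsification step
print("coefficients for [u, u_x, u_xx, u*u_x]:", coeffs)   # expect roughly [0, 0, 0.1, 0]
```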
This list is automatically generated from the titles and abstracts of the papers on this site.