Learning in latent spaces improves the predictive accuracy of deep
neural operators
- URL: http://arxiv.org/abs/2304.07599v1
- Date: Sat, 15 Apr 2023 17:13:09 GMT
- Title: Learning in latent spaces improves the predictive accuracy of deep
neural operators
- Authors: Katiana Kontolati, Somdatta Goswami, George Em Karniadakis, Michael D.
Shields
- Abstract summary: L-DeepONet is an extension of standard DeepONet, which leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders.
We show that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Operator regression provides a powerful means of constructing
discretization-invariant emulators for partial-differential equations (PDEs)
describing physical systems. Neural operators specifically employ deep neural
networks to approximate mappings between infinite-dimensional Banach spaces. As
data-driven models, neural operators require the generation of labeled
observations, which in cases of complex high-fidelity models result in
high-dimensional datasets containing redundant and noisy features, which can
hinder gradient-based optimization. Mapping these high-dimensional datasets to
a low-dimensional latent space of salient features can make it easier to work
with the data and also enhance learning. In this work, we investigate the
latent deep operator network (L-DeepONet), an extension of standard DeepONet,
which leverages latent representations of high-dimensional PDE input and output
functions identified with suitable autoencoders. We illustrate that L-DeepONet
outperforms the standard approach in terms of both accuracy and computational
efficiency across diverse time-dependent PDEs, e.g., modeling the growth of
fracture in brittle materials, convective fluid flows, and large-scale
atmospheric flows exhibiting multiscale dynamical features.
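
The workflow the abstract describes can be summarized in a short sketch. Below is a minimal, illustrative PyTorch version, assuming plain fully connected autoencoders and a vanilla DeepONet whose trunk takes the query time as input; all layer widths, the latent dimension r, and variable names are assumptions made for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    # Maps high-dimensional PDE snapshots (dimension d) to a latent space of size r.
    def __init__(self, d, r):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, r))
        self.decoder = nn.Sequential(nn.Linear(r, 256), nn.ReLU(), nn.Linear(256, d))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class LatentDeepONet(nn.Module):
    # DeepONet acting entirely in the latent space: the branch net encodes the
    # latent input function, the trunk net encodes the query time, and their
    # combination yields the latent output function (decoded afterwards).
    def __init__(self, r, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(r, 128), nn.Tanh(), nn.Linear(128, r * p))
        self.trunk = nn.Sequential(nn.Linear(1, 128), nn.Tanh(), nn.Linear(128, p))
        self.r, self.p = r, p
    def forward(self, z_in, t):
        b = self.branch(z_in).view(-1, self.r, self.p)   # (batch, r, p)
        w = self.trunk(t).unsqueeze(-1)                   # (batch, p, 1)
        return torch.bmm(b, w).squeeze(-1)                # (batch, r)

# Usage: (1) train the autoencoders on input/output snapshots, (2) train the
# DeepONet on latent pairs only, (3) decode latent predictions at inference.
d, r = 4096, 16
ae_in, ae_out, don = AutoEncoder(d, r), AutoEncoder(d, r), LatentDeepONet(r)
x = torch.randn(8, d)                 # sampled input functions on a grid
t = torch.rand(8, 1)                  # query times
z_pred = don(ae_in.encoder(x), t)     # latent output prediction
u_pred = ae_out.decoder(z_pred)       # back to the full-dimensional field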
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Separable DeepONet: Breaking the Curse of Dimensionality in Physics-Informed Machine Learning [0.0]
In the absence of labeled datasets, we utilize the PDE residual loss to learn the physical system, an approach known as physics-informed DeepONet (a minimal sketch of this residual loss appears after this list).
This method faces significant computational challenges, primarily due to the curse of dimensionality, as the computational cost increases exponentially with finer discretization.
We introduce the Separable DeepONet framework to address these challenges and improve scalability for high-dimensional PDEs.
arXiv Detail & Related papers (2024-07-21T16:33:56Z)
- RandONet: Shallow-Networks with Random Projections for learning linear and nonlinear operators [0.0]
We present Random Projection-based Operator Networks (RandONets).
RandONets are shallow networks with random projections that learn linear and nonlinear operators.
We show that, for this particular task, RandONets outperform the "vanilla" DeepONets in terms of both numerical approximation accuracy and computational cost.
arXiv Detail & Related papers (2024-06-08T13:20:48Z)
- Learning time-dependent PDE via graph neural networks and deep operator network for robust accuracy on irregular grids [14.93012615797081]
GraphDeepONet is an autoregressive model based on graph neural networks (GNNs).
It exhibits robust accuracy in predicting solutions compared to existing GNN-based PDE solver models.
Unlike traditional DeepONet and its variants, GraphDeepONet enables time extrapolation for time-dependent PDE solutions.
arXiv Detail & Related papers (2024-02-13T03:14:32Z)
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art results and yields an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
arXiv Detail & Related papers (2022-12-29T20:57:46Z)
- Reliable extrapolation of deep neural operators informed by physics or sparse observations [2.887258133992338]
Deep neural operators can learn nonlinear mappings between infinite-dimensional function spaces via deep neural networks.
DeepONets provide a new simulation paradigm in science and engineering.
We propose five reliable learning methods that guarantee a safe prediction under extrapolation.
arXiv Detail & Related papers (2022-12-13T03:02:46Z)
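
As referenced in the Separable DeepONet entry above, physics-informed DeepONet training replaces labeled outputs with a PDE residual loss evaluated at collocation points. The sketch below illustrates that loss for a toy 1D heat equation u_t = u_xx with a vanilla (non-separable) DeepONet; the network, the PDE, and all names are illustrative assumptions, and the separable trunk factorization that the paper introduces to reduce cost is not shown.

import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m=100, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(m, 128), nn.Tanh(), nn.Linear(128, p))
        self.trunk = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, p))
    def forward(self, u0, xt):
        # u0: (batch, m) sensor values of the input function; xt: (batch, 2) points (x, t)
        return (self.branch(u0) * self.trunk(xt)).sum(-1, keepdim=True)

def residual_loss(model, u0, xt):
    # Penalize the PDE residual u_t - u_xx at collocation points; no labels needed.
    xt = xt.clone().requires_grad_(True)
    u = model(u0, xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    return ((u_t - u_xx) ** 2).mean()

model = DeepONet()
u0 = torch.randn(32, 100)   # sampled input functions at 100 sensors
xt = torch.rand(32, 2)      # random collocation points (x, t)
loss = residual_loss(model, u0, xt)
loss.backward()             # gradients for a standard optimizer step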