NOMAD: Nonlinear Manifold Decoders for Operator Learning
- URL: http://arxiv.org/abs/2206.03551v1
- Date: Tue, 7 Jun 2022 19:52:44 GMT
- Title: NOMAD: Nonlinear Manifold Decoders for Operator Learning
- Authors: Jacob H. Seidman, Georgios Kissas, Paris Perdikaris, George J. Pappas
- Abstract summary: Supervised learning in function spaces is an emerging area of machine learning research.
We present NOMAD, a novel operator learning framework with a nonlinear decoder map capable of learning finite dimensional representations of nonlinear submanifolds in function spaces.
- Score: 17.812064311297117
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Supervised learning in function spaces is an emerging area of machine
learning research with applications to the prediction of complex physical
systems such as fluid flows, solid mechanics, and climate modeling. By directly
learning maps (operators) between infinite dimensional function spaces, these
models are able to learn discretization invariant representations of target
functions. A common approach is to represent such target functions as linear
combinations of basis elements learned from data. However, there are simple
scenarios where, even though the target functions form a low dimensional
submanifold, a very large number of basis elements is needed for an accurate
linear representation. Here we present NOMAD, a novel operator learning
framework with a nonlinear decoder map capable of learning finite dimensional
representations of nonlinear submanifolds in function spaces. We show this
method is able to accurately learn low dimensional representations of solution
manifolds to partial differential equations while outperforming linear models
of larger size. Additionally, we compare to state-of-the-art operator learning
methods on a complex fluid dynamics benchmark and achieve competitive
performance with a significantly smaller model size and training cost.
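To make the contrast concrete, below is a minimal sketch (PyTorch; the `mlp` helper, layer sizes, and shapes are illustrative assumptions, not the paper's exact architecture) of the two decoder types. A linear decoder reconstructs the output function as a coefficient-weighted combination of learned basis functions evaluated at the query location, so its outputs are confined to the span of those basis functions; a NOMAD-style nonlinear decoder instead feeds the latent code and the query location jointly through a network, letting a small latent dimension trace out a curved submanifold of functions.

```python
# Minimal sketch: linear vs. nonlinear decoders for operator learning.
# Layer sizes and the mlp helper are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 2):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.Tanh()]
    return nn.Sequential(*layers, nn.Linear(sizes[-2], sizes[-1]))

class LinearDecoder(nn.Module):
    """u(y) = sum_k beta_k * tau_k(y): a linear combination of learned basis functions."""
    def __init__(self, latent_dim, coord_dim):
        super().__init__()
        self.basis = mlp([coord_dim, 64, 64, latent_dim])  # tau(y) in R^latent_dim

    def forward(self, beta, y):
        # beta: (batch, latent_dim) latent code; y: (batch, n_points, coord_dim) queries
        return torch.einsum("bk,bnk->bn", beta, self.basis(y))

class NonlinearDecoder(nn.Module):
    """u(y) = D(beta, y): jointly nonlinear in the latent code and the query point."""
    def __init__(self, latent_dim, coord_dim):
        super().__init__()
        self.net = mlp([latent_dim + coord_dim, 64, 64, 1])

    def forward(self, beta, y):
        b = beta.unsqueeze(1).expand(-1, y.shape[1], -1)  # broadcast code to every query
        return self.net(torch.cat([b, y], dim=-1)).squeeze(-1)

# Example: decode a 16-dimensional latent code at 100 query points in 1D.
beta, y = torch.randn(8, 16), torch.rand(8, 100, 1)
print(LinearDecoder(16, 1)(beta, y).shape)     # torch.Size([8, 100])
print(NonlinearDecoder(16, 1)(beta, y).shape)  # torch.Size([8, 100])
```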
Related papers
- Functional Autoencoder for Smoothing and Representation Learning [0.0]
We propose to learn nonlinear representations of functional data using neural network autoencoders designed to process data in the form in which it is usually collected, without the need for preprocessing.
We design the encoder to employ a projection layer that computes the weighted inner product of the functional data and functional weights over the observed timestamps, and the decoder to apply a recovery layer that maps the finite-dimensional vector extracted from the functional data back to functional space (see the sketch below).
arXiv Detail & Related papers (2024-01-17T08:33:25Z)
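As a rough illustration of the encoder/decoder structure summarized above, the sketch below assumes the projection layer approximates the functional inner product by a quadrature-weighted sum over the observed timestamps, and that the recovery layer is a linear map back onto the observation grid; all names and sizes are hypothetical.

```python
# Sketch of a functional autoencoder: a projection layer encodes curves via
# discretized inner products with learned functional weights; a recovery layer
# maps the latent vector back to function values on the observed grid.
import torch
import torch.nn as nn

class FunctionalAutoencoder(nn.Module):
    def __init__(self, n_timestamps, latent_dim, quad_weights):
        super().__init__()
        # Learned functional weights w_k, discretized at the observed timestamps.
        self.W = nn.Parameter(0.01 * torch.randn(latent_dim, n_timestamps))
        self.register_buffer("q", quad_weights)             # quadrature weights
        self.recover = nn.Linear(latent_dim, n_timestamps)  # recovery layer

    def encode(self, x):
        # z_k ~ integral x(t) w_k(t) dt, approximated on the observed grid.
        return (x * self.q) @ self.W.T

    def forward(self, x):
        return self.recover(self.encode(x))

t = torch.linspace(0, 1, 50)
model = FunctionalAutoencoder(50, 4, torch.full((50,), 1.0 / 50))
x = torch.sin(2 * torch.pi * t).repeat(8, 1)  # batch of curves on a shared grid
loss = ((model(x) - x) ** 2).mean()           # reconstruction objective
```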
- Canonical normalizing flows for manifold learning [14.377143992248222]
We propose a canonical manifold learning flow method, where a novel objective enforces the transformation matrix to have few prominent and non-degenerate basis functions.
Canonical manifold flow yields a more efficient use of the latent space, automatically generating fewer prominent and distinct dimensions to represent data.
arXiv Detail & Related papers (2023-10-19T13:48:05Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretability, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Functional Nonlinear Learning [0.0]
We propose a functional nonlinear learning (FunNoL) method to represent multivariate functional data in a lower-dimensional feature space.
We show that FunNoL provides satisfactory curve classification and reconstruction regardless of data sparsity.
arXiv Detail & Related papers (2022-06-22T23:47:45Z)
- Graph Embedding via High Dimensional Model Representation for Hyperspectral Images [9.228929858529678]
Learning the manifold structure of remote sensing images is of paramount relevance for modeling and understanding processes.
Manifold learning methods have shown excellent performance in hyperspectral image (HSI) analysis.
A common assumption when dealing with this problem is that the transformation between the high-dimensional input space and the (typically low-dimensional) latent space is linear.
The proposed method is compared to manifold learning methods and their linear counterparts, achieving promising classification accuracy on a representative set of hyperspectral images.
arXiv Detail & Related papers (2021-11-29T16:42:15Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension (see the sketch below).
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
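A rough sketch of what a CP-constrained layer can look like, under the assumption that each hidden unit responds to a two-mode input X with sum_r a_r^T X b_r (one factor per tensor mode, shared rank R); the parameterization, activation, and shapes are illustrative, not the paper's exact model.

```python
# Sketch of a rank-R, CP-constrained feedforward layer: the weight tensor of each
# hidden unit is the rank-R sum of outer products a_r (x) b_r, so a 2-mode input
# is processed without vectorization.
import torch
import torch.nn as nn

class RankRLayer(nn.Module):
    def __init__(self, i1, i2, rank, hidden):
        super().__init__()
        # One CP factor matrix per input mode, per hidden unit.
        self.A = nn.Parameter(0.1 * torch.randn(hidden, rank, i1))
        self.B = nn.Parameter(0.1 * torch.randn(hidden, rank, i2))

    def forward(self, X):
        # X: (batch, i1, i2); unit h computes sum_r a_hr^T X b_hr.
        XB = torch.einsum("bij,hrj->bhri", X, self.B)
        return torch.sigmoid(torch.einsum("bhri,hri->bh", XB, self.A))

layer = RankRLayer(i1=9, i2=5, rank=3, hidden=16)  # e.g. 3x3 patch by 5 bands
out = layer(torch.randn(4, 9, 5))                  # -> shape (4, 16)
```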
- Non-parametric Models for Non-negative Functions [48.7576911714538]
We provide the first model for non-negative functions that retains the good properties of linear models (see the sketch below).
We prove that it admits a representer theorem and provide an efficient dual formulation for convex problems.
arXiv Detail & Related papers (2020-07-08T07:17:28Z)
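One way to see how non-negativity can coexist with a linear-style model is the quadratic-form construction f(x) = phi(x)^T M phi(x) with M positive semidefinite, sketched below; the Gaussian feature map and all sizes are illustrative stand-ins rather than the paper's exact kernel construction.

```python
# Sketch: a non-negative function model f(x) = phi(x)^T (L L^T) phi(x).
# Parameterizing M = L L^T keeps M positive semidefinite, so f(x) >= 0 everywhere.
import numpy as np

rng = np.random.default_rng(0)
d, m = 1, 8                               # input dimension, number of features
centers = rng.uniform(-1, 1, (m, d))

def phi(x):                               # Gaussian features as a stand-in map
    return np.exp(-np.sum((x[:, None, :] - centers) ** 2, axis=-1))

L = 0.1 * rng.normal(size=(m, m))

def f(x):
    G = phi(x) @ L                        # rows are L^T phi(x_i)
    return (G ** 2).sum(axis=-1)          # ||L^T phi(x)||^2 = phi^T (L L^T) phi

x = rng.uniform(-1, 1, (5, d))
assert np.all(f(x) >= 0)                  # non-negative by construction
```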
- FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs [53.710405006523274]
This work focuses on the representation learning question: how can we learn the features underlying a low rank MDP?
Under the assumption that the underlying (unknown) dynamics correspond to a low rank transition matrix, we show how the representation learning question is related to a particular non-linear matrix decomposition problem.
We develop FLAMBE, which engages in exploration and representation learning for provably efficient RL in low rank transition models.
arXiv Detail & Related papers (2020-06-18T19:11:18Z)
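The low rank assumption is concrete enough to write down directly: the transition kernel factorizes as T(s' | s, a) = <phi(s, a), mu(s')> for rank-d feature maps phi and mu. The toy sketch below uses simplex-normalized random factors, one convenient convention for keeping transitions valid; all sizes are illustrative.

```python
# Sketch of the low rank MDP factorization T[s, a, s'] = <phi(s, a), mu(s')>.
import numpy as np

rng = np.random.default_rng(1)
S, A, d = 6, 3, 2                             # states, actions, rank

phi = rng.dirichlet(np.ones(d), size=(S, A))  # phi(s, a) on the d-simplex
mu = rng.dirichlet(np.ones(S), size=d)        # each mu_k is a distribution over s'

T = np.einsum("sad,dt->sat", phi, mu)         # full transition tensor
assert np.allclose(T.sum(axis=-1), 1.0)       # valid next-state distributions
```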
- Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs), which are tailor-made for image data, underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.