Variational Autoencoding Neural Operators
- URL: http://arxiv.org/abs/2302.10351v1
- Date: Mon, 20 Feb 2023 22:34:43 GMT
- Title: Variational Autoencoding Neural Operators
- Authors: Jacob H. Seidman, Georgios Kissas, George J. Pappas, Paris Perdikaris
- Abstract summary: Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems.
We present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders.
- Score: 17.812064311297117
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unsupervised learning with functional data is an emerging paradigm of machine
learning research with applications to computer vision, climate modeling and
physical systems. A natural way of modeling functional data is by learning
operators between infinite dimensional spaces, leading to discretization
invariant representations that scale independently of the sample grid
resolution. Here we present Variational Autoencoding Neural Operators (VANO), a
general strategy for making a large class of operator learning architectures
act as variational autoencoders. For this purpose, we provide a novel rigorous
mathematical formulation of the variational objective in function spaces for
training. VANO first maps an input function to a distribution over a latent
space using a parametric encoder and then decodes a sample from the latent
distribution to reconstruct the input, as in classic variational autoencoders.
We test VANO with different model set-ups and architecture choices for a
variety of benchmarks. We start from a simple Gaussian random field where we
can analytically track what the model learns and progressively transition to
more challenging benchmarks, including modeling phase separation in
Cahn-Hilliard systems and real-world satellite data for measuring Earth surface
deformation.
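
For intuition, here is a minimal, self-contained sketch in plain NumPy of the encode-sample-decode loop the abstract describes. The summary-statistic encoder, sinusoidal coordinate-query decoder, latent dimension, and the `elbo_terms` helper are illustrative assumptions, not the architecture or objective from the paper; the sketch only mirrors the overall data flow of mapping a discretized input function to a Gaussian latent, sampling it, and decoding at arbitrary query coordinates.

```python
# Illustrative sketch only: fixed random "weights" stand in for trained
# neural operator components; shapes and layers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D_LATENT = 8

# Random linear maps standing in for a trained encoder.
W_MU = rng.standard_normal((D_LATENT, 4)) * 0.1
W_LOGVAR = rng.standard_normal((D_LATENT, 4)) * 0.1

def encoder(u_values):
    """Map a discretized input function (sampled on any grid) to the mean and
    log-variance of a Gaussian over the latent space."""
    feats = np.array([u_values.mean(), u_values.std(), u_values.min(), u_values.max()])
    return W_MU @ feats, W_LOGVAR @ feats

def decoder(z, x_query):
    """Decode a latent sample z into function values at arbitrary query
    coordinates (a coordinate-basis pairing, so the output is mesh-free)."""
    freqs = np.arange(1, D_LATENT + 1)
    basis = np.sin(np.outer(x_query, freqs))   # (n_query, D_LATENT)
    return basis @ z

def elbo_terms(u_values, x, beta=1.0):
    """One reconstruction + KL evaluation, as in a classic VAE, with the
    reconstruction error taken over the function samples on the grid x."""
    mu, log_var = encoder(u_values)
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)  # reparameterization
    u_hat = decoder(z, x)
    recon = np.mean((u_values - u_hat) ** 2)    # discretized L2 reconstruction error
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon + beta * kl, recon, kl

# Toy input function sampled on a grid; any resolution works the same way.
x = np.linspace(0.0, 1.0, 128)
u = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
loss, recon, kl = elbo_terms(u, x)
print(f"loss={loss:.3f}  recon={recon:.3f}  kl={kl:.3f}")
```

In the paper itself the encoder and decoder are learned operator-learning components and the objective is the function-space variational bound derived there; the sketch above only fixes the shape of the computation.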
Related papers
- Autoencoders in Function Space [5.558412940088621]
This paper introduces function-space versions of the autoencoder (FAE) and variational autoencoder (FVAE).
The FAE objective is valid much more broadly than the FVAE objective and can be applied straightforwardly to data governed by differential equations.
Pairing these objectives with neural operator architectures, which can be evaluated on any mesh, enables new applications of autoencoders to inpainting, superresolution, and generative modelling of scientific data.
arXiv Detail & Related papers (2024-08-02T16:13:51Z) - From system models to class models: An in-context learning paradigm [0.0]
We introduce a novel paradigm for system identification, addressing two primary tasks: one-step-ahead prediction and multi-step simulation.
We learn a meta model that represents a class of dynamical systems.
For one-step prediction, a GPT-like decoder-only architecture is utilized, whereas the simulation problem employs an encoder-decoder structure.
arXiv Detail & Related papers (2023-08-25T13:50:17Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks [0.0]
We show that valuable information is contained in the spectrum of the precision matrix and can be extracted once training of the model is complete.
We conducted numerical experiments for regression, classification, and feature selection tasks.
Our results demonstrate that the proposed model yields attractive prediction performance compared to competing methods.
arXiv Detail & Related papers (2023-07-11T09:54:30Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - RDA-INR: Riemannian Diffeomorphic Autoencoding via Implicit Neural Representations [3.9858496473361402]
In this work, we address a limitation of neural network-based atlas building and statistical latent modeling methods: their dependence on the resolution of the training data.
We overcome this limitation by designing a novel encoder based on resolution-independent implicit neural representations.
arXiv Detail & Related papers (2023-05-22T09:27:17Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Lorentz group equivariant autoencoders [6.858459233149096]
We develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$, with a latent space living in the representations of the group.
We present our architecture and several experimental results on jets at the LHC and find it outperforms graph and convolutional neural network baseline models on several compression, reconstruction, and anomaly detection metrics.
arXiv Detail & Related papers (2022-12-14T17:19:46Z) - On the Forward Invariance of Neural ODEs [92.07281135902922]
We propose a new method to ensure neural ordinary differential equations (ODEs) satisfy output specifications.
Our approach uses a class of control barrier functions to transform output specifications into constraints on the parameters and inputs of the learning system.
arXiv Detail & Related papers (2022-10-10T15:18:28Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.