Generative Adversarial Neural Operators
- URL: http://arxiv.org/abs/2205.03017v1
- Date: Fri, 6 May 2022 05:12:22 GMT
- Title: Generative Adversarial Neural Operators
- Authors: Md Ashiqur Rahman, Manuel A. Florez, Anima Anandkumar, Zachary E. Ross, Kamyar Azizzadenesheli
- Abstract summary: We propose the generative adversarial neural operator (GANO), a generative model paradigm for learning probabilities on infinite-dimensional function spaces.
GANO consists of two main components, a generator neural operator and a discriminator neural functional.
We empirically study GANOs in controlled cases where both input and output functions are samples from GRFs and compare their performance to the finite-dimensional counterpart GAN.
- Score: 59.21759531471597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the generative adversarial neural operator (GANO), a generative
model paradigm for learning probabilities on infinite-dimensional function
spaces. The natural sciences and engineering are known to have many types of
data that are sampled from infinite-dimensional function spaces, where
classical finite-dimensional deep generative adversarial networks (GANs) may
not be directly applicable. GANO generalizes the GAN framework and allows for
the sampling of functions by learning push-forward operator maps in
infinite-dimensional spaces. GANO consists of two main components, a generator
neural operator and a discriminator neural functional. The inputs to the
generator are samples of functions from a user-specified probability measure,
e.g., Gaussian random field (GRF), and the generator outputs are synthetic data
functions. The input to the discriminator is either a real or synthetic data
function. In this work, we instantiate GANO using the Wasserstein criterion and
show how the Wasserstein loss can be computed in infinite-dimensional spaces.
We empirically study GANOs in controlled cases where both input and output
functions are samples from GRFs and compare their performance to the
finite-dimensional counterpart GAN. We also study the efficacy of GANO on
real-world function data from volcanic activity and show its superior
performance over GAN. Furthermore, we find that for the function-based data
considered, GANOs are more stable to train than GANs and require less
hyperparameter optimization.
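A minimal PyTorch sketch of the setup described in the abstract. The class names, layer widths, and single Fourier-style operator block are illustrative stand-ins for the paper's actual architectures; the GRF sampler uses a common spectral construction that may differ from the paper's input measure, and the Lipschitz constraint the Wasserstein criterion requires on the discriminator is omitted for brevity:

```python
# A minimal sketch, assuming 1D functions on a regular grid.
import torch
import torch.nn as nn


def sample_grf(batch, n=128, alpha=2.0, tau=3.0):
    """Draw GRF samples on a 1D grid by spectrally filtering white noise
    (a common construction; the paper's exact measure may differ)."""
    k = torch.arange(n // 2 + 1).float()
    spectrum = (k ** 2 + tau ** 2) ** (-alpha / 2)
    noise = torch.randn(batch, 1, n // 2 + 1, dtype=torch.cfloat)
    return torch.fft.irfft(noise * spectrum, n=n)


class SpectralConv1d(nn.Module):
    """Fourier layer: FFT -> learned linear map on the lowest modes -> inverse FFT."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                          # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))


class GeneratorOperator(nn.Module):
    """Pushes an input function u(x) forward to a synthetic data function G(u)(x)."""
    def __init__(self, width=32, modes=16):
        super().__init__()
        self.lift = nn.Conv1d(1, width, 1)         # pointwise lifting
        self.spectral = SpectralConv1d(width, modes)
        self.local = nn.Conv1d(width, width, 1)
        self.project = nn.Conv1d(width, 1, 1)      # pointwise projection

    def forward(self, u):
        v = self.lift(u)
        v = torch.relu(self.spectral(v) + self.local(v))
        return self.project(v)


class DiscriminatorFunctional(nn.Module):
    """Maps a real or synthetic function to a scalar; the final grid average
    plays the role of an integral functional."""
    def __init__(self, width=32, modes=16):
        super().__init__()
        self.lift = nn.Conv1d(1, width, 1)
        self.spectral = SpectralConv1d(width, modes)
        self.project = nn.Conv1d(width, 1, 1)

    def forward(self, f):
        v = torch.relu(self.spectral(self.lift(f)))
        return self.project(v).mean(dim=-1)        # integral-style reduction


# Wasserstein-style critic objective: maximize E[D(real)] - E[D(G(u))].
G, D = GeneratorOperator(), DiscriminatorFunctional()
u = sample_grf(batch=8)                            # input function samples
grid = torch.linspace(0, 6.28, 128)
real = torch.sin(grid).expand(8, 1, 128) + 0.1 * torch.randn(8, 1, 128)  # toy "real" data
critic_objective = D(real).mean() - D(G(u)).mean()
```

Because every layer is either pointwise or acts on Fourier modes, the same trained weights can in principle be applied to inputs discretized on other grid resolutions, which is what makes the generator an operator rather than a fixed-dimensional map.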
Related papers
- Composite Bayesian Optimization In Function Spaces Using NEON -- Neural Epistemic Operator Networks [4.1764890353794994]
NEON is an architecture for generating predictions with uncertainty using a single operator network backbone.
We show that NEON achieves state-of-the-art performance while requiring orders of magnitude less trainable parameters.
arXiv Detail & Related papers (2024-04-03T22:42:37Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- Functional Linear Non-Gaussian Acyclic Model for Causal Discovery [7.303542369216906]
We develop a framework to identify causal relationships in brain effective connectivity tasks involving fMRI and EEG datasets.
We establish theoretical guarantees of the identifiability of the causal relationship among non-Gaussian random vectors and even random functions in infinite-dimensional Hilbert spaces.
For real data, we focus on analyzing the brain connectivity patterns derived from fMRI data.
arXiv Detail & Related papers (2024-01-17T23:27:48Z)
- D2NO: Efficient Handling of Heterogeneous Input Function Spaces with Distributed Deep Neural Operators [7.119066725173193]
We propose a novel distributed approach to deal with input functions that exhibit heterogeneous properties.
A central neural network is used to handle shared information across all output functions.
We demonstrate that the corresponding neural network is a universal approximator of continuous nonlinear operators.
arXiv Detail & Related papers (2023-10-29T03:29:59Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling [106.68533003806276]
We show that samples can be drawn from the generator by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score (see the sketch after this list).
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods which work in the high-dimensional pixel space.
arXiv Detail & Related papers (2020-03-12T23:33:50Z)
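The last entry's latent-space sampler is concrete enough to sketch. A hypothetical PyTorch version, assuming a pretrained WGAN generator `G` and critic `D` with a standard normal latent prior, using unadjusted Langevin dynamics on the induced energy:

```python
# A hypothetical DDLS sketch: the energy is E(z) = -log p(z) - D(G(z)) up to
# an additive constant, with p(z) a standard normal prior; `G`, `D`, and the
# hyperparameters here are illustrative stand-ins, not the paper's settings.
import torch


def ddls_sample(G, D, z_dim, n_samples=16, steps=100, step_size=1e-2):
    z = torch.randn(n_samples, z_dim)
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        # Quadratic prior term (from -log N(z; 0, I)) minus the critic score.
        energy = 0.5 * (z ** 2).sum(dim=1) - D(G(z)).view(-1)
        grad, = torch.autograd.grad(energy.sum(), z)
        # Langevin step: descend the energy and inject Gaussian noise.
        z = z - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(z)
    return G(z.detach())
```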