Disentangling Generative Factors of Physical Fields Using Variational
Autoencoders
- URL: http://arxiv.org/abs/2109.07399v1
- Date: Wed, 15 Sep 2021 16:02:43 GMT
- Title: Disentangling Generative Factors of Physical Fields Using Variational
Autoencoders
- Authors: Christian Jacobsen and Karthik Duraisamy
- Abstract summary: This work explores the use of variational autoencoders (VAEs) for non-linear dimension reduction.
A disentangled decomposition is interpretable and can be transferred to a variety of tasks including generative modeling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to extract generative parameters from high-dimensional fields of
data in an unsupervised manner is a highly desirable yet unrealized goal in
computational physics. This work explores the use of variational autoencoders
(VAEs) for non-linear dimension reduction with the aim of disentangling the
low-dimensional latent variables to identify independent physical parameters
that generated the data. A disentangled decomposition is interpretable and can
be transferred to a variety of tasks including generative modeling, design
optimization, and probabilistic reduced-order modeling. A major emphasis of
this work is to characterize disentanglement using VAEs while minimally
modifying the classic VAE loss function (i.e. the ELBO) to maintain high
reconstruction accuracy. Disentanglement is shown to be highly sensitive to
rotations of the latent space, hyperparameters, random initializations, and the
learning schedule. The loss landscape is characterized by over-regularized
local minima which surround desirable solutions. We illustrate comparisons
between disentangled and entangled representations by juxtaposing learned
latent distributions and the 'true' generative factors in a model porous flow
problem. Implementing hierarchical priors (HP) is shown to facilitate the
learning of disentangled representations better than the classic VAE. The choice
of the prior distribution is shown to have a dramatic effect on
disentanglement. In particular, the regularization loss is unaffected by latent
rotation when training with rotationally-invariant priors, and thus learning
non-rotationally-invariant priors aids greatly in capturing the properties of
generative factors, improving disentanglement. Some issues inherent to training
VAEs, such as convergence to over-regularized local minima, are illustrated
and investigated, and potential techniques for mitigation are presented.
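For context, the "classic VAE loss function (i.e. the ELBO)" that the paper minimally modifies is the sum of a reconstruction term and a KL regularizer between the approximate posterior and the prior. The following is a minimal, generic PyTorch-style sketch, not the authors' code; the Gaussian likelihood, the function name negative_elbo, and the tensor shapes are illustrative assumptions.

```python
# Minimal sketch of the negative ELBO for a VAE with a diagonal-Gaussian
# posterior q(z|x) = N(mu, diag(exp(logvar))) and standard normal prior
# p(z) = N(0, I). Generic illustration only, not the paper's implementation.
import torch
import torch.nn.functional as F

def negative_elbo(x, x_recon, mu, logvar):
    # Reconstruction term: Gaussian likelihood up to an additive constant.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ).
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Because the standard normal prior is rotationally invariant, applying any orthogonal rotation to the latent variables leaves the KL term above unchanged; the regularizer therefore cannot distinguish a disentangled latent basis from a rotated, entangled one. This is the sensitivity to latent rotation described in the abstract, and one motivation for the non-rotationally-invariant (e.g. hierarchical) priors the paper studies.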
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- Revisiting Essential and Nonessential Settings of Evidential Deep Learning [70.82728812001807]
Evidential Deep Learning (EDL) is an emerging method for uncertainty estimation.
We propose Re-EDL, a simplified yet more effective variant of EDL.
arXiv Detail & Related papers (2024-10-01T04:27:07Z)
- Minimizing Energy Costs in Deep Learning Model Training: The Gaussian Sampling Approach [11.878350833222711]
We propose a method called GradSamp for sampling gradient updates from a Gaussian distribution.
GradSamp not only streamlines gradient computation but also enables skipping entire epochs, thereby enhancing overall efficiency (a loose sketch of this idea follows the list below).
We rigorously validate our hypothesis across a diverse set of standard and non-standard CNN and transformer-based models.
arXiv Detail & Related papers (2024-06-11T15:01:20Z)
- How to train your VAE [0.0]
Variational Autoencoders (VAEs) have become a cornerstone in generative modeling and representation learning within machine learning.
This paper explores interpreting the Kullback-Leibler (KL) divergence, a critical component within the Evidence Lower Bound (ELBO).
The proposed method redefines the ELBO with a mixture of Gaussians for the posterior probability, introduces a regularization term, and employs a PatchGAN discriminator to enhance texture realism.
arXiv Detail & Related papers (2023-09-22T19:52:28Z)
- Score-based Causal Representation Learning with Interventions [54.735484409244386]
This paper studies the causal representation learning problem when latent causal variables are observed indirectly.
The objectives are: (i) recovering the unknown linear transformation (up to scaling) and (ii) determining the directed acyclic graph (DAG) underlying the latent variables.
arXiv Detail & Related papers (2023-01-19T18:39:48Z)
- Posterior Collapse and Latent Variable Non-identifiability [54.842098835445]
We propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility.
Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
arXiv Detail & Related papers (2023-01-02T06:16:56Z)
- RENs: Relevance Encoding Networks [0.0]
This paper proposes relevance encoding networks (RENs): a novel probabilistic VAE-based framework that uses the automatic relevance determination (ARD) prior in the latent space to learn the data-specific bottleneck dimensionality.
We show that the proposed model learns the relevant latent bottleneck dimensionality without compromising the representation and generation quality of the samples.
arXiv Detail & Related papers (2022-05-25T21:53:48Z)
- Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation [25.37244686572865]
We propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.
We first introduce adversarial training for sequence generation under the Adversarial Variational Bayes framework, which enables our model to generate high-quality latent variables.
Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence.
arXiv Detail & Related papers (2021-03-19T09:01:14Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
- Neural Decomposition: Functional ANOVA with Variational Autoencoders [9.51828574518325]
Variational Autoencoders (VAEs) have become a popular approach for dimensionality reduction.
Due to the black-box nature of VAEs, their utility for healthcare and genomics applications has been limited.
We focus on characterising the sources of variation in Conditional VAEs.
arXiv Detail & Related papers (2020-06-25T10:29:13Z)
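On the GradSamp entry above: the summary describes drawing gradient (parameter) updates from a Gaussian distribution so that some update computations can be skipped. The sketch below loosely illustrates that idea under our own assumptions; the class name GaussianUpdateSampler, the diagonal-Gaussian fit, and the fixed history window are all hypothetical and not taken from that paper.

```python
# Loose illustration of sampling parameter updates from a Gaussian fitted to
# recent observed updates, so that occasional steps can reuse a sampled
# update instead of running backpropagation. Assumption-laden sketch.
import torch

class GaussianUpdateSampler:
    def __init__(self, window: int = 10):
        self.history = []  # recent flattened update vectors
        self.window = window

    def record(self, update: torch.Tensor) -> None:
        # Keep only the most recent `window` updates.
        self.history.append(update.detach().flatten())
        self.history = self.history[-self.window:]

    def sample(self) -> torch.Tensor:
        # Fit a diagonal Gaussian to the recorded updates and draw a sample;
        # needs at least two recorded updates for a finite std.
        stacked = torch.stack(self.history)
        mean, std = stacked.mean(dim=0), stacked.std(dim=0) + 1e-8
        return torch.normal(mean, std)
```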