Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints
- URL: http://arxiv.org/abs/2408.16138v1
- Date: Wed, 28 Aug 2024 20:56:35 GMT
- Title: Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints
- Authors: George A. Kevrekidis, Mauro Maggioni, Soledad Villar, Yannis G. Kevrekidis
- Abstract summary: We show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets.
We outline the relevant theory relying on differential geometry, and describe the corresponding gradient-descent optimization algorithm.
- Score: 9.380902608139902
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conformal Autoencoders are a neural network architecture that imposes orthogonality conditions between the gradients of latent variables towards achieving disentangled representations of data. In this letter we show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets (locally characterized by the dimension of their tangent space), while simultaneously computing encoding and decoding (embedding) maps. We outline the relevant theory relying on differential geometry, and describe the corresponding gradient-descent optimization algorithm. The method is applied to standard data sets and we highlight its applicability, advantages, and shortcomings. In addition, we demonstrate that the same computational technology can be used to build coordinate invariance to local group actions when defined only on a (reduced) submanifold of the embedding space.
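As a concrete illustration of the mechanism described in the abstract, here is a minimal sketch (not the authors' implementation) of a conformal-autoencoder-style loss in PyTorch: a standard reconstruction term plus a penalty on the pairwise inner products between the input-space gradients of distinct latent coordinates. The architecture, the penalty weight `lam`, and the thresholded gradient-norm read-out of the intrinsic dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ConformalAE(nn.Module):
    """Plain autoencoder; the conformal structure comes from the training loss below."""

    def __init__(self, ambient_dim=3, latent_dim=3, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(ambient_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, ambient_dim),
        )


def encoder_jacobian(model, x):
    """Per-sample Jacobian of the encoder, shape (batch, latent_dim, ambient_dim)."""
    x = x.clone().requires_grad_(True)
    z = model.encoder(x)
    rows = []
    for i in range(z.shape[1]):
        # Summing over the batch yields per-sample gradients because samples are independent.
        g, = torch.autograd.grad(z[:, i].sum(), x, create_graph=True, retain_graph=True)
        rows.append(g)
    return torch.stack(rows, dim=1)


def conformal_loss(model, x, lam=1.0):
    """Reconstruction error plus squared off-diagonal entries of the gradient Gram matrix."""
    z = model.encoder(x)
    recon = nn.functional.mse_loss(model.decoder(z), x)
    jac = encoder_jacobian(model, x)
    gram = jac @ jac.transpose(1, 2)  # gram[b, i, j] = <grad_x z_i, grad_x z_j>
    off_diag = gram - torch.diag_embed(torch.diagonal(gram, dim1=-2, dim2=-1))
    return recon + lam * (off_diag ** 2).mean()


def estimated_intrinsic_dimension(model, x, tol=1e-3):
    """Assumed read-out: count latent coordinates whose gradients keep a non-negligible norm."""
    grad_norms = encoder_jacobian(model, x).detach().norm(dim=2).mean(dim=0)
    return int((grad_norms > tol).sum())
```

A training loop would simply minimize `conformal_loss` over mini-batches; afterwards, latent coordinates whose gradient norms collapse below a threshold are treated as inactive, and the number of remaining coordinates gives an estimate of the tangent-space dimension (one plausible criterion for the read-out, hedged here rather than taken from the paper).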
Related papers
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- Shape-informed surrogate models based on signed distance function domain encoding [8.052704959617207]
We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs).
Our approach is based on the combination of two neural networks (NNs).
arXiv Detail & Related papers (2024-09-19T01:47:04Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Relative representations enable zero-shot latent space communication [19.144630518400604]
Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations.
We show how neural architectures can leverage these relative representations to guarantee, in practice, latent isometry invariance (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-09-30T12:37:03Z)
- Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning method based on the Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable is subject to a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
- Autoencoder Image Interpolation by Shaping the Latent Space [12.482988592988868]
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types.
We propose a regularization technique that shapes the latent representation to follow a manifold consistent with the training images.
arXiv Detail & Related papers (2020-08-04T12:32:54Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
- LOCA: LOcal Conformal Autoencoder for standardized data coordinates [6.608924227377152]
We present a method for learning an embedding in $\mathbb{R}^d$ that is isometric to the latent variables of the manifold.
Our embedding is obtained using a LOcal Conformal Autoencoder (LOCA), an algorithm that constructs an embedding to rectify deformations.
We also apply LOCA to single-site Wi-Fi localization data, and to $3$-dimensional curved surface estimation.
arXiv Detail & Related papers (2020-04-15T17:49:37Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to state-of-the-art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
- Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
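Among the entries above, the idea behind "Relative representations enable zero-shot latent space communication" is simple enough to illustrate directly: instead of exchanging absolute latent coordinates, each sample is described by its similarities to a fixed set of anchor samples, which makes the representation insensitive to rotations and rescalings of the latent space. Below is a minimal sketch under assumed choices (cosine similarity, random anchors, toy dimensions), not code from that paper.

```python
import torch


def relative_representation(z, z_anchors):
    """Map absolute latent codes (batch, d) to cosine similarities against anchor codes (k, d)."""
    z = torch.nn.functional.normalize(z, dim=-1)
    z_anchors = torch.nn.functional.normalize(z_anchors, dim=-1)
    return z @ z_anchors.T  # (batch, k), invariant to orthogonal maps applied to both inputs


# Toy check: latent spaces that differ by a random rotation yield the same relative codes.
latent = torch.randn(5, 8)
anchors = torch.randn(3, 8)
rotation, _ = torch.linalg.qr(torch.randn(8, 8))  # random orthogonal matrix
a = relative_representation(latent, anchors)
b = relative_representation(latent @ rotation, anchors @ rotation)
assert torch.allclose(a, b, atol=1e-5)
```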
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.