Semi-Supervised Manifold Learning with Complexity Decoupled Chart
Autoencoders
- URL: http://arxiv.org/abs/2208.10570v1
- Date: Mon, 22 Aug 2022 19:58:03 GMT
- Title: Semi-Supervised Manifold Learning with Complexity Decoupled Chart
Autoencoders
- Authors: Stefan C. Schonsheck, Scott Mahan, Timo Klock, Alexander Cloninger,
Rongjie Lai
- Abstract summary: This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the theoretical approximation power of such networks that essentially depends on the intrinsic dimension of the data manifold.
- Score: 65.2511270059236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autoencoding is a popular method in representation learning. Conventional
autoencoders employ symmetric encoding-decoding procedures and a simple
Euclidean latent space to detect hidden low-dimensional structures in an
unsupervised way. This work introduces a chart autoencoder with an asymmetric
encoding-decoding process that can incorporate additional semi-supervised
information such as class labels. Besides enhancing the capability for handling
data with complicated topological and geometric structures, these models can
successfully differentiate nearby but disjoint manifolds and intersecting
manifolds with only a small amount of supervision. Moreover, this model only
requires a low complexity encoder, such as local linear projection. We discuss
the theoretical approximation power of such networks that essentially depends
on the intrinsic dimension of the data manifold and not the dimension of the
observations. Our numerical experiments on synthetic and real-world data verify
that the proposed model can effectively manage data with multi-class nearby but
disjoint manifolds of different classes, overlapping manifolds, and manifolds
with non-trivial topology.
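The abstract's central idea — low-complexity chart encoders (e.g. local linear projections) with one chart per labelled piece of the data, so that nearby but disjoint manifolds are decoupled — can be illustrated with a minimal numpy sketch. The paper's actual decoders are deep networks and its chart assignment is learned; here linear PCA charts and a direct label-to-chart assignment stand in as illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two nearby but disjoint 1-D manifolds (parallel circles) in R^3,
# distinguished only by a class label (the semi-supervised signal).
t = rng.uniform(0, 2 * np.pi, 200)
circle_a = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
circle_b = np.stack([np.cos(t), np.sin(t), 0.3 * np.ones_like(t)], axis=1)
X = np.vstack([circle_a, circle_b])
labels = np.array([0] * 200 + [1] * 200)

def fit_chart(points, latent_dim=2):
    """Low-complexity encoder: a local linear projection (PCA chart)."""
    mean = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - mean, full_matrices=False)
    basis = vt[:latent_dim]                    # chart coordinate directions
    encode = lambda x: (x - mean) @ basis.T    # project into chart coords
    decode = lambda z: z @ basis + mean        # linear decoder for the sketch
    return encode, decode

# One chart per labelled class: the labels decouple the two manifolds,
# so each chart only has to model a simple, flat piece.
charts = {c: fit_chart(X[labels == c]) for c in (0, 1)}

# Reconstruct every point through the chart of its own class.
recon = np.vstack([charts[c][1](charts[c][0](X[labels == c])) for c in (0, 1)])
err = np.abs(recon - X).max()
```

Because each labelled circle lies in a plane, its 2-D linear chart reconstructs it to machine precision; a single symmetric autoencoder with one Euclidean latent space would instead have to untangle both circles at once.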
Related papers
- On-Manifold Projected Gradient Descent [0.0]
This work provides a computable, direct, and mathematically rigorous approximation to the differential geometry of the class manifold for high-dimensional data.
Tools are applied to the setting of neural network image classifiers, where we generate novel, on-manifold data samples.
arXiv Detail & Related papers (2023-08-23T17:50:50Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Convergent autoencoder approximation of low bending and low distortion manifold embeddings [5.5711773076846365]
We propose and analyze a novel regularization for learning the encoder component of an autoencoder.
The loss functional is computed via Monte Carlo integration with different sampling strategies for pairs of points on the input manifold.
Our main theorem identifies a loss functional of the embedding map as the $\Gamma$-limit of the sampling-dependent loss functionals.
arXiv Detail & Related papers (2022-08-22T10:31:31Z)
- NOMAD: Nonlinear Manifold Decoders for Operator Learning [17.812064311297117]
Supervised learning in function spaces is an emerging area of machine learning research.
We present NOMAD, a novel operator learning framework with a nonlinear decoder map, capable of learning finite-dimensional representations of nonlinear submanifolds in function spaces.
arXiv Detail & Related papers (2022-06-07T19:52:44Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome becomes predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Extendable and invertible manifold learning with geometry regularized autoencoders [9.742277703732187]
A fundamental task in data exploration is to extract simplified low dimensional representations that capture intrinsic geometry in data.
Common approaches to this task use kernel methods for manifold learning.
We present a new method for integrating both approaches by incorporating a geometric regularization term in the bottleneck of the autoencoder.
arXiv Detail & Related papers (2020-07-14T15:59:10Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- A Tailored Convolutional Neural Network for Nonlinear Manifold Learning of Computational Physics Data using Unstructured Spatial Discretizations [0.0]
We propose a nonlinear manifold learning technique based on deep convolutional autoencoders.
The technique is appropriate for model order reduction of physical systems in complex geometries.
arXiv Detail & Related papers (2020-06-11T02:19:34Z)
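One technique appearing in the list above (the low-bending/low-distortion entry) estimates its loss functional by Monte Carlo integration over sampled pairs of points on the input manifold. A minimal sketch of such a pairwise Monte Carlo loss follows, using the unit circle, where the geodesic distance is known in closed form; the distortion functional here (squared gap between chordal latent distance and geodesic distance) is illustrative, not that paper's exact energy.

```python
import numpy as np

rng = np.random.default_rng(1)

def geodesic(a, b):
    """Closed-form geodesic distance between two angles on the unit circle."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def embed(theta):
    """Candidate embedding map: the standard chart of the circle into R^2."""
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

def mc_pair_loss(n_pairs=20000):
    """Monte Carlo estimate of a distortion functional over point pairs:
    mean squared difference between latent (chordal) and geodesic distance."""
    a = rng.uniform(0, 2 * np.pi, n_pairs)
    b = rng.uniform(0, 2 * np.pi, n_pairs)
    latent = np.linalg.norm(embed(a) - embed(b), axis=1)
    return np.mean((latent - geodesic(a, b)) ** 2)

loss = mc_pair_loss()
```

As the number of sampled pairs grows, the estimate converges to the underlying integral (about 0.197 for this embedding), mirroring the sampling-dependent-to-limit relationship that the Γ-limit result above formalizes.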
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.