Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations
- URL: http://arxiv.org/abs/2010.01037v2
- Date: Fri, 10 Dec 2021 20:40:02 GMT
- Title: Encoded Prior Sliced Wasserstein AutoEncoder for learning latent manifold representations
- Authors: Sanjukta Krishnagopal and Jacob Bedrossian
- Abstract summary: We introduce an Encoded Prior Sliced Wasserstein AutoEncoder.
An additional prior-encoder network learns an embedding of the data manifold.
We show that, unlike conventional autoencoders, the prior encodes the geometry underlying the data.
- Score: 0.7614628596146599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While variational autoencoders have been successful in several
tasks, conventional priors are limited in their ability to encode the
underlying structure of input data. We introduce an Encoded Prior Sliced Wasserstein
AutoEncoder wherein an additional prior-encoder network learns an embedding of
the data manifold which preserves topological and geometric properties of the
data, thus improving the structure of latent space. The autoencoder and
prior-encoder networks are iteratively trained using the Sliced Wasserstein
distance. The effectiveness of the learned manifold encoding is explored by
traversing latent space through interpolations along geodesics which generate
samples that lie on the data manifold and hence are more realistic compared to
Euclidean interpolation. To this end, we introduce a graph-based algorithm for
exploring the data manifold and interpolating along network-geodesics in latent
space by maximizing the density of samples along the path while minimizing
total energy. We use 3D-spiral data to show that, unlike conventional
autoencoders, the prior encodes the geometry underlying the data, and to
demonstrate the exploration of the embedded data manifold through the network
algorithm. We apply our framework to benchmark image datasets to demonstrate
the advantages of learning data representations in outlier generation, latent
structure, and geodesic interpolation.
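
The two core ingredients of the abstract, the sliced Wasserstein training objective and the graph-based geodesic interpolation, can be made concrete. Below is a minimal Python sketch, not the authors' implementation: the sliced Wasserstein estimator follows the standard random-projection-and-sort construction, while the network-geodesic routine is one plausible reading of "maximizing the density of samples along the path while minimizing total energy". The function names, the k-NN density proxy, and the exact edge-weight formula are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two ingredients named
# in the abstract. All names and weightings here are illustrative.
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

def sliced_wasserstein(x, y, n_projections=50, rng=None):
    """Monte Carlo estimate of the squared sliced Wasserstein-2 distance
    between two equal-size samples x, y of shape (n, d). In 1D, optimal
    transport reduces to matching sorted samples, so each random
    projection yields a cheap closed-form transport cost."""
    rng = np.random.default_rng(rng)
    theta = rng.normal(size=(n_projections, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px = np.sort(x @ theta.T, axis=0)  # (n, n_projections)
    py = np.sort(y @ theta.T, axis=0)
    return np.mean((px - py) ** 2)

def network_geodesic(z, start, end, k=10):
    """Hypothetical reading of the graph-based interpolation: build a
    k-NN graph over latent codes z, weight edges to prefer short hops
    through dense regions, and return shortest-path node indices."""
    dist, idx = NearestNeighbors(n_neighbors=k + 1).fit(z).kneighbors(z)
    density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-8)  # inverse k-NN radius
    G = nx.Graph()
    for i in range(len(z)):
        for j, dij in zip(idx[i, 1:], dist[i, 1:]):
            # short edges (low energy) and dense endpoints are both rewarded
            G.add_edge(i, int(j), weight=dij / np.sqrt(density[i] * density[j]))
    return nx.shortest_path(G, source=start, target=end, weight="weight")
```

In the paper's iterative scheme, x and y above would play the roles of the autoencoder's latent codes and samples from the learned prior; since the abstract does not specify the density and energy terms, the edge weight should be read as a sketch rather than the paper's formula.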
Related papers
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- HYVE: Hybrid Vertex Encoder for Neural Distance Fields [9.40036617308303]
We present a neural-network architecture suitable for accurate encoding of 3D shapes in a single forward pass.
Our network is able to output valid signed distance fields without explicit prior knowledge of non-zero distance values or shape occupancy.
arXiv Detail & Related papers (2023-10-10T14:07:37Z)
- Information-Ordered Bottlenecks for Adaptive Semantic Compression [0.0]
We present a neural layer designed to adaptively compress data into variables ordered by likelihood.
We show that IOBs achieve near-optimal compression for a given architecture and can assign encoding signals in a manner that is semantically meaningful.
We introduce a novel theory for estimating global dimensionality with IOBs and show that they recover SOTA dimensionality estimates for complex synthetic data.
arXiv Detail & Related papers (2023-05-18T18:00:00Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Overhead-Free Blockage Detection and Precoding Through Physics-Based Graph Neural Networks: LIDAR Data Meets Ray Tracing [58.73924499067486]
Blockage detection is achieved by classifying light detection and ranging (LIDAR) data through a physics-based graph neural network (GNN).
For precoder design, a preliminary channel estimate is obtained by running ray tracing on a 3D surface obtained from LIDAR data.
Numerical simulations show that blockage detection is successful with 95% accuracy.
arXiv Detail & Related papers (2022-09-15T15:04:55Z)
- Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that essentially depends on the intrinsic dimension of the data manifold rather than the dimension of ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z)
- Convergent autoencoder approximation of low bending and low distortion manifold embeddings [5.5711773076846365]
We propose and analyze a novel regularization for learning the encoder component of an autoencoder.
The loss functional is computed via Monte Carlo integration with different sampling strategies for pairs of points on the input manifold.
Our main theorem identifies a loss functional of the embedding map as the $\Gamma$-limit of the sampling-dependent loss functionals.
arXiv Detail & Related papers (2022-08-22T10:31:31Z)
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing [73.31614936678571]
We introduce a novel approach for solving the dataset condensation problem by exploiting the regularity in a given dataset.
Instead of condensing the dataset directly in the original input space, we assume a generative process of the dataset with a set of learnable codes.
We experimentally show that our method achieves new state-of-the-art records by significant margins on various benchmark datasets.
arXiv Detail & Related papers (2022-08-21T18:14:08Z)
- Toward a Geometrical Understanding of Self-supervised Contrastive Learning [55.83778629498769]
Self-supervised learning (SSL) is one of the premier techniques to create data representations that are actionable for transfer learning in the absence of human annotations.
Mainstream SSL techniques rely on a specific deep neural network architecture with two cascaded neural networks: the encoder and the projector.
In this paper, we investigate how the strength of the data augmentation policies affects the data embedding.
arXiv Detail & Related papers (2022-05-13T23:24:48Z)
- Weakly Supervised Change Detection Using Guided Anisotropic Diffusion [97.43170678509478]
We propose original ideas that help us leverage weakly annotated datasets in the context of change detection.
First, we propose the guided anisotropic diffusion (GAD) algorithm, which improves semantic segmentation results.
We then show its potential in two weakly-supervised learning strategies tailored for change detection.
arXiv Detail & Related papers (2021-12-31T10:03:47Z)
- Homological Time Series Analysis of Sensor Signals from Power Plants [0.0]
We use topological data analysis techniques to construct a suitable neural network classifier for the task of learning sensor signals of entire power plants.
We derive architectures with deep one-dimensional convolutional layers combined with stacked long short-term memory (LSTM) layers.
For validation, numerical experiments were performed with sensor data from four power plants of the same construction type.
arXiv Detail & Related papers (2021-06-03T10:52:47Z)
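
As a rough illustration of the architecture family named in the last entry (deep one-dimensional convolutional layers feeding stacked LSTMs), here is a hedged PyTorch sketch; the channel counts, kernel sizes, and last-step readout are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of a 1D-conv + stacked-LSTM sensor-signal classifier;
# all hyperparameters are illustrative guesses, not the paper's values.
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    def __init__(self, n_sensors, n_classes, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(  # deep 1D convolutional stem
            nn.Conv1d(n_sensors, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        # two stacked LSTM layers over the conv features
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, n_sensors, time)
        h = self.conv(x).transpose(1, 2)  # -> (batch, time, channels)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])      # classify from the last time step
```

A model of this shape would consume multichannel sensor traces of shape (batch, sensors, time) and emit class logits, matching the summary's description of convolutional layers combined with stacked LSTMs.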