Local distance preserving auto-encoders using Continuous k-Nearest
Neighbours graphs
- URL: http://arxiv.org/abs/2206.05909v1
- Date: Mon, 13 Jun 2022 05:38:44 GMT
- Title: Local distance preserving auto-encoders using Continuous k-Nearest
Neighbours graphs
- Authors: Nutan Chen, Patrick van der Smagt, Botond Cseke
- Abstract summary: We introduce several auto-encoder models that preserve local distances when mapping from the data space to the latent space.
Our method provides state-of-the-art performance across several standard datasets and evaluation metrics.
- Score: 12.607603625414573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Auto-encoder models that preserve similarities in the data are a popular tool
in representation learning. In this paper we introduce several auto-encoder
models that preserve local distances when mapping from the data space to the
latent space. We use a local distance preserving loss that is based on the
continuous k-nearest neighbours graph, which is known to capture topological
features at all scales simultaneously. To improve training performance, we
formulate learning as a constrained optimisation problem with local distance
preservation as the main objective and reconstruction accuracy as a constraint.
We generalise this approach to hierarchical variational auto-encoders thus
learning generative models with geometrically consistent latent and data
spaces. Our method provides state-of-the-art performance across several
standard datasets and evaluation metrics.
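
As a concrete illustration of the recipe in the abstract, the sketch below, assuming PyTorch, shows (i) a continuous k-nearest neighbours (CkNN) graph, connecting points i and j when d(i, j)^2 < delta^2 * d_k(i) * d_k(j), with d_k the distance to the k-th neighbour, (ii) a loss that matches data-space and latent-space distances along the graph's edges, and (iii) training with reconstruction error treated as a constraint via dual ascent on a Lagrange multiplier. The network sizes, k, delta, eps and the multiplier update are illustrative assumptions rather than the authors' settings, and the hierarchical-VAE generalisation is not shown.

```python
# Minimal sketch (not the authors' code): local distance preservation on a
# CkNN graph, with reconstruction accuracy handled as a constraint.
import torch
import torch.nn as nn

def cknn_graph(x, k=5, delta=1.0):
    """Connect i, j if d(i, j)^2 < delta^2 * d_k(i) * d_k(j), where d_k(i)
    is the distance from point i to its k-th nearest neighbour."""
    d = torch.cdist(x, x)                        # pairwise Euclidean distances
    dk = d.sort(dim=1).values[:, k]              # k-th neighbour distance (index 0 is self)
    adj = d.pow(2) < delta ** 2 * dk.unsqueeze(1) * dk.unsqueeze(0)
    adj.fill_diagonal_(False)
    return adj

def local_distance_loss(x, z, adj):
    """Match pairwise distances between data and latent space on CkNN edges only."""
    dx, dz = torch.cdist(x, x), torch.cdist(z, z)
    return (dx - dz)[adj].pow(2).mean()

class AE(nn.Module):
    def __init__(self, d_in=784, d_z=2, h=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, h), nn.ReLU(), nn.Linear(h, d_z))
        self.dec = nn.Sequential(nn.Linear(d_z, h), nn.ReLU(), nn.Linear(h, d_in))
    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def train_step(model, opt, x, lam, eps=0.01, lam_lr=0.01):
    """Distance preservation is the objective; reconstruction enters as the
    constraint E[(x - x_hat)^2] <= eps through the multiplier lam >= 0."""
    z, x_hat = model(x)
    adj = cknn_graph(x)
    recon = (x - x_hat).pow(2).mean()
    loss = local_distance_loss(x, z, adj) + lam * (recon - eps)
    opt.zero_grad()
    loss.backward()
    opt.step()
    lam = max(0.0, lam + lam_lr * (recon.item() - eps))   # dual ascent on the multiplier
    return loss.item(), lam

model = AE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam, x = 1.0, torch.rand(256, 784)                        # toy batch stands in for real data
loss, lam = train_step(model, opt, x, lam)
```

The dual-ascent update raises the multiplier while reconstruction error exceeds the tolerance eps and relaxes it otherwise, so the distance-preserving term shapes the latent geometry without letting reconstruction quality drift arbitrarily; this mirrors the constrained formulation described in the abstract, though the authors' exact constraint handling may differ.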
Related papers
- Remote sensing framework for geological mapping via stacked autoencoders and clustering [0.15833270109954137]
We present an unsupervised machine learning-based framework for processing remote sensing data.
We use Landsat 8, ASTER, and Sentinel-2 datasets to evaluate the framework for geological mapping of the Mutawintji region in Australia.
Our results reveal that the framework produces accurate and interpretable geological maps, efficiently discriminating rock units.
arXiv Detail & Related papers (2024-04-02T09:15:32Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Are We Using Autoencoders in a Wrong Way? [3.110260251019273]
Autoencoders are used for dimensionality reduction, anomaly detection and feature extraction.
We revisited the standard training for the undercomplete Autoencoder, modifying the shape of the latent space.
We also explored the behaviour of the latent space in the case of reconstruction of a random sample from the whole dataset.
arXiv Detail & Related papers (2023-09-04T11:22:43Z)
- Reconstructing Spatiotemporal Data with C-VAEs [49.1574468325115]
Conditional continuous representation of moving regions is commonly used.
In this work, we explore the capabilities of Conditional Variational Autoencoder (C-VAE) models to generate realistic representations of the regions' evolution.
arXiv Detail & Related papers (2023-07-12T15:34:10Z)
- Automated Spatio-Temporal Graph Contrastive Learning [18.245433428868775]
We develop an automated spatio-temporal augmentation scheme with a parameterized contrastive view generator.
AutoST can adapt to the heterogeneous graph with multi-view semantics well preserved.
Experiments on three downstream spatio-temporal mining tasks over several real-world datasets demonstrate a significant performance gain.
arXiv Detail & Related papers (2023-05-06T03:52:33Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on image classification on all three datasets in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continual learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Contrastive Neighborhood Alignment [81.65103777329874]
We present Contrastive Neighborhood Alignment (CNA), a manifold learning approach to maintain the topology of learned features.
The target model aims to mimic the local structure of the source representation space using a contrastive loss.
CNA is illustrated in three scenarios: manifold learning, where the model maintains the local topology of the original data in a dimension-reduced space; model distillation, where a small student model is trained to mimic a larger teacher; and legacy model update, where an older model is replaced by a more powerful one.
arXiv Detail & Related papers (2022-01-06T04:58:31Z)
- A Domain-Oblivious Approach for Learning Concise Representations of Filtered Topological Spaces [7.717214217542406]
We propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams.
This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process.
Our proposed method is directly applicable to various datasets without the need of retraining the model.
arXiv Detail & Related papers (2021-05-25T20:44:28Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.