Ensuring Topological Data-Structure Preservation under Autoencoder Compression due to Latent Space Regularization in Gauss--Legendre nodes
- URL: http://arxiv.org/abs/2309.08228v2
- Date: Thu, 21 Sep 2023 09:10:39 GMT
- Title: Ensuring Topological Data-Structure Preservation under Autoencoder Compression due to Latent Space Regularization in Gauss--Legendre nodes
- Authors: Chethan Krishnamurthy Ramanaik, Juan-Esteban Suarez Cardona, Anna Willmann, Pia Hanfeld, Nico Hoffmann and Michael Hecht
- Abstract summary: We prove that regularised autoencoders ensure a one-to-one re-embedding of the initial data manifold into its latent representation.
This observation extends from the classic FashionMNIST dataset to real-world encoding problems for MRI brain scans.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We formulate a data-independent latent-space regularisation constraint for
general unsupervised autoencoders. The regularisation rests on sampling the
autoencoder Jacobian at the Legendre nodes, the quadrature points of
Gauss-Legendre quadrature. Revisiting this classic result enables us to prove
that regularised autoencoders ensure a one-to-one re-embedding of the initial
data manifold into its latent representation. Demonstrations show that
previously proposed regularisation strategies, such as contractive
autoencoding, cause topological defects even for simple examples, as do
convolution-based (variational) autoencoders. In contrast, topological
preservation is ensured already by standard multilayer-perceptron networks
when regularised with our approach. This observation extends from the classic
FashionMNIST dataset to real-world encoding problems for MRI brain scans,
suggesting that, across disciplines, this regularisation technique can deliver
reliable low-dimensional representations of complex high-dimensional datasets.
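Gauss-Legendre quadrature approximates $\int_{-1}^{1} f(x)\,dx \approx \sum_i w_i f(x_i)$, where the nodes $x_i$ are the roots of the $n$-th Legendre polynomial and the rule is exact for polynomials of degree up to $2n-1$. The sketch below is a minimal illustration of the sampling idea, not the authors' implementation: it obtains the nodes from numpy.polynomial.legendre.leggauss and accumulates a hypothetical quadrature-weighted penalty that pushes the decoder Jacobian towards a local isometry at each node. The exact constraint, the 8-point rule, the 1-D latent domain, and all model and function names here are our own assumptions; the abstract does not specify them.

```python
import numpy as np
import torch

# Gauss-Legendre nodes and weights on [-1, 1] (an 8-point rule):
# the latent sampling points the abstract refers to.
nodes, weights = np.polynomial.legendre.leggauss(8)

class MLPAutoencoder(torch.nn.Module):
    """Plain multilayer-perceptron autoencoder with a 1-D latent space."""
    def __init__(self, dim_in=2, dim_latent=1):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Linear(dim_in, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, dim_latent))
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(dim_latent, 32), torch.nn.Tanh(),
            torch.nn.Linear(32, dim_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def jacobian_regulariser(decoder, nodes, weights):
    """Quadrature-weighted penalty on the decoder Jacobian at the nodes.

    Hypothetical choice: penalise deviation of J^T J from the identity,
    i.e. push the decoder towards a local isometry at each Legendre node.
    """
    penalty = torch.zeros(())
    for x_i, w_i in zip(nodes, weights):
        z_i = torch.tensor([x_i], dtype=torch.float32)
        J = torch.autograd.functional.jacobian(decoder, z_i)  # (dim_in, dim_latent)
        JtJ = J.T @ J
        penalty = penalty + float(w_i) * ((JtJ - torch.eye(JtJ.shape[0])) ** 2).sum()
    return penalty

model = MLPAutoencoder()
reg = jacobian_regulariser(model.decoder, nodes, weights)
```

In training, reg would be scaled by a hyperparameter and added to the usual reconstruction loss; unlike contractive autoencoding, which penalises the encoder Jacobian at data points, the nodes here are fixed in the latent domain independently of the data.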
Related papers
- Enhancing anomaly detection with topology-aware autoencoders [0.0]
Autoencoders provide a signal-agnostic approach but are limited by the topology of their latent space.
We construct autoencoders with spherical ($S^n$), product ($S^2 \otimes S^2$), and projective ($\mathbb{RP}^2$) latent spaces.
Applying our approach to simulated hadronic top-quark decays, we show that latent spaces with appropriate topological constraints enhance sensitivity and robustness in detecting anomalous events (a minimal sketch of such a spherical latent constraint appears after this list).
arXiv Detail & Related papers (2025-02-14T13:50:46Z)
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
- Matrix Completion-Informed Deep Unfolded Equilibrium Models for Self-Supervised k-Space Interpolation in MRI [8.33626757808923]
Regularization model-driven deep learning (DL) has gained significant attention due to its ability to leverage the potent representational capabilities of DL.
We propose a self-supervised DL approach for accelerated MRI that is theoretically guaranteed and does not rely on fully sampled labels.
arXiv Detail & Related papers (2023-09-24T07:25:06Z)
- Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains [7.936841911281107]
We propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate variational inference (CVI).
We are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods.
arXiv Detail & Related papers (2023-06-01T16:31:36Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z)
- Intrinsic dimension estimation for discrete metrics [65.5438227932088]
In this letter we introduce an algorithm to infer the intrinsic dimension (ID) of datasets embedded in discrete spaces.
We demonstrate its accuracy on benchmark datasets, and we apply it to analyze a metagenomic dataset for species fingerprinting.
This suggests that evolutionary pressure acts on a low-dimensional manifold despite the high dimensionality of sequence space.
arXiv Detail & Related papers (2022-07-20T06:38:36Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Latent Event-Predictive Encodings through Counterfactual Regularization [0.9449650062296823]
We introduce a SUrprise-GAted Recurrent neural network (SUGAR) using a novel form of counterfactual regularization.
We test the model on a hierarchical sequence prediction task, where sequences are generated by alternating hidden graph structures.
arXiv Detail & Related papers (2021-05-12T18:30:09Z)
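As referenced in the first related paper above, a spherical latent space can be imposed quite directly. The following is a minimal sketch under our own assumptions, not that paper's construction: the encoder output is L2-normalised onto the unit sphere $S^2$ embedded in $\mathbb{R}^3$, so every latent code carries spherical topology by construction. The class name and dimensions are illustrative.

```python
import torch

class SphericalEncoder(torch.nn.Module):
    """Encoder whose latent codes lie on the unit sphere S^2 in R^3."""
    def __init__(self, dim_in=784):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim_in, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 3))  # embed S^2 in R^3

    def forward(self, x):
        z = self.net(x)
        # L2-normalise: every code lands on the unit sphere, fixing the
        # latent topology regardless of the input distribution.
        return z / z.norm(dim=-1, keepdim=True).clamp_min(1e-8)

codes = SphericalEncoder()(torch.randn(4, 784))
assert torch.allclose(codes.norm(dim=-1), torch.ones(4), atol=1e-5)
```

Product latent spaces such as $S^2 \otimes S^2$ would be handled analogously by normalising each factor separately; projective spaces additionally identify antipodal points.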
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.