LOCA: LOcal Conformal Autoencoder for standardized data coordinates
- URL: http://arxiv.org/abs/2004.07234v2
- Date: Thu, 14 Jan 2021 14:10:49 GMT
- Title: LOCA: LOcal Conformal Autoencoder for standardized data coordinates
- Authors: Erez Peterfreund, Ofir Lindenbaum, Felix Dietrich, Tom Bertalan, Matan
Gavish, Ioannis G. Kevrekidis, Ronald R. Coifman
- Abstract summary: We present a method for learning an embedding in $\mathbb{R}^d$ that is isometric to the latent variables of the manifold.
Our embedding is obtained using a LOcal Conformal Autoencoder (LOCA), an algorithm that constructs an embedding to rectify deformations.
We also apply LOCA to single-site Wi-Fi localization data, and to $3$-dimensional curved surface estimation.
- Score: 6.608924227377152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a deep-learning based method for obtaining standardized data
coordinates from scientific measurements. Data observations are modeled as
samples from an unknown, non-linear deformation of an underlying Riemannian
manifold, which is parametrized by a few normalized latent variables. By
leveraging a repeated measurement sampling strategy, we present a method for
learning an embedding in $\mathbb{R}^d$ that is isometric to the latent
variables of the manifold. These data coordinates, being invariant under smooth
changes of variables, enable matching between different instrumental
observations of the same phenomenon. Our embedding is obtained using a LOcal
Conformal Autoencoder (LOCA), an algorithm that constructs an embedding to
rectify deformations by using a local z-scoring procedure while preserving
relevant geometric information. We demonstrate the isometric embedding
properties of LOCA on various model settings and observe that it exhibits
promising interpolation and extrapolation capabilities. Finally, we apply LOCA
to single-site Wi-Fi localization data, and to $3$-dimensional curved surface
estimation based on a $2$-dimensional projection.
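The "local z-scoring" idea in the abstract can be made concrete: measurements arrive in small local bursts, and the embedding is trained so that the empirical covariance of each embedded burst matches $\sigma^2 I_d$, which enforces local conformality. A minimal NumPy sketch of such a whitening penalty (the array layout, function name, and squared-Frobenius penalty are illustrative assumptions, not the paper's exact loss):

```python
import numpy as np

def whitening_loss(embedded_bursts, sigma):
    """Whitening penalty over embedded measurement bursts.

    embedded_bursts: array of shape (n_bursts, burst_size, d) holding the
    embedding of each burst of repeated measurements.
    Pushes each burst's empirical covariance toward sigma^2 * I_d.
    """
    n, m, d = embedded_bursts.shape
    # Center each burst around its own mean.
    centered = embedded_bursts - embedded_bursts.mean(axis=1, keepdims=True)
    # Per-burst empirical covariance: (n_bursts, d, d).
    covs = np.einsum('bij,bik->bjk', centered, centered) / (m - 1)
    target = (sigma ** 2) * np.eye(d)
    # Mean squared Frobenius distance to the conformal target.
    return np.mean(np.sum((covs - target) ** 2, axis=(1, 2)))
```

In a full autoencoder this term would be combined with a reconstruction loss so that the embedding stays invertible while being locally whitened.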
Related papers
- STREAM: A Universal State-Space Model for Sparse Geometric Data [2.9483719973596303]
Handling unstructured geometric data, such as point clouds or event-based vision, is a pressing challenge in the field of machine vision.
We propose to encode geometric structure explicitly into the parameterization of a state-space model.
Our model deploys the Mamba selective state-space model with a modified kernel to efficiently map sparse data to modern hardware.
arXiv Detail & Related papers (2024-11-19T16:06:32Z)
- Thinner Latent Spaces: Detecting dimension and imposing invariance through autoencoder gradient constraints [9.380902608139902]
We show that orthogonality relations within the latent layer of the network can be leveraged to infer the intrinsic dimensionality of nonlinear manifold data sets.
We outline the relevant theory relying on differential geometry, and describe the corresponding gradient-descent optimization algorithm.
arXiv Detail & Related papers (2024-08-28T20:56:35Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
- Uniform Interpolation Constrained Geodesic Learning on Data Manifold [28.509561636926414]
Along the learned geodesic, our method can generate high-quality interpolations between two given data samples.
We provide a theoretical analysis of our model and use image translation as an example to demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-02-12T07:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.