Quadric hypersurface intersection for manifold learning in feature space
- URL: http://arxiv.org/abs/2102.06186v1
- Date: Thu, 11 Feb 2021 18:52:08 GMT
- Title: Quadric hypersurface intersection for manifold learning in feature space
- Authors: Fedor Pavutnitskiy, Sergei O. Ivanov, Evgeny Abramov, Viacheslav
Borovitskiy, Artem Klochkov, Viktor Vialov, Anatolii Zaikovskii, Aleksandr
Petiushko
- Abstract summary: A manifold learning technique suitable for moderately high dimension and large datasets.
The manifold is learned from the training data in the form of an intersection of quadric hypersurfaces.
At test time, this manifold can be used to introduce an outlier score for arbitrary new points.
- Score: 52.83976795260532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The knowledge that data lies close to a particular submanifold of the ambient
Euclidean space may be useful in a number of ways. For instance, one may want
to automatically mark any point far away from the submanifold as an outlier, or
to use its geodesic distance to measure similarity between points. Classical
problems for manifold learning are often posed in a very high dimension, e.g.
for spaces of images or spaces of representations of words. Today, with deep
representation learning on the rise in areas such as computer vision and
natural language processing, many problems of this kind may be transformed into
problems of moderately high dimension, typically of the order of hundreds.
Motivated by this, we propose a manifold learning technique suitable for
moderately high dimension and large datasets. The manifold is learned from the
training data in the form of an intersection of quadric hypersurfaces -- simple
but expressive objects. At test time, this manifold can be used to introduce an
outlier score for arbitrary new points and to improve a given similarity metric
by incorporating learned geometric structure into it.
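The core idea of the abstract can be sketched in a few lines: lift points to degree-2 monomial features, find coefficient vectors of quadrics that approximately vanish on the training data (here via the smallest right singular vectors, one plausible choice), and score new points by the residual of those quadrics. This is a minimal illustration of the general approach, not the paper's actual algorithm; the function names and the SVD-based fit are assumptions for the sketch.

```python
import numpy as np

def quadratic_features(X):
    """Map points to degree-2 monomials: x_i*x_j (i <= j), x_i, and 1."""
    n, d = X.shape
    cross = np.einsum('ni,nj->nij', X, X)      # all pairwise products
    iu = np.triu_indices(d)                    # keep each product once
    return np.hstack([cross[:, iu[0], iu[1]], X, np.ones((n, 1))])

def fit_quadrics(X, k):
    """Coefficients of k quadrics that approximately vanish on X.

    The right singular vectors with the smallest singular values of the
    feature matrix give quadratic polynomials that are close to zero on
    the training data, so their common zero set approximates the manifold.
    """
    Phi = quadratic_features(X)
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    return Vt[-k:]                             # (k, n_features)

def outlier_score(X_new, Q):
    """Residual norm of the learned quadrics at each new point."""
    return np.linalg.norm(quadratic_features(X_new) @ Q.T, axis=1)
```

For example, fitting a single quadric (k = 1) to points on the unit circle recovers the polynomial x^2 + y^2 - 1 up to scale, so a point on the circle scores near zero while a point far from it scores high.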
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- Adversarial Estimation of Topological Dimension with Harmonic Score Maps [7.34158170612151]
We show that it is possible to retrieve the topological dimension of the manifold learned by the score map.
We then introduce a novel method to measure the learned manifold's topological dimension using adversarial attacks.
arXiv Detail & Related papers (2023-12-11T22:29:54Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifold, named soft manifold, that can solve this situation.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Alignment and Outer Shell Isotropy for Hyperbolic Graph Contrastive Learning [69.6810940330906]
We propose a novel contrastive learning framework to learn high-quality graph embedding.
Specifically, we design the alignment metric that effectively captures the hierarchical data-invariant information.
We show that in the hyperbolic space one has to address the leaf- and height-level uniformity which are related to properties of trees.
arXiv Detail & Related papers (2023-10-27T15:31:42Z)
- Learning Pose Image Manifolds Using Geometry-Preserving GANs and Elasticae [13.202747831999414]
Geometric Style-GAN (Geom-SGAN) maps images to low-dimensional latent representations.
Euler's elastica smoothly interpolate between directed points (points + tangent directions) in the low-dimensional latent space.
arXiv Detail & Related papers (2023-05-17T18:45:56Z)
- Hyperbolic Geometry in Computer Vision: A Survey [37.76526815020212]
This paper presents the first and most up-to-date literature review of hyperbolic spaces for computer vision applications.
We first introduce the background of hyperbolic geometry, followed by a comprehensive investigation of algorithms, with geometric prior of hyperbolic space, in the context of visual applications.
arXiv Detail & Related papers (2023-04-21T06:22:16Z)
- Algebraic Machine Learning with an Application to Chemistry [0.0]
We develop a machine learning pipeline that captures fine-grain geometric information without relying on smoothness assumptions.
In particular, we propose a method for numerically detecting points lying near the singular locus of the underlying variety.
arXiv Detail & Related papers (2022-05-11T22:41:19Z)
- Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product space.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendations show that the proposed switch space achieves new state-of-the-art performances.
arXiv Detail & Related papers (2021-02-17T11:06:59Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- On Equivariant and Invariant Learning of Object Landmark Representations [36.214069685880986]
We develop a simple and effective approach by combining instance-discriminative and spatially-discriminative contrastive learning.
We show that when a deep network is trained to be invariant to geometric and photometric transformations, representations emerge from its intermediate layers that are highly predictive of object landmarks.
arXiv Detail & Related papers (2020-06-26T04:06:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.