CAMEL: Curvature-Augmented Manifold Embedding and Learning
- URL: http://arxiv.org/abs/2303.02561v2
- Date: Tue, 16 Jan 2024 17:06:42 GMT
- Title: CAMEL: Curvature-Augmented Manifold Embedding and Learning
- Authors: Nan Xu, Yongming Liu
- Abstract summary: Curvature-Augmented Manifold Embedding and Learning (CAMEL) is proposed for high dimensional data classification, dimension reduction, and visualization.
CAMEL has been evaluated on various benchmark datasets and has been shown to outperform state-of-the-art methods.
- Score: 21.945022912830044
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A novel method, named Curvature-Augmented Manifold Embedding and Learning
(CAMEL), is proposed for high dimensional data classification, dimension
reduction, and visualization. CAMEL utilizes a topology metric defined on the
Riemannian manifold, and a unique Riemannian metric for both distance and
curvature to enhance its expressibility. The method also employs a smooth
partition of unity operator on the Riemannian manifold to convert localized
orthogonal projection to global embedding, which captures both the overall
topological structure and local similarity simultaneously. The local orthogonal
vectors provide a physical interpretation of the significant characteristics of
clusters. Therefore, CAMEL not only provides a low-dimensional embedding but
also interprets the physics behind this embedding. CAMEL has been evaluated on
various benchmark datasets and has been shown to outperform state-of-the-art
methods, especially for high-dimensional datasets. The method's distinct
benefits are its high expressibility, interpretability, and scalability. The
paper provides a detailed discussion on Riemannian distance and curvature
metrics, physical interpretability, hyperparameter effect, manifold stability,
and computational efficiency for a holistic understanding of CAMEL. Finally,
the paper presents the limitations and future work of CAMEL along with key
conclusions.
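The abstract's core construction, converting localized orthogonal projections into a global embedding through a smooth partition of unity, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the Gaussian weights, SVD-based local bases, neighborhood size, and all parameters below are illustrative assumptions.

```python
import numpy as np

def local_projections(X, centers, dim=2, k=20):
    """For each anchor point, fit a local orthogonal basis to its k nearest
    neighbors via SVD (top right-singular vectors of the centered block)."""
    bases = []
    for c in centers:
        d = np.linalg.norm(X - c, axis=1)
        nbrs = X[np.argsort(d)[:k]]
        _, _, Vt = np.linalg.svd(nbrs - nbrs.mean(0), full_matrices=False)
        bases.append(Vt[:dim])
    return bases

def partition_of_unity_embed(X, centers, bases, sigma=1.0):
    """Blend each point's local orthogonal coordinates with smooth Gaussian
    weights normalized to sum to one (a partition of unity), producing a
    single global low-dimensional embedding."""
    C = np.asarray(centers)
    D2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma ** 2))
    W /= W.sum(1, keepdims=True)           # rows sum to 1: partition of unity
    # local coordinates of every point in every anchor's orthogonal frame
    local = np.stack([(X - c) @ B.T for c, B in zip(centers, bases)], axis=1)
    return (W[:, :, None] * local).sum(1)  # weighted blend -> global embedding
```

In this toy version the local orthogonal vectors (rows of each basis) are what would carry the physical interpretation the abstract mentions: they name the dominant directions of variation within each cluster.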
Related papers
- Curvature Augmented Manifold Embedding and Learning [9.195829534223982]
A new dimensionality reduction (DR) and data visualization method, Curvature-Augmented Manifold Embedding and Learning (CAMEL), is proposed.
The key novel contribution is to formulate the DR problem as a mechanistic/physics model.
Compared with many existing attractive-repulsive force-based methods, a unique contribution of the proposed method is the inclusion of a non-pairwise force.
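A toy illustration of the attractive-repulsive force family this entry contrasts against, with a non-pairwise mean-field term standing in for the kind of many-body force the paper introduces. The specific force laws, edge list, and step sizes below are assumptions for illustration, not CAMEL's formulation.

```python
import numpy as np

def force_step(Y, edges, lr=0.1, rep=0.01, mean_pull=0.05):
    """One step of a toy force-directed layout on embedding Y (n x d):
    - pairwise attraction along graph edges,
    - pairwise repulsion between all point pairs,
    - a non-pairwise 'mean-field' pull toward the centroid, a many-body
      force that cannot be written as a sum over point pairs."""
    F = np.zeros_like(Y)
    for i, j in edges:                      # pairwise attraction (spring-like)
        d = Y[j] - Y[i]
        F[i] += d
        F[j] -= d
    diff = Y[:, None, :] - Y[None, :, :]    # pairwise repulsion ~ 1/distance
    dist2 = (diff ** 2).sum(-1) + 1e-9
    F += rep * (diff / dist2[:, :, None]).sum(1)
    F -= mean_pull * (Y - Y.mean(0))        # non-pairwise term
    return Y + lr * F
```

Iterating such steps until the forces balance yields the layout; the non-pairwise term here is only one possible choice of many-body force.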
arXiv Detail & Related papers (2024-03-21T19:59:07Z) - Scalable manifold learning by uniform landmark sampling and constrained locally linear embedding [0.6144680854063939]
We propose a scalable manifold learning (scML) method that can manipulate large-scale and high-dimensional data in an efficient manner.
We empirically validated the effectiveness of scML on synthetic datasets and real-world benchmarks of different types.
scML scales well with increasing data sizes and embedding dimensions, and exhibits promising performance in preserving the global structure.
arXiv Detail & Related papers (2024-01-02T08:43:06Z) - Shape And Structure Preserving Differential Privacy [70.08490462870144]
We show how the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism.
arXiv Detail & Related papers (2022-09-21T18:14:38Z) - Laplacian-based Cluster-Contractive t-SNE for High Dimensional Data Visualization [20.43471678277403]
We propose LaptSNE, a new graph-based dimensionality reduction method based on t-SNE.
Specifically, LaptSNE leverages the eigenvalue information of the graph Laplacian to shrink the potential clusters in the low-dimensional embedding.
We show how to calculate the gradient analytically, which may be of broad interest when considering optimization with Laplacian-composited objective.
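A minimal sketch in the spirit of a Laplacian-composited objective: build an affinity graph on the low-dimensional embedding and use the smallest non-trivial eigenvalues of its Laplacian as a cluster-contraction penalty. The Gaussian kernel, the unnormalized Laplacian, and the choice of k are illustrative assumptions, not LaptSNE's exact objective.

```python
import numpy as np

def laplacian_eig_penalty(Y, sigma=1.0, k=3):
    """Gaussian affinity graph on embedding Y, unnormalized graph Laplacian
    L = D - W, and the sum of the k smallest non-trivial eigenvalues.
    Small eigenvalues correspond to well-separated (contracted) clusters,
    so minimizing this term alongside a t-SNE loss shrinks clusters."""
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W         # unnormalized graph Laplacian
    evals = np.linalg.eigvalsh(L)     # eigenvalues in ascending order
    return evals[1:k + 1].sum()       # skip the trivial zero eigenvalue
```

The paper's analytic gradient would differentiate such an eigenvalue sum with respect to Y; the sketch above only evaluates the penalty itself.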
arXiv Detail & Related papers (2022-07-25T14:10:24Z) - A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods like UMAP, t-SNE, and Isomap on a number of popular datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
Many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure into the coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z) - On the minmax regret for statistical manifolds: the role of curvature [68.8204255655161]
Two-part codes and the minimum description length have been successful in delivering procedures to single out the best models.
We derive a sharper expression than the standard one given by the complexity, where the scalar curvature of the Fisher information metric plays a dominant role.
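For context, the standard asymptotic minimax-regret (stochastic complexity) expression for a k-parameter family with Fisher information matrix I(θ), which this entry's "standard one given by the complexity" refers to, is commonly written as:

```latex
\mathrm{REG}_{\mathrm{minimax}}(n)
  \;\approx\; \frac{k}{2}\log\frac{n}{2\pi}
  \;+\; \log \int_{\Theta} \sqrt{\det I(\theta)}\, d\theta
  \;+\; o(1)
```

The paper's contribution is a sharper version of this expansion in which the scalar curvature of the Fisher information metric enters the correction terms; the exact refined form is given in the paper itself.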
arXiv Detail & Related papers (2020-07-06T17:28:19Z) - Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution into the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z) - Learning Flat Latent Manifolds with VAEs [16.725880610265378]
We propose an extension to the framework of variational auto-encoders, where the Euclidean metric is a proxy for the similarity between data points.
We replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one.
We evaluate our method on a range of data-sets, including a video-tracking benchmark.
arXiv Detail & Related papers (2020-02-12T09:54:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.