Supervised Manifold Learning via Random Forest Geometry-Preserving Proximities
- URL: http://arxiv.org/abs/2307.01077v1
- Date: Mon, 3 Jul 2023 14:55:11 GMT
- Title: Supervised Manifold Learning via Random Forest Geometry-Preserving Proximities
- Authors: Jake S. Rhodes
- Abstract summary: We show the weaknesses of class-conditional manifold learning methods quantitatively and visually.
We propose an alternate choice of kernel for supervised dimensionality reduction using a data-geometry-preserving variant of random forest proximities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Manifold learning approaches seek the intrinsic, low-dimensional data
structure within a high-dimensional space. Mainstream manifold learning
algorithms such as Isomap, UMAP, $t$-SNE, Diffusion Map, and Laplacian
Eigenmaps do not use data labels and are thus considered unsupervised. Existing
supervised extensions of these methods are limited to classification problems
and fall short of uncovering meaningful embeddings due to their construction
using order non-preserving, class-conditional distances. In this paper, we show
the weaknesses of class-conditional manifold learning quantitatively and
visually and propose an alternate choice of kernel for supervised
dimensionality reduction using a data-geometry-preserving variant of random
forest proximities as an initialization for manifold learning methods. We show
that local structure preservation using these proximities is nearly universal
across manifold learning approaches and that global structure is properly maintained
using diffusion-based algorithms.
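To make the recipe concrete, here is a minimal sketch assuming scikit-learn: train a supervised random forest, build pairwise proximities, convert them to dissimilarities, and hand those to a manifold learning step. The shared-leaf proximity and the MDS embedding below are simplified stand-ins; the paper uses geometry-preserving (RF-GAP) proximities, which also account for out-of-bag status and leaf sizes, and evaluates several manifold learning methods, including diffusion-based ones.

```python
# Sketch only: shared-leaf proximities stand in for the paper's
# geometry-preserving RF-GAP proximities.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS

X, y = load_digits(return_X_y=True)

# Labels enter the embedding only through the supervised forest.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Proximity p(i, j) = fraction of trees in which i and j land in the same leaf.
leaves = forest.apply(X)                      # (n_samples, n_trees) leaf indices
n = X.shape[0]
prox = np.zeros((n, n))
for t in range(leaves.shape[1]):
    prox += leaves[:, t][:, None] == leaves[:, t][None, :]
prox /= leaves.shape[1]

# Convert the proximity kernel to a dissimilarity and embed it.
dist = np.sqrt(1.0 - prox)                    # common proximity-to-distance map
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dist)
```

Any method that accepts a precomputed kernel or distance matrix (e.g., diffusion maps, or UMAP with metric="precomputed") can be substituted for MDS in the last step.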
Related papers
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks such as classification, clustering, and retrieval.
To maintain the structure of the embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss (a coding-rate sketch follows this entry).
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
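The entry above names a coding-rate metric but gives no formula, so the following is a hedged sketch of one standard anti-collapse term from the MCR^2 (maximal coding rate reduction) literature; the paper's exact loss may differ in form and weighting.

```python
# Hypothetical illustration: coding rate R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z^T Z)
# for an embedding batch Z of shape (n, d). Maximizing R(Z) (minimizing its
# negation) keeps the embeddings from collapsing onto a few directions.
import torch

def coding_rate(Z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    n, d = Z.shape
    I = torch.eye(d, device=Z.device, dtype=Z.dtype)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z.T @ Z)

def anti_collapse_penalty(Z: torch.Tensor, eps: float = 0.5) -> torch.Tensor:
    # Added to the main DML loss; negated so optimizers maximize R(Z).
    return -coding_rate(Z, eps)
```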
- CBMAP: Clustering-based manifold approximation and projection for dimensionality reduction
Dimensionality reduction methods are employed to obtain faithful low-dimensional representations of high-dimensional data.
This study introduces a clustering-based approach, namely CBMAP, for dimensionality reduction.
CBMAP aims to preserve both global and local structures, ensuring that clusters in lower-dimensional spaces closely resemble those in high-dimensional spaces.
arXiv Detail & Related papers (2024-04-27T15:44:21Z)
- Scalable manifold learning by uniform landmark sampling and constrained locally linear embedding
We propose a scalable manifold learning (scML) method that handles large-scale, high-dimensional data efficiently.
We empirically validated the effectiveness of scML on synthetic datasets and real-world benchmarks of different types.
scML scales well with increasing data sizes and embedding dimensions, and exhibits promising performance in preserving the global structure.
arXiv Detail & Related papers (2024-01-02T08:43:06Z)
- A Heat Diffusion Perspective on Geodesic Preserving Dimensionality Reduction
We propose a more general heat-kernel-based manifold embedding method that we call heat geodesic embeddings (a sketch follows this entry).
Results show that our method outperforms the existing state of the art in preserving ground-truth manifold distances.
We also showcase our method on single-cell RNA-sequencing datasets with both continuum and cluster structure.
arXiv Detail & Related papers (2023-05-30T13:58:50Z)
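A rough sketch of the heat-kernel-to-geodesic principle behind the entry above: estimate manifold distances from a graph heat kernel via Varadhan's formula and embed them with MDS. This illustrates the idea only; the paper's actual algorithm (kernel construction, denoising, and embedding details) differs.

```python
# Illustrative sketch, not the paper's algorithm: Varadhan's formula
# d(x, y)^2 ≈ -4t * log h_t(x, y) recovers geodesics from the heat kernel.
import numpy as np
from scipy.linalg import expm
from sklearn.manifold import MDS
from sklearn.neighbors import kneighbors_graph

def heat_geodesic_embed(X, t=1.0, k=10, n_components=2):
    # Symmetric kNN adjacency and unnormalized graph Laplacian.
    A = kneighbors_graph(X, k, mode="connectivity").toarray()
    A = np.maximum(A, A.T)
    L = np.diag(A.sum(axis=1)) - A
    # Graph heat kernel; dense expm is O(n^3), fine for small datasets.
    H = expm(-t * L)
    # Varadhan's formula, clipped for numerical stability.
    D2 = -4.0 * t * np.log(np.clip(H, 1e-12, None))
    np.fill_diagonal(D2, 0.0)
    D = np.sqrt(np.maximum(D2, 0.0))
    D = 0.5 * (D + D.T)  # enforce symmetry before embedding
    return MDS(n_components=n_components,
               dissimilarity="precomputed").fit_transform(D)
```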
- Intrinsic dimension estimation for discrete metrics
In this letter, we introduce an algorithm to infer the intrinsic dimension (ID) of datasets embedded in discrete spaces.
We demonstrate its accuracy on benchmark datasets, and we apply it to analyze a metagenomic dataset for species fingerprinting.
This suggests that evolutionary pressure acts on a low-dimensional manifold despite the high dimensionality of the sequence space.
arXiv Detail & Related papers (2022-07-20T06:38:36Z)
- Genetic Programming for Manifold Learning: Preserving Local Topology
We propose a new approach to using genetic programming for manifold learning, which preserves local topology.
This is expected to significantly improve performance on tasks where local neighbourhood structure (topology) is paramount.
arXiv Detail & Related papers (2021-08-23T03:48:48Z)
- Quadric hypersurface intersection for manifold learning in feature space
We propose a manifold learning technique suitable for moderately high-dimensional, large datasets.
The manifold is learned from the training data as an intersection of quadric hypersurfaces (a sketch follows this entry).
At test time, this manifold can be used to introduce an outlier score for arbitrary new points.
arXiv Detail & Related papers (2021-02-11T18:52:08Z)
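A toy sketch of the idea in the entry above: fit a quadric q(x) = 0 to the training data by least squares in the lifted monomial space and use the algebraic residual as an outlier score. The paper fits an intersection of several quadrics with a dedicated optimization; everything below is an illustrative simplification.

```python
# Toy sketch: a single quadric fitted by SVD stands in for the paper's
# intersection of quadric hypersurfaces.
import numpy as np

def lift(X):
    # Quadratic monomials x_i * x_j (i <= j), linear terms, and a constant.
    n, d = X.shape
    quads = np.stack([X[:, i] * X[:, j]
                      for i in range(d) for j in range(i, d)], axis=1)
    return np.hstack([quads, X, np.ones((n, 1))])

def fit_quadric(X):
    # Coefficients w minimizing ||lift(X) @ w|| with ||w|| = 1:
    # the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(lift(X), full_matrices=False)
    return Vt[-1]

def outlier_score(X_new, w):
    # Algebraic distance of new points to the fitted quadric.
    return np.abs(lift(X_new) @ w)
```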
- Functorial Manifold Learning
We first characterize manifold learning algorithms as functors that map pseudometric spaces to optimization objectives.
We then use this characterization to prove refinement bounds on manifold learning loss functions and construct a hierarchy of manifold learning algorithms.
We express several popular manifold learning algorithms as functors at different levels of this hierarchy, including Metric Multidimensional Scaling, IsoMap, and UMAP.
arXiv Detail & Related papers (2020-11-15T02:30:23Z)
- Extendable and invertible manifold learning with geometry regularized autoencoders
A fundamental task in data exploration is to extract simplified low-dimensional representations that capture the intrinsic geometry of the data.
Common approaches to this task use kernel methods for manifold learning.
We present a new method for integrating both approaches by incorporating a geometric regularization term in the bottleneck of the autoencoder (a sketch follows this entry).
arXiv Detail & Related papers (2020-07-14T15:59:10Z)
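A minimal sketch of the geometry-regularized autoencoder idea from the entry above, assuming PyTorch: the bottleneck is pulled toward a precomputed manifold embedding z_geo (e.g., from Isomap or a diffusion method), making that embedding extendable to new points via the encoder and invertible via the decoder. The architecture and weighting below are illustrative choices, not the paper's exact configuration.

```python
# Sketch: autoencoder whose bottleneck is anchored to a precomputed
# manifold-learning embedding z_geo (hypothetical setup).
import torch
import torch.nn as nn

class GeoRegAE(nn.Module):
    def __init__(self, d_in, d_latent=2, d_hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_in))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

def grae_loss(model, x, z_geo, lam=10.0):
    z, x_hat = model(x)
    recon = ((x_hat - x) ** 2).mean()       # reconstruction term
    geo = ((z - z_geo) ** 2).mean()         # geometric regularization
    return recon + lam * geo
```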
- Manifold Learning via Manifold Deflation
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Deep Metric Structured Learning For Facial Expression Recognition
We propose a deep metric learning model to create embedded sub-spaces with a well-defined structure.
A new loss function that imposes Gaussian structures on the output space is introduced to create these sub-spaces.
We experimentally demonstrate that the learned embedding can be successfully used for various applications including expression retrieval and emotion recognition.
arXiv Detail & Related papers (2020-01-18T06:23:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.