Linearized Diffusion Map
- URL: http://arxiv.org/abs/2507.14257v1
- Date: Fri, 18 Jul 2025 11:56:41 GMT
- Title: Linearized Diffusion Map
- Authors: Julio Candanedo
- Abstract summary: We introduce the Linearized Diffusion Map (LDM), a novel linear dimensionality reduction method constructed via a linear approximation of the diffusion-map kernel. Our analysis positions LDM as a valuable new linear dimensionality reduction technique with promising theoretical and practical extensions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the Linearized Diffusion Map (LDM), a novel linear dimensionality reduction method constructed via a linear approximation of the diffusion-map kernel. LDM integrates the geometric intuition of diffusion-based nonlinear methods with the computational simplicity, efficiency, and interpretability inherent in linear embeddings such as PCA and classical MDS. Through comprehensive experiments on synthetic datasets (Swiss roll and hyperspheres) and real-world benchmarks (MNIST and COIL-20), we illustrate that LDM captures distinct geometric features of datasets compared to PCA, offering complementary advantages. Specifically, LDM embeddings outperform PCA in datasets exhibiting explicit manifold structures, particularly in high-dimensional regimes, whereas PCA remains preferable in scenarios dominated by variance or noise. Furthermore, the complete positivity of LDM's kernel matrix allows direct applicability of Non-negative Matrix Factorization (NMF), suggesting opportunities for interpretable latent-structure discovery. Our analysis positions LDM as a valuable new linear dimensionality reduction technique with promising theoretical and practical extensions.
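The abstract does not spell out the exact linearization, so the following is only a minimal NumPy sketch of the general pipeline under stated assumptions: build the Gaussian diffusion-map kernel, replace it with a first-order (linear) approximation (our assumption, not necessarily the paper's construction), and embed with the leading eigenvectors. Clipping keeps the approximate kernel entrywise non-negative, loosely echoing the positivity property that the abstract says makes NMF applicable; the usual Markov normalization of diffusion maps is skipped for brevity.

```python
import numpy as np

def diffusion_kernel(X, eps):
    """Gaussian diffusion-map kernel K_ij = exp(-||x_i - x_j||^2 / eps)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / eps)

def linearized_kernel(X, eps):
    """First-order (linear) approximation exp(-d^2/eps) ~= 1 - d^2/eps,
    clipped at zero so the matrix stays entrywise non-negative.
    This linearization is an assumption for illustration only."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.clip(1.0 - sq / eps, 0.0, None)

def embed(K, n_components=2):
    """Embed points with the leading eigenvectors of the symmetric kernel
    (the Markov normalization of classical diffusion maps is omitted here)."""
    vals, vecs = np.linalg.eigh(K)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy circle embedded in 10-D as a toy manifold-structured dataset.
    t = rng.uniform(0, 2 * np.pi, 300)
    X = np.zeros((300, 10))
    X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
    X += 0.05 * rng.standard_normal(X.shape)
    Y_diff = embed(diffusion_kernel(X, eps=4.0))
    Y_lin = embed(linearized_kernel(X, eps=4.0))
    print(Y_diff.shape, Y_lin.shape)  # (300, 2) (300, 2)
```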
Related papers
- Nonparametric Linear Discriminant Analysis for High Dimensional Matrix-Valued Data [0.0]
We propose a novel extension of Fisher's Linear Discriminant Analysis (LDA) tailored for matrix-valued observations. We adopt a nonparametric empirical Bayes framework based on Nonparametric Maximum Likelihood Estimation (NPMLE). Our method generalizes effectively to the matrix setting, thereby improving classification performance.
arXiv Detail & Related papers (2025-07-25T07:30:24Z) - Enforcing Latent Euclidean Geometry in Single-Cell VAEs for Manifold Interpolation [79.27003481818413]
We introduce FlatVI, a training framework that regularises the latent manifold of discrete-likelihood variational autoencoders towards Euclidean geometry. By encouraging straight lines in the latent space to approximate geodesics on the decoded single-cell manifold, FlatVI enhances compatibility with downstream approaches.
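As a rough illustration of the "straight latent lines should decode to geodesics" idea, the sketch below penalizes how far the decoded image of a latent line segment deviates from the straight chord between the decoded endpoints. The decoder, the penalty form, and all names here are hypothetical stand-ins; FlatVI's actual objective is defined through the pullback Riemannian metric and may differ substantially.

```python
import numpy as np

def flatness_penalty(decoder, z_a, z_b, n_points=8):
    """Hypothetical latent-flatness regularizer: sample points on the straight
    line between two latent codes and penalize how far their decodings deviate
    from the straight chord between the decoded endpoints. If the decoded image
    of a latent line is (close to) a straight line, the penalty is (close to) zero."""
    ts = np.linspace(0.0, 1.0, n_points)[:, None]
    z_line = (1 - ts) * z_a[None, :] + ts * z_b[None, :]    # straight line in latent space
    decoded = np.stack([decoder(z) for z in z_line])         # its image on the data manifold
    chord = (1 - ts) * decoded[0][None, :] + ts * decoded[-1][None, :]
    return float(np.mean(np.sum((decoded - chord) ** 2, axis=-1)))

# Toy usage with a deliberately non-linear "decoder" (a stand-in, not a trained VAE).
decoder = lambda z: np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])
print(flatness_penalty(decoder, np.array([-1.0, 0.0]), np.array([1.0, 0.0])))  # > 0
```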
arXiv Detail & Related papers (2025-07-15T23:08:14Z) - Accelerating Constrained Sampling: A Large Deviations Approach [11.382163777108385]
This work focuses on the long-time behavior of SRNLMC, where a skew-symmetric matrix is added to RLD. By explicitly characterizing the rate functions, we show that this choice of the skew-symmetric matrix accelerates convergence to the target distribution. Experiments for SRNLMC based on the proposed skew-symmetric matrix show superior performance.
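The summary describes adding a skew-symmetric matrix to the drift of Langevin dynamics. A standard way to realize that idea is the non-reversible update x_{k+1} = x_k - eta * (I + J) grad U(x_k) + sqrt(2 * eta) * xi_k with J^T = -J, which keeps exp(-U) as the target while potentially mixing faster. The sketch below shows only this generic mechanism; the reflection/constraint handling specific to SRNLMC is omitted, and all names are illustrative.

```python
import numpy as np

def nonreversible_langevin(grad_U, x0, J, step=1e-2, n_steps=20000, rng=None):
    """Unadjusted Langevin iteration with a skew-symmetric perturbation J:
        x_{k+1} = x_k - step * (I + J) grad_U(x_k) + sqrt(2*step) * xi_k.
    For skew-symmetric J the continuous-time dynamics still target exp(-U),
    and a well-chosen J can speed up mixing."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    A = np.eye(len(x)) + J
    samples = []
    for _ in range(n_steps):
        x = x - step * A @ grad_U(x) + np.sqrt(2 * step) * rng.standard_normal(len(x))
        samples.append(x.copy())
    return np.array(samples)

# Target: standard 2-D Gaussian, U(x) = ||x||^2 / 2, with a simple skew-symmetric J.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
samples = nonreversible_langevin(lambda x: x, np.zeros(2), J)
print(samples.mean(axis=0), samples.var(axis=0))  # roughly zero mean, roughly unit variance
```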
arXiv Detail & Related papers (2025-06-09T14:44:39Z) - Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
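How the two eigenanalyses are combined is not stated in the summary, so the sketch below makes an assumption purely for illustration: project the data onto the leading eigenvectors of the sample covariance and of a stand-in Hessian matrix of the same dimensionality, then concatenate the projections as classifier features. The paper's actual fusion rule may be different.

```python
import numpy as np

def top_eigvecs(M, k):
    """Leading k eigenvectors of a symmetric matrix (largest eigenvalues first)."""
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def joint_projection(X, hessian, k=2):
    """Hypothetical combination of the two eigenanalyses: project the data onto
    the leading eigenvectors of (i) its sample covariance and (ii) a model Hessian
    of matching dimensionality, then concatenate the projections as features."""
    C = np.cov(X, rowvar=False)
    V_cov, V_hess = top_eigvecs(C, k), top_eigvecs(hessian, k)
    return np.hstack([X @ V_cov, X @ V_hess])

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
H = np.diag([5.0, 1.0, 0.5, 0.1, 0.1])   # stand-in for a (d x d) model Hessian
print(joint_projection(X, H).shape)       # (100, 4)
```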
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNN) are an effective framework that exploit inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
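A coVariance filter, as described in the VNN line of work, uses the sample covariance matrix as the graph shift operator in a polynomial graph filter, z = sum_k h_k C^k x. The minimal sketch below implements one such filter with hand-picked coefficients and a pointwise nonlinearity; the learnable, multi-layer architecture of the paper is not reproduced, and the coefficient values are arbitrary.

```python
import numpy as np

def covariance_filter(X, coeffs):
    """coVariance filter: a polynomial in the sample covariance matrix C acts as
    the graph filter, z = sum_k h_k C^k x, with C playing the role of the graph
    shift operator. A VNN layer stacks such filters with a pointwise nonlinearity;
    this sketch shows a single filter with fixed coefficients."""
    C = np.cov(X, rowvar=False)
    d = C.shape[0]
    out = np.zeros_like(X)
    Ck = np.eye(d)
    for h in coeffs:            # accumulate sum_k h_k X C^k (C is symmetric)
        out += h * X @ Ck
        Ck = Ck @ C
    return np.tanh(out)         # pointwise nonlinearity completes the layer

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))    # 200 samples, 8 variables (graph nodes)
Z = covariance_filter(X, coeffs=[0.5, 0.3, 0.1])
print(Z.shape)                       # (200, 8)
```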
arXiv Detail & Related papers (2022-05-31T15:04:43Z) - Unsupervised Learning Discriminative MIG Detectors in Nonhomogeneous Clutter [0.8984888893275712]
Principal component analysis (PCA) maps high-dimensional data into a lower-dimensional space maximizing the data variance.
Inspired by the principle of PCA, a novel type of learning discriminative matrix information geometry (MIG) detectors are developed.
Three discriminative MIG detectors are illustrated with respect to different geometric measures.
arXiv Detail & Related papers (2022-04-24T13:50:05Z) - Theoretical Connection between Locally Linear Embedding, Factor Analysis, and Probabilistic PCA [13.753161236029328]
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method.
In this work, we look at the linear reconstruction step from a perspective where it is assumed that every data point is conditioned on its linear reconstruction weights as latent factors.
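For reference, the linear reconstruction step mentioned above is the standard first stage of LLE: each point is expressed as an affine combination of its k nearest neighbours, with the weights obtained in closed form from a regularized local Gram matrix. The sketch below computes exactly those weights; the cited paper's probabilistic reinterpretation of them as latent factors is not implemented here.

```python
import numpy as np

def lle_reconstruction_weights(X, k=5, reg=1e-3):
    """Linear reconstruction step of LLE: for each point, find the weights over
    its k nearest neighbours that best reconstruct it, subject to the weights
    summing to one. Solved in closed form from the local Gram matrix, with a
    small ridge term for numerical stability."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]           # skip the point itself
        Z = X[nbrs] - X[i]                           # centre neighbours on x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)           # regularize the Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()                     # enforce the sum-to-one constraint
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
W = lle_reconstruction_weights(X)
print(np.allclose(W.sum(axis=1), 1.0))               # True: each row sums to one
```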
arXiv Detail & Related papers (2022-03-25T21:07:20Z) - Pseudo-Spherical Contrastive Divergence [119.28384561517292]
We propose pseudo-spherical contrastive divergence (PS-CD) to generalize maximum likelihood learning of energy-based models.
PS-CD avoids the intractable partition function and provides a generalized family of learning objectives.
arXiv Detail & Related papers (2021-11-01T09:17:15Z) - Improving Metric Dimensionality Reduction with Distributed Topology [68.8204255655161]
DIPOLE is a dimensionality-reduction post-processing step that corrects an initial embedding by minimizing a loss functional with both a local, metric term and a global, topological term.
We observe that DIPOLE outperforms popular methods such as UMAP, t-SNE, and Isomap on a number of standard datasets.
arXiv Detail & Related papers (2021-06-14T17:19:44Z) - Learning Generative Prior with Latent Space Sparsity Constraints [25.213673771175692]
It has been argued that the distribution of natural images does not lie on a single manifold but rather on a union of several submanifolds.
We propose a sparsity-driven latent space sampling (SDLSS) framework and develop a proximal meta-learning (PML) algorithm to enforce sparsity in the latent space.
The results demonstrate that for a higher degree of compression, the SDLSS method is more efficient than the state-of-the-art method.
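Enforcing sparsity in a latent space with a proximal step typically means soft-thresholding the latent code between gradient updates on the data-fit term. The sketch below is a plain proximal-gradient (ISTA-style) loop on a toy linear "generator" and random compressive measurements, all of which are hypothetical stand-ins; SDLSS with proximal meta-learning is a learned, more sophisticated variant of this basic scheme.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 norm: shrinks each latent coordinate toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_latent_recovery(grad_fit, z_dim, lam=0.05, step=0.08, n_iters=3000):
    """Toy proximal-gradient sketch of sparsity-driven latent-space sampling:
    alternate a gradient step on the measurement-fit loss with a soft-thresholding
    step that keeps the latent code z sparse. grad_fit(z) returns the gradient of
    the data-fit term; everything here is a simplified stand-in."""
    z = np.zeros(z_dim)
    for _ in range(n_iters):
        z = soft_threshold(z - step * grad_fit(z), step * lam)
    return z

# Toy setting: linear "generator" G(z) = G_mat @ z and compressive measurements y = A G(z*).
rng = np.random.default_rng(0)
G_mat = rng.standard_normal((20, 8)) / np.sqrt(8)
A = rng.standard_normal((10, 20)) / np.sqrt(20)
z_true = np.zeros(8); z_true[[1, 5]] = [1.5, -2.0]      # sparse ground-truth latent code
y = A @ (G_mat @ z_true)
M = A @ G_mat
grad_fit = lambda z: 2.0 * M.T @ (M @ z - y)             # gradient of ||A G(z) - y||^2
print(np.round(sparse_latent_recovery(grad_fit, z_dim=8), 2))  # a sparse latent estimate
```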
arXiv Detail & Related papers (2021-05-25T14:12:04Z)