Generative Locally Linear Embedding
- URL: http://arxiv.org/abs/2104.01525v1
- Date: Sun, 4 Apr 2021 02:59:39 GMT
- Title: Generative Locally Linear Embedding
- Authors: Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley
- Abstract summary: Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method.
We propose two novel generative versions of LLE, named Generative LLE (GLLE).
Our simulations show that the proposed GLLE methods work effectively in unfolding and generating submanifolds of data.
- Score: 5.967999555890417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality
reduction and manifold learning method. It has two main steps which are linear
reconstruction and linear embedding of points in the input space and embedding
space, respectively. In this work, we propose two novel generative versions of
LLE, named Generative LLE (GLLE), whose linear reconstruction steps are
stochastic rather than deterministic. GLLE assumes that every data point is
caused by its linear reconstruction weights as latent factors. The proposed
GLLE algorithms can generate various LLE embeddings stochastically while all
the generated embeddings relate to the original LLE embedding. We propose two
versions for stochastic linear reconstruction, one using expectation
maximization and another with direct sampling from a derived distribution by
optimization. The proposed GLLE methods are closely related to and inspired by
variational inference, factor analysis, and probabilistic principal component
analysis. Our simulations show that the proposed GLLE methods work effectively
in unfolding and generating submanifolds of data.
Related papers
- RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-identification [59.5042031913258]
Non-linear modality discrepancy mainly comes from diverse linear transformations acting on the surface of different materials.
We propose a Random Linear Enhancement (RLE) strategy which includes Moderate Random Linear Enhancement (MRLE) and Radical Random Linear Enhancement (RRLE).
The experimental results not only demonstrate the superiority and effectiveness of RLE but also confirm its great potential as a general-purpose data augmentation for cross-spectral re-identification.
arXiv Detail & Related papers (2024-11-02T12:13:37Z) - Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Theoretical Connection between Locally Linear Embedding, Factor
Analysis, and Probabilistic PCA [13.753161236029328]
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method.
In this work, we look at the linear reconstruction step from a perspective where it is assumed that every data point is conditioned on its linear reconstruction weights as latent factors.
arXiv Detail & Related papers (2022-03-25T21:07:20Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - DAGs with No Curl: An Efficient DAG Structure Learning Approach [62.885572432958504]
Recently, directed acyclic graph (DAG) structure learning has been formulated as a constrained continuous optimization problem with continuous acyclicity constraints.
We propose a novel learning framework to model and learn the weighted adjacency matrices in the DAG space directly.
We show that our method provides comparable accuracy but better efficiency than baseline DAG structure learning methods on both linear and generalized structural equation models.
arXiv Detail & Related papers (2021-06-14T07:11:36Z) - Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z) - Locally Linear Embedding and its Variants: Tutorial and Survey [13.753161236029328]
The idea of Locally Linear Embedding (LLE) is fitting the local structure of manifold in the embedding space.
In this paper, we first cover LLE, kernel LLE, inverse LLE, and feature fusion with LLE.
Then, we introduce fusion of LLE with other manifold learning methods including Isomap (i.e., ISOLLE), principal component analysis, Fisher discriminant analysis, discriminant LLE, and Isotop.
arXiv Detail & Related papers (2020-11-22T03:44:45Z) - Semiparametric Nonlinear Bipartite Graph Representation Learning with
Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.