Theoretical Connection between Locally Linear Embedding, Factor
Analysis, and Probabilistic PCA
- URL: http://arxiv.org/abs/2203.13911v1
- Date: Fri, 25 Mar 2022 21:07:20 GMT
- Title: Theoretical Connection between Locally Linear Embedding, Factor
Analysis, and Probabilistic PCA
- Authors: Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley
- Abstract summary: Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method.
In this work, we look at the linear reconstruction step from a stochastic perspective where it is assumed that every data point is conditioned on its linear reconstruction weights as latent factors.
- Score: 13.753161236029328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality
reduction and manifold learning method. It has two main steps which are linear
reconstruction and linear embedding of points in the input space and embedding
space, respectively. In this work, we look at the linear reconstruction step
from a stochastic perspective where it is assumed that every data point is
conditioned on its linear reconstruction weights as latent factors. The
stochastic linear reconstruction of LLE is solved using expectation
maximization. We show that there is a theoretical connection between three
fundamental dimensionality reduction methods, i.e., LLE, factor analysis, and
probabilistic Principal Component Analysis (PCA). The stochastic linear
reconstruction of LLE is formulated similarly to factor analysis and
probabilistic PCA. It is also explained why factor analysis and probabilistic
PCA are linear methods while LLE is nonlinear. This work builds a bridge
between two broad approaches to dimensionality reduction, i.e., the
spectral and probabilistic algorithms.
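To make the linear reconstruction step concrete, below is a minimal NumPy sketch of the classical (deterministic) formulation that the paper's stochastic treatment builds on; the function name, neighborhood size, and regularization constant are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lle_reconstruction_weights(X, k=5, reg=1e-3):
    # Classical LLE reconstruction: for each x_i, find weights over its
    # k nearest neighbors minimizing ||x_i - sum_j w_ij x_j||^2 subject
    # to sum_j w_ij = 1 (closed form via the local Gram matrix).
    n = X.shape[0]
    W = np.zeros((n, n))
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(sq_dists[i])[1:k + 1]  # skip the point itself
        Z = X[nbrs] - X[i]                       # neighbors centered on x_i
        G = Z @ Z.T                              # local Gram matrix, (k, k)
        G += reg * np.trace(G) * np.eye(k)       # regularize for stability
        w = np.linalg.solve(G, np.ones(k))       # solve G w = 1
        W[i, nbrs] = w / w.sum()                 # enforce sum-to-one constraint
    return W
```

In the stochastic view taken by the paper, each point is instead conditioned on its reconstruction weights as latent factors, mirroring how factor analysis and probabilistic PCA condition data on latent variables, and the weights are obtained with expectation maximization rather than the closed-form solve above.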
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- Mixture of partially linear experts [0.0]
We propose a partially linear structure that incorporates unspecified functions to capture nonlinear relationships.
We establish the identifiability of the proposed model under mild conditions and introduce a practical estimation algorithm.
arXiv Detail & Related papers (2024-05-05T12:10:37Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- $σ$-PCA: a building block for neural learning of identifiable linear transformations [0.0]
$σ$-PCA is a method that formulates a unified model for linear and nonlinear PCA.
Nonlinear PCA can be seen as a method that maximizes both variance and statistical independence.
arXiv Detail & Related papers (2023-11-22T18:34:49Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- A Bayesian Perspective for Determinant Minimization Based Robust Structured Matrix Factorization [10.355894890759377]
We introduce a Bayesian perspective for the structured matrix factorization problem.
We show that the corresponding maximum a posteriori estimation problem boils down to the robust determinant minimization approach for structured matrix factorization.
arXiv Detail & Related papers (2023-02-16T16:48:41Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Generative Locally Linear Embedding [5.967999555890417]
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method.
We propose two novel generative versions of LLE, named Generative LLE (GLLE).
Our simulations show that the proposed GLLE methods work effectively in unfolding and generating submanifolds of data.
arXiv Detail & Related papers (2021-04-04T02:59:39Z)
- Piecewise linear regression and classification [0.20305676256390928]
This paper proposes a method for solving multivariate regression and classification problems using piecewise linear predictors.
A Python implementation of the algorithm described in this paper is available at http://cse.lab.imtlucca.it/bemporad/parc.
arXiv Detail & Related papers (2021-03-10T17:07:57Z)
- Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems [107.3868459697569]
We introduce an eigendecomposition-free approach to training a deep network.
We show that our approach is much more robust than explicit differentiation of the eigendecomposition.
Our method has better convergence properties and yields state-of-the-art results.
arXiv Detail & Related papers (2020-04-15T04:29:34Z)
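As a rough, hedged illustration of the last entry above: the idea, as summarized, is to train without differentiating through an eigen-solver. One way to read it (the loss form below is an assumption for illustration, not the paper's exact objective) is that when the target is the smallest eigenvector of A^T A, one can instead penalize the residual of the ground-truth solution under the predicted matrix.

```python
import numpy as np

def ed_free_loss(A, x_true):
    # Hypothetical eigendecomposition-free surrogate loss (a sketch,
    # not the paper's stated objective).
    # A: (m, n) matrix predicted by a network for one training sample.
    # x_true: (n,) unit-norm ground-truth minimizer of ||A x||^2 over
    #         ||x|| = 1, i.e. the smallest eigenvector of A^T A.
    # Driving ||A x_true||^2 to zero pushes x_true into the null space
    # of A with no eigendecomposition in the computation graph. A real
    # implementation also needs a safeguard against the degenerate
    # solution A -> 0 (e.g. a normalization of A), omitted here.
    residual = A @ x_true
    return float(residual @ residual)
```

Because the loss is an ordinary quadratic form in A, its gradient is available from standard automatic differentiation, which is what would make such training eigendecomposition-free.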