Locally Linear Embedding and its Variants: Tutorial and Survey
- URL: http://arxiv.org/abs/2011.10925v1
- Date: Sun, 22 Nov 2020 03:44:45 GMT
- Title: Locally Linear Embedding and its Variants: Tutorial and Survey
- Authors: Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley
- Abstract summary: The idea of Locally Linear Embedding (LLE) is fitting the local structure of the manifold in the embedding space.
In this paper, we first cover LLE, kernel LLE, inverse LLE, and feature fusion with LLE.
Then, we introduce fusion of LLE with other manifold learning methods including Isomap (i.e., ISOLLE), principal component analysis, Fisher discriminant analysis, discriminant LLE, and Isotop.
- Score: 13.753161236029328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This is a tutorial and survey paper for Locally Linear Embedding (LLE) and
its variants. The idea of LLE is fitting the local structure of the manifold in the
embedding space. In this paper, we first cover LLE, kernel LLE, inverse LLE,
and feature fusion with LLE. Then, we cover out-of-sample embedding using
linear reconstruction, eigenfunctions, and kernel mapping. Incremental LLE is
explained for embedding streaming data. Landmark LLE methods using the Nyström
approximation and locally linear landmarks are explained for big data
embedding. We introduce methods for selecting the number of neighbors using
residual variance, Procrustes statistics, preservation
neighborhood error, and local neighborhood selection. Afterwards, Supervised
LLE (SLLE), enhanced SLLE, SLLE projection, probabilistic SLLE, supervised
guided LLE (using Hilbert-Schmidt independence criterion), and semi-supervised
LLE are explained for supervised and semi-supervised embedding. Robust LLE
methods using the least squares problem and penalty functions are also introduced
for embedding in the presence of outliers and noise. Then, we introduce fusion
of LLE with other manifold learning methods including Isomap (i.e., ISOLLE),
principal component analysis, Fisher discriminant analysis, discriminant LLE,
and Isotop. Finally, we explain weighted LLE in which the distances,
reconstruction weights, or the embeddings are adjusted for better embedding; we
cover weighted LLE for deformed distributed data, weighted LLE using
probability of occurrence, SLLE by adjusting weights, modified LLE, and
iterative LLE.
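As a concrete illustration of the core pipeline surveyed above (local reconstruction weights, then a spectral embedding that preserves them), here is a minimal NumPy sketch of standard LLE. The dataset, neighbor count, and regularization constant are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    """Basic Locally Linear Embedding (Roweis & Saul style sketch)."""
    n = X.shape[0]
    # Pairwise squared distances -> k nearest neighbors (excluding self).
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
    # Step 1: weights minimizing ||x_i - sum_j w_ij x_j||^2 with sum_j w_ij = 1,
    # solved from the local Gram matrix of centered neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]               # neighbors centered at x_i
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)  # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()
    # Step 2: embedding = bottom eigenvectors of M = (I - W)^T (I - W),
    # discarding the constant eigenvector (eigenvalue ~ 0).
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

rng = np.random.default_rng(0)
# Illustrative data: a noisy 1D curve embedded in 3D.
t = rng.uniform(0, 3 * np.pi, 300)
X = np.c_[np.sin(t), np.cos(t), t / 3] + 0.01 * rng.normal(size=(300, 3))
Y = lle(X, k=8, d=2)
print(Y.shape)  # (300, 2)
```

The variants in the survey modify individual steps of this sketch: supervised and weighted LLE adjust the distances or weights in Step 1, while out-of-sample and incremental methods avoid recomputing the full eigendecomposition in Step 2.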
Related papers
- A Bayesian Approach Toward Robust Multidimensional Ellipsoid-Specific Fitting [0.0]
This work presents a novel and effective method for fitting multidimensional ellipsoids to scattered data in the contamination of noise and outliers.
We incorporate a uniform prior distribution to constrain the search for primitive parameters within an ellipsoidal domain.
We apply it to a wide range of practical applications such as microscopy cell counting, 3D reconstruction, geometric shape approximation, and magnetometer calibration tasks.
arXiv Detail & Related papers (2024-07-27T14:31:51Z) - Conformal inference for regression on Riemannian Manifolds [49.7719149179179]
We investigate prediction sets for regression scenarios when the response variable, denoted by $Y$, resides in a manifold, and the covariate, denoted by $X$, lies in Euclidean space.
We prove the almost sure convergence of the empirical version of these regions on the manifold to their population counterparts.
arXiv Detail & Related papers (2023-10-12T10:56:25Z) - Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from
KKT Conditions for Margin Maximization [59.038366742773164]
Linear classifiers and leaky ReLU networks trained by gradient flow on the logistic loss have an implicit bias towards satisfying the Karush-Kuhn-Tucker (KKT) conditions.
In this work we establish a number of settings where the satisfaction of these conditions implies benign overfitting in linear classifiers and in two-layer leaky ReLU networks.
arXiv Detail & Related papers (2023-03-02T18:24:26Z) - Generating detailed saliency maps using model-agnostic methods [0.0]
We focus on a model-agnostic explainability method called RISE and elaborate on observed shortcomings of its grid-based approach.
Our modifications, collectively called VRISE (Voronoi-RISE), are meant to improve the accuracy of maps generated using large occlusions.
We compare accuracy of saliency maps produced by VRISE and RISE on the validation split of ILSVRC2012, using a saliency-guided content insertion/deletion metric and a localization metric based on bounding boxes.
arXiv Detail & Related papers (2022-09-04T21:34:46Z) - Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z) - Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower dimensional coordinates.
This article proposes a new methodology for addressing these problems, allowing interpolation of the estimated manifold between fitted data points.
arXiv Detail & Related papers (2021-10-14T15:50:38Z) - Generative Locally Linear Embedding [5.967999555890417]
Locally Linear Embedding (LLE) is a nonlinear spectral dimensionality reduction and manifold learning method.
We propose two novel generative versions of LLE, named Generative LLE (GLLE).
Our simulations show that the proposed GLLE methods work effectively in unfolding and generating submanifolds of data.
arXiv Detail & Related papers (2021-04-04T02:59:39Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Rethinking Content and Style: Exploring Bias for Unsupervised
Disentanglement [59.033559925639075]
We propose a formulation for unsupervised C-S disentanglement based on our assumption that different factors are of different importance and popularity for image reconstruction.
The corresponding model inductive bias is introduced by our proposed C-S Disentanglement Module (C-S DisMo).
Experiments on several popular datasets demonstrate that our method achieves the state-of-the-art unsupervised C-S disentanglement.
arXiv Detail & Related papers (2021-02-21T08:04:33Z) - Factor Analysis, Probabilistic Principal Component Analysis, Variational
Inference, and Variational Autoencoder: Tutorial and Survey [5.967999555890417]
This is a tutorial and survey paper on factor analysis, probabilistic Principal Component Analysis (PCA), variational inference, and the Variational Autoencoder (VAE).
These models assume that every data point is generated from or caused by a low-dimensional latent factor.
Given their inference and generative behaviour, these models can also be used for generation of new data points in the data space.
arXiv Detail & Related papers (2021-01-04T01:29:09Z) - Improved guarantees and a multiple-descent curve for Column Subset
Selection and the Nyström method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
arXiv Detail & Related papers (2020-02-21T00:43:06Z)
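The Nyström approximation, used both for landmark LLE in the survey and analyzed in the column subset selection paper above, can be sketched in a few lines: a large kernel matrix is approximated from its cross-kernel with a small set of landmark points. The RBF kernel, landmark count, and synthetic data below are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, gamma=0.1):
    """RBF (Gaussian) kernel between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

m = 20                                     # number of landmark points
idx = rng.choice(len(X), size=m, replace=False)
L = X[idx]

C = rbf(X, L)                              # n x m cross-kernel with landmarks
W = rbf(L, L)                              # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T     # Nystrom approximation of full kernel

K_full = rbf(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(K_approx.shape)  # (200, 200)
```

The approximation costs O(nm^2) instead of O(n^2) kernel evaluations plus an m x m pseudoinverse, which is what makes landmark-based methods attractive for big-data embedding; the quality depends on how fast the kernel's singular values decay, as the paper above analyzes.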
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.