Yet Another Algorithm for Supervised Principal Component Analysis:
Supervised Linear Centroid-Encoder
- URL: http://arxiv.org/abs/2306.04622v1
- Date: Wed, 7 Jun 2023 17:52:29 GMT
- Title: Yet Another Algorithm for Supervised Principal Component Analysis:
Supervised Linear Centroid-Encoder
- Authors: Tomojit Ghosh, Michael Kirby
- Abstract summary: We propose a new supervised dimensionality reduction technique called Supervised Linear Centroid-Encoder (SLCE)
SLCE works by mapping the samples of a class to its class centroid using a linear transformation.
- Score: 1.2487990897680423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new supervised dimensionality reduction technique called
Supervised Linear Centroid-Encoder (SLCE), a linear counterpart of the
nonlinear Centroid-Encoder (CE) \citep{ghosh2022supervised}. SLCE works by
mapping the samples of a class to its class centroid using a linear
transformation. The transformation is a projection that reconstructs a point
such that its distance from the corresponding class centroid, i.e.,
centroid-reconstruction loss, is minimized in the ambient space. We derive a
closed-form solution using an eigendecomposition of a symmetric matrix. We
provide a detailed analysis and present key mathematical properties of the
proposed approach. We establish a
connection between the eigenvalues and the centroid-reconstruction loss. In
contrast to Principal Component Analysis (PCA) which reconstructs a sample in
the ambient space, the transformation of SLCE uses the instances of a class to
rebuild the corresponding class centroid. Therefore the proposed method can be
considered a form of supervised PCA. Experimental results show the performance
advantage of SLCE over other supervised methods.
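The abstract describes SLCE as a projection that minimizes the centroid-reconstruction loss, with a closed-form solution via the eigendecomposition of a symmetric matrix. A minimal numpy sketch of one plausible reading of that formulation follows: the projection is constrained to V Vᵀ with orthonormal V, and the loss Σᵢ‖c_{yᵢ} − V Vᵀ xᵢ‖² is minimized in closed form. The function name `slce_fit` and this exact parameterization are assumptions for illustration, not the authors' published implementation.

```python
import numpy as np

def slce_fit(X, y, k):
    """Sketch of a supervised linear centroid-encoder (assumed formulation).

    Finds an orthonormal basis V (d x k) minimizing the centroid-
    reconstruction loss  sum_i ||c_{y_i} - V V^T x_i||^2, where c_{y_i}
    is the centroid of sample i's class.  This is one plausible reading
    of the abstract, not necessarily the authors' exact method.
    """
    classes = np.unique(y)
    centroids = {c: X[y == c].mean(axis=0) for c in classes}
    C = np.stack([centroids[label] for label in y])  # n x d, per-sample centroid

    # Expanding the loss gives  const + tr(V^T (B - 2 A_sym) V)  with
    #   B = X^T X  and  A_sym = (X^T C + C^T X) / 2,
    # so the minimizer over orthonormal V is the top-k eigenvector set
    # of the symmetric matrix  2 A_sym - B.
    B = X.T @ X
    M = (X.T @ C + C.T @ X) - B  # symmetric: 2*A_sym - B
    vals, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    V = vecs[:, ::-1][:, :k]        # top-k eigenvectors
    return V

# Usage: two well-separated synthetic classes; the learned projection
# pulls each class toward its centroid in the embedding space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)) + 3,
               rng.normal(0, 1, (20, 5)) - 3])
y = np.array([0] * 20 + [1] * 20)
V = slce_fit(X, y, k=2)
Z = X @ V  # k-dimensional supervised embedding
```

Unlike PCA, which keeps directions of maximal variance for reconstructing each sample, the symmetric matrix here couples the data with the class centroids, which is the sense in which the abstract calls SLCE a form of supervised PCA.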
Related papers
- Robust Multi-Dimensional Scaling via Accelerated Alternating Projections [5.778024594615575]
We consider the robust multi-dimensional scaling (RMDS) problem in this paper.
Inspired by classic MDS theories, we propose an alternating projection technique.
arXiv Detail & Related papers (2025-01-04T06:28:10Z) - Self-Supervised Graph Embedding Clustering [70.36328717683297]
The K-means one-step dimensionality-reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z) - Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
arXiv Detail & Related papers (2023-07-12T03:48:26Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Sparse Linear Centroid-Encoder: A Convex Method for Feature Selection [1.057079240576682]
We present Sparse Linear Centroid-Encoder (SLCE), a novel feature selection technique.
The algorithm uses a linear network to reconstruct class centroids while selecting features at the same time.
arXiv Detail & Related papers (2023-06-07T23:07:55Z) - Entropic Wasserstein Component Analysis [8.744017403796406]
A key requirement for dimensionality reduction (DR) is to incorporate global dependencies among original and embedded samples.
We combine the principles of optimal transport (OT) and principal component analysis (PCA).
Our method seeks the best linear subspace that minimizes reconstruction error using entropic OT, which naturally encodes the neighborhood information of the samples.
arXiv Detail & Related papers (2023-03-09T08:59:33Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Fair Recommendation by Geometric Interpretation and Analysis of Matrix
Factorization [4.658166900129066]
A matrix-factorization-based recommender system is in effect an angle-preserving dimensionality reduction technique.
We reformulate the original angle-preserving dimensionality reduction problem as a distance-preserving dimensionality reduction problem.
We show that the input data of a recommender system, in its original higher-dimensional space, are distributed on concentric circles with interesting properties.
arXiv Detail & Related papers (2023-01-10T05:30:46Z) - A Manifold Proximal Linear Method for Sparse Spectral Clustering with
Application to Single-Cell RNA Sequencing Data Analysis [9.643152256249884]
This paper considers the widely adopted SSC model as an optimization problem with a nonsmooth and nonconvex objective.
We propose a new method (ManPL) that solves the original SSC problem.
Convergence results of the proposed method are established.
arXiv Detail & Related papers (2020-07-18T22:05:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.