JL-lemma derived Optimal Projections for Discriminative Dictionary
Learning
- URL: http://arxiv.org/abs/2308.13991v2
- Date: Tue, 12 Sep 2023 09:22:04 GMT
- Title: JL-lemma derived Optimal Projections for Discriminative Dictionary
Learning
- Authors: G.Madhuri, Atul Negi, Kaluri V.Rangarao
- Abstract summary: This paper uses the Johnson-Lindenstrauss (JL) Lemma to select the dimensionality of a transformed space in which a discriminative dictionary can be learned for signal classification.
Unlike state-of-the-art dimensionality reduction-based dictionary learning methods, a projection transformation matrix derived in a single step from M-SPCA provides maximum feature-label consistency.
Experimentation on OCR and face recognition datasets shows relatively better classification performance than other supervised dictionary learning algorithms.
- Score: 0.6138671548064356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To overcome difficulties in classifying high-dimensional data with a
large number of classes, we propose a novel approach called JLSPCADL. This
paper uses the Johnson-Lindenstrauss (JL) Lemma to select the dimensionality of
a transformed space in which a discriminative dictionary can be learned for
signal classification. Rather than reducing dimensionality via random
projections, as is often done with JL, we use a projection transformation
matrix derived from Modified Supervised PC Analysis (M-SPCA) with the
JL-prescribed dimension.
JLSPCADL provides a heuristic to deduce suitable distortion levels and the
corresponding Suitable Description Length (SDL) of dictionary atoms, thereby
deriving an optimal feature space for better classification.
classification. Unlike state-of-the-art dimensionality reduction-based
dictionary learning methods, a projection transformation matrix derived in a
single step from M-SPCA provides maximum feature-label consistency of the
transformed space while preserving the cluster structure of the original data.
Even in the presence of confusing class pairs, the dictionary learned for the
transformed space generates discriminative sparse coefficients from fewer
training samples.
Experimentation demonstrates that JLSPCADL scales well with an increasing
number of classes and dimensionality. The improved feature-label consistency
due to M-SPCA leads to better classification. Further, the complexity of
training a discriminative dictionary is significantly reduced by using SDL.
Experimentation on OCR and face recognition datasets shows relatively better
classification performance than other supervised dictionary learning
algorithms.
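As a minimal illustration of the dimension-selection step only (not of the M-SPCA projection itself), the JL lemma's standard bound k >= 4 ln(n) / (eps^2/2 - eps^3/3) can be computed directly. The Gaussian projection below is textbook JL, shown for contrast; JLSPCADL replaces it with a transformation matrix derived from M-SPCA.

```python
import math
import numpy as np

def jl_min_dim(n_samples: int, eps: float) -> int:
    """Smallest embedding dimension k guaranteed by the JL lemma so that all
    pairwise distances among n_samples points are preserved within a factor
    of (1 +/- eps): k >= 4 ln(n) / (eps^2/2 - eps^3/3)."""
    denom = eps ** 2 / 2.0 - eps ** 3 / 3.0
    return math.ceil(4.0 * math.log(n_samples) / denom)

# Illustrative Gaussian random projection to the JL-prescribed dimension.
# (JLSPCADL instead derives its projection matrix from M-SPCA; only the
# choice of k is shared with this sketch.)
rng = np.random.default_rng(0)
n, d = 500, 2000
k = jl_min_dim(n, eps=0.25)            # JL-prescribed target dimension
X = rng.standard_normal((n, d))
P = rng.standard_normal((d, k)) / math.sqrt(k)
X_low = X @ P                          # n points embedded in k dimensions
```

Note that the JL bound depends only on the number of points and the distortion, not on the original dimensionality d.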
Related papers
- Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
arXiv Detail & Related papers (2023-07-02T13:59:47Z)
- Sharp-SSL: Selective high-dimensional axis-aligned random projections for semi-supervised learning [16.673022545571566]
We propose a new method for high-dimensional semi-supervised learning problems.
It is based on the careful aggregation of the results of a low-dimensional procedure applied to many axis-aligned random projections of the data.
arXiv Detail & Related papers (2023-04-18T17:49:02Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
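For reference, the $l_{2,p}$ norm here is typically defined as the $p$-norm of the vector of row-wise $l_2$ norms, $\|W\|_{2,p} = (\sum_i \|w_i\|_2^p)^{1/p}$; with $p \le 1$ it drives whole rows (features) to zero. A small sketch (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def l2p_norm(W: np.ndarray, p: float) -> float:
    """l_{2,p} norm of a matrix: the p-norm of the vector of row-wise l2
    norms, ||W||_{2,p} = (sum_i ||w_i||_2^p)^(1/p)."""
    row_norms = np.linalg.norm(W, axis=1)
    return float(np.sum(row_norms ** p) ** (1.0 / p))

W = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # row norm 0 -> feature effectively dropped
              [1.0, 0.0]])  # row norm 1
print(l2p_norm(W, p=1.0))   # (5 + 0 + 1) = 6.0
```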
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Random Matrix Based Extended Target Tracking with Orientation: A New Model and Inference [0.0]
We propose a novel extended target tracking algorithm which is capable of representing the extent of dynamic objects as an ellipsoid with a time-varying orientation angle.
A diagonal positive semi-definite matrix is defined to model objects' extent within the random matrix framework.
It is not possible to find a closed-form analytical expression for the true posterior because of the absence of conjugacy.
arXiv Detail & Related papers (2020-10-17T16:33:06Z)
- Tangent Space Based Alternating Projections for Nonnegative Low Rank Matrix Approximation [22.96292865984433]
In the nonnegative low rank matrix approximation method, the projection onto the manifold of fixed rank matrices can be expensive as the singular value decomposition is required.
We propose to use the tangent space of the point in the manifold to approximate the projection onto the manifold in order to reduce the computational cost.
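A minimal sketch of the alternating-projections idea, using the full truncated SVD for the rank projection (i.e., the expensive step the paper approximates via the tangent space); names are illustrative, not the paper's implementation:

```python
import numpy as np

def nonneg_low_rank(A: np.ndarray, r: int, n_iter: int = 50) -> np.ndarray:
    """Alternate between projecting onto the manifold of rank-<=r matrices
    (truncated SVD, the costly step) and onto the nonnegative orthant
    (entrywise clipping)."""
    X = A.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r, :]   # rank-r projection
        X = np.maximum(X, 0.0)               # nonnegativity projection
    return X

rng = np.random.default_rng(1)
A = rng.random((20, 15))           # nonnegative data matrix
X_hat = nonneg_low_rank(A, r=3)    # nonnegative, approximately rank-3
```

Each iteration costs a full SVD; replacing that projection with one onto the tangent space of the current iterate is exactly the cost reduction the paper proposes.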
arXiv Detail & Related papers (2020-09-02T05:25:16Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
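A minimal SRHT sketch, with a hand-rolled fast Walsh-Hadamard transform so the example stays dependency-free (function and variable names are illustrative):

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Unnormalized fast Walsh-Hadamard transform along axis 0 (length 2^m)."""
    x = x.copy()
    n, h = x.shape[0], 1
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

def srht(X: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Subsampled Randomized Hadamard Transform: sqrt(n/k) * S @ H @ D @ X,
    with D random sign flips, H the orthonormal Hadamard matrix, and S a
    uniform subsample of k rows. Requires the row count n to be 2^m."""
    n = X.shape[0]
    assert n & (n - 1) == 0, "n must be a power of 2"
    D = rng.choice([-1.0, 1.0], size=n)          # random sign flips
    HDX = fwht(D[:, None] * X) / np.sqrt(n)      # orthonormal H applied to DX
    rows = rng.choice(n, size=k, replace=False)  # uniform row subsampling
    return np.sqrt(n / k) * HDX[rows]

rng = np.random.default_rng(0)
X = rng.standard_normal((128, 10))
Y = srht(X, k=32, rng=rng)   # 32 x 10 sketch of a 128 x 10 matrix
```

The sign-flipping step D spreads out any concentrated rows before subsampling, which is what makes uniform row sampling of HDX safe.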
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
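The ensemble idea can be sketched with a simplified stand-in: a nearest-class-mean rule (a linear discriminant under an identity covariance assumption) in each random projection, combined by majority vote. This is an illustration of the general scheme, not the estimator analyzed in the paper:

```python
import numpy as np

def rp_discriminant_ensemble(X_train, y_train, X_test,
                             n_proj=25, dim=5, seed=0):
    """Classify in many random low-dimensional projections with a
    nearest-class-mean linear discriminant, then majority-vote."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y_train)
    votes = np.zeros((X_test.shape[0], classes.size), dtype=int)
    for _ in range(n_proj):
        R = rng.standard_normal((X_train.shape[1], dim)) / np.sqrt(dim)
        Ztr, Zte = X_train @ R, X_test @ R          # project both sets
        means = np.stack([Ztr[y_train == c].mean(axis=0) for c in classes])
        dist = ((Zte[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        votes[np.arange(X_test.shape[0]), dist.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]

# Two well-separated Gaussian classes in 20 dimensions.
rng = np.random.default_rng(42)
X0 = rng.standard_normal((60, 20))
X1 = rng.standard_normal((60, 20)) + 3.0
X_train = np.vstack([X0[:50], X1[:50]])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([X0[50:], X1[50:]])
y_test = np.array([0] * 10 + [1] * 10)
pred = rp_discriminant_ensemble(X_train, y_train, X_test)
acc = (pred == y_test).mean()
```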
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
- Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is noticed to converge much faster than recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z)
- Improved Subsampled Randomized Hadamard Transform for Linear SVM [18.52747917850984]
We propose importance sampling and deterministic top-$r$ sampling to produce effective low-dimensional embedding instead of uniform sampling SRHT.
Our experimental results have demonstrated that our proposed methods can achieve higher classification accuracies than SRHT and other random projection methods on six real-life datasets.
arXiv Detail & Related papers (2020-02-05T04:09:23Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.