JL-lemma derived Optimal Projections for Discriminative Dictionary
Learning
- URL: http://arxiv.org/abs/2308.13991v2
- Date: Tue, 12 Sep 2023 09:22:04 GMT
- Title: JL-lemma derived Optimal Projections for Discriminative Dictionary
Learning
- Authors: G.Madhuri, Atul Negi, Kaluri V.Rangarao
- Abstract summary: This paper uses the Johnson-Lindenstrauss (JL) Lemma to select the dimensionality of a transformed space in which a discriminative dictionary can be learned for signal classification.
Unlike state-of-the-art dimensionality reduction-based dictionary learning methods, a projection transformation matrix derived in a single step from M-SPCA provides maximum feature-label consistency.
Experimentation on OCR and face recognition datasets shows relatively better classification performance than other supervised dictionary learning algorithms.
- Score: 0.6138671548064356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: To overcome difficulties in classifying high-dimensional data with a
large number of classes, we propose a novel approach called JLSPCADL. This
paper uses the Johnson-Lindenstrauss (JL) Lemma to select the dimensionality of
a transformed space in which a discriminative dictionary can be learned for
signal classification. Rather than reducing dimensionality via random
projections, as is often done with JL, we use a projection transformation
matrix derived from Modified Supervised PC Analysis (M-SPCA) with the
JL-prescribed dimension.
JLSPCADL provides a heuristic to deduce suitable distortion levels and the
corresponding Suitable Description Length (SDL) of dictionary atoms, yielding
an optimal feature space for better classification.
classification. Unlike state-of-the-art dimensionality reduction-based
dictionary learning methods, a projection transformation matrix derived in a
single step from M-SPCA provides maximum feature-label consistency of the
transformed space while preserving the cluster structure of the original data.
Even in the presence of confusing class pairs, the dictionary learned for the
transformed space generates discriminative sparse coefficients while requiring
fewer training samples.
Experimentation demonstrates that JLSPCADL scales well with an increasing
number of classes and dimensionality. The improved feature-label consistency
provided by M-SPCA leads to better classification. Further, using the SDL
significantly reduces the complexity of training a discriminative dictionary.
Experimentation on OCR and face recognition datasets shows relatively better
classification performance than other supervised dictionary learning
algorithms.
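The JL-prescribed dimension used above can be illustrated with the standard JL lemma bound (the same formula behind scikit-learn's `johnson_lindenstrauss_min_dim`). The sketch below is a minimal illustration, not the paper's M-SPCA procedure; the input dimensionality `d` and the Gaussian baseline projection are hypothetical assumptions for demonstration only.

```python
import numpy as np

def jl_min_dim(n_samples: int, eps: float) -> int:
    """Smallest target dimension k that the JL lemma guarantees will
    preserve all pairwise distances among n_samples points within a
    (1 +/- eps) factor: k >= 4 ln(n) / (eps^2/2 - eps^3/3)."""
    denom = eps ** 2 / 2.0 - eps ** 3 / 3.0
    return int(np.ceil(4.0 * np.log(n_samples) / denom))

# Example: 10,000 training signals, 10% allowed distortion.
k = jl_min_dim(10_000, eps=0.10)

# A Gaussian random projection into k dimensions -- the baseline that
# the paper replaces with an M-SPCA-derived projection of the same size.
rng = np.random.default_rng(0)
d = 4096                                  # hypothetical input dimensionality
P = rng.standard_normal((k, d)) / np.sqrt(k)
x = rng.standard_normal(d)
y = P @ x                                 # projected signal of length k
```

Note how the bound depends only on the number of samples and the tolerated distortion, not on the original dimensionality, which is what makes it a practical rule for choosing the transformed-space size.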
Related papers
- A Lightweight Randomized Nonlinear Dictionary Learning Method using Random Vector Functional Link [0.6138671548064356]
This paper presents an SVD-free lightweight approach to learning a nonlinear dictionary using a randomized functional link called a Random Vector Functional Link (RVFL).
The proposed RVFL-based nonlinear Dictionary Learning (RVFLDL) learns a dictionary as a sparse-to-dense feature map from nonlinear sparse coefficients to the dense input features.
Empirical evidence from image classification and reconstruction applications shows that RVFLDL is scalable and outperforms other nonlinear dictionary learning methods.
arXiv Detail & Related papers (2024-02-06T09:24:53Z) - An Efficient Approximate Method for Online Convolutional Dictionary
Learning [32.90534837348151]
We present a novel approximate OCDL method that incorporates sparse decomposition of the training samples.
The proposed method substantially reduces computational costs while preserving the effectiveness of the state-of-the-art OCDL algorithms.
arXiv Detail & Related papers (2023-01-25T13:40:18Z) - High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z) - Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based image processing applications such as denoising and inpainting.
arXiv Detail & Related papers (2021-11-17T10:45:10Z) - Weight Vector Tuning and Asymptotic Analysis of Binary Linear
Classifiers [82.5915112474988]
This paper proposes weight vector tuning of a generic binary linear classifier through the parameterization of a decomposition of the discriminant by a scalar.
It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise.
arXiv Detail & Related papers (2021-10-01T17:50:46Z) - Locality Constrained Analysis Dictionary Learning via K-SVD Algorithm [6.162666237389167]
We propose a novel locality constrained analysis dictionary learning model with a synthesis K-SVD algorithm (SK-LADL).
It considers intrinsic geometric properties by imposing graph regularization to uncover the geometric structure for the image data.
Through the learned analysis dictionary, we transform the image to a new and compact space where the manifold assumption can be further guaranteed.
arXiv Detail & Related papers (2021-04-29T05:58:34Z) - High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance
Model [101.74172837046382]
We propose a novel quadratic classification technique, the parameters of which are chosen such that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA for both synthetic and real data but also requires lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z) - Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is noticed to converge much faster than recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z) - Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies [60.285091454321055]
We design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix.
On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes.
arXiv Detail & Related papers (2020-03-18T13:07:51Z) - Learning Hybrid Representation by Robust Dictionary Learning in
Factorized Compressed Space [84.37923242430999]
We investigate the robust dictionary learning (DL) to discover the hybrid salient low-rank and sparse representation in a factorized compressed space.
A Joint Robust Factorization and Projective Dictionary Learning (J-RFDL) model is presented.
arXiv Detail & Related papers (2019-12-26T06:52:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.