Discriminative Dictionary Learning based on Statistical Methods
- URL: http://arxiv.org/abs/2111.09027v1
- Date: Wed, 17 Nov 2021 10:45:10 GMT
- Title: Discriminative Dictionary Learning based on Statistical Methods
- Authors: G. Madhuri, Atul Negi
- Abstract summary: Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been used successfully in reconstruction-based applications in image processing, such as image denoising and inpainting.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sparse Representation (SR) of signals or data has a well-founded theory with
rigorous mathematical error bounds and proofs. The SR of a signal is given by a
superposition of very few columns of a matrix called a Dictionary, implicitly
reducing dimensionality. Training dictionaries such that they represent each
class of signals with minimal loss is called Dictionary Learning (DL).
Dictionary learning methods like the Method of Optimal Directions (MOD) and K-SVD
have been used successfully in reconstruction-based applications in image
processing, such as image denoising and inpainting. Other dictionary
learning algorithms, such as Discriminative K-SVD and Label Consistent K-SVD, are
supervised learning methods based on K-SVD. In our experience, one of the
drawbacks of current methods is that their classification performance is not
impressive on datasets such as the Telugu OCR datasets, which have a large number
of classes and high dimensionality. There is scope for improvement in this
direction, and many researchers have used statistical methods to design
dictionaries for classification. This chapter presents a review of statistical
techniques and their application to learning discriminative dictionaries. The
objective of the methods described here is to improve classification using sparse
representation. The chapter also describes a hybrid approach, in which sparse
coefficients of the input data are generated and a simple three-layer Multi-Layer
Perceptron, trained with back-propagation, is used as a classifier with those
sparse codes as input. The results are quite comparable with those of other
computation-intensive methods.
Keywords: Statistical modeling, Dictionary Learning, Discriminative
Dictionary, Sparse representation, Gaussian prior, Cauchy prior, Entropy,
Hidden Markov model, Hybrid Dictionary Learning
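As a concrete illustration of the hybrid approach described in the abstract, the snippet below learns a dictionary, sparse-codes the data with OMP, and trains a small MLP on those codes. This is a minimal sketch under assumed settings: scikit-learn, the bundled digits dataset as a stand-in for the Telugu OCR data, and illustrative dictionary size and sparsity; it is not the chapter's actual configuration.

```python
# Minimal sketch: sparse codes from a learned dictionary fed to a 3-layer MLP.
# Dataset, dictionary size, and sparsity are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # stand-in for an OCR dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn a dictionary D and sparse-code each sample: x ~ D a with few nonzeros.
dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=10,
                                   random_state=0)
A_train = dico.fit(X_train).transform(X_train)  # sparse codes for training data
A_test = dico.transform(X_test)

# Three-layer MLP (input, one hidden layer, output) trained by back-propagation,
# with the sparse codes as input features.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(A_train, y_train)
print("test accuracy:", clf.score(A_test, y_test))
```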
Related papers
- A Lightweight Randomized Nonlinear Dictionary Learning Method using Random Vector Functional Link [0.6138671548064356]
This paper presents an SVD-free, lightweight approach to learning a nonlinear dictionary using a randomized functional link called a Random Vector Functional Link (RVFL); a generic sketch of the RVFL idea appears after this list.
The proposed RVFL-based nonlinear Dictionary Learning (RVFLDL) learns a dictionary as a sparse-to-dense feature map from nonlinear sparse coefficients to the dense input features.
Empirical evidence from image classification and reconstruction applications shows that RVFLDL is scalable and provides better solutions than other nonlinear dictionary learning methods.
arXiv Detail & Related papers (2024-02-06T09:24:53Z)
- Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment [53.2701026843921]
Large-scale pre-trained Vision Language Models (VLMs) have proven effective for zero-shot classification.
In this paper, we aim at a more challenging setting, Realistic Zero-Shot Classification, which assumes no annotation but instead a broad vocabulary.
We propose the Self Structural Semantic Alignment (S3A) framework, which extracts structural semantic information from unlabeled data while simultaneously self-learning.
arXiv Detail & Related papers (2023-08-24T17:56:46Z)
- Convergence of alternating minimisation algorithms for dictionary learning [4.5687771576879594]
We derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning.
We show that given a well-behaved initialisation that is either within distance at most $1/\log(K)$ to the generating dictionary or has a special structure ensuring that each element of the initialisation only points to one generating element, both algorithms will converge with geometric convergence rate to the generating dictionary.
arXiv Detail & Related papers (2023-04-04T12:58:47Z)
- Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled examples.
We show that NPC-LV outperforms supervised methods on all three datasets for image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- Lexically Aware Semi-Supervised Learning for OCR Post-Correction [90.54336622024299]
Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents.
Previous work has demonstrated the utility of neural post-correction methods on recognition of less-well-resourced languages.
We present a semi-supervised learning method that makes it possible to utilize raw images to improve performance.
arXiv Detail & Related papers (2021-11-04T04:39:02Z)
- Dash: Semi-Supervised Learning with Dynamic Thresholding [72.74339790209531]
We propose a semi-supervised learning (SSL) approach that trains models on unlabeled examples selected by a dynamically adjusted threshold.
Our proposed approach, Dash, is adaptive in how it selects unlabeled data.
arXiv Detail & Related papers (2021-09-01T23:52:29Z)
- PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation [4.081440927534577]
This paper offers the first theoretical proof for such empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method; a generic unfolded-ISTA sketch appears after this list.
We highlight the minimization impact of loss, unfolding, and backpropagation on convergence.
We complement our findings through synthetic and image denoising experiments.
arXiv Detail & Related papers (2021-05-31T18:49:58Z)
- Exact Sparse Orthogonal Dictionary Learning [8.577876545575828]
We find that our method can produce better denoising results than methods based on over-complete dictionaries.
Our method has the additional advantage of high efficiency.
arXiv Detail & Related papers (2021-03-14T07:51:32Z)
- DLDL: Dynamic Label Dictionary Learning via Hypergraph Regularization [17.34373273007931]
We propose a Dynamic Label Dictionary Learning (DLDL) algorithm to generate the soft label matrix for unlabeled data.
Specifically, we employ hypergraph manifold regularization to keep the relations among original data, transformed data, and soft labels consistent.
arXiv Detail & Related papers (2020-10-23T14:07:07Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
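For two of the related papers above, the underlying idea is concrete enough to sketch. First, the RVFL construction behind the RVFLDL entry: a single hidden layer with fixed random weights plus a direct link, where only the output weights are solved in closed form by ridge regression, so no SVD is needed. Everything here (sizes, the tanh nonlinearity, the toy data) is an illustrative assumption, not the paper's code.

```python
# Hypothetical RVFL-style map from sparse codes Z to dense features X.
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(Z, X, n_hidden=256, lam=1e-2):
    """Fit a map from sparse codes Z (n x k) to dense features X (n x d)."""
    W = rng.normal(size=(Z.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.hstack([np.tanh(Z @ W + b), Z])       # random expansion + RVFL direct link
    # Closed-form ridge regression for the output weights (no SVD required)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ X)
    return W, b, beta

# Toy usage: Z plays the role of sparse coefficients, X of dense input features.
Z = rng.normal(size=(100, 32)) * (rng.random(size=(100, 32)) < 0.2)
X = rng.normal(size=(100, 64))
W, b, beta = rvfl_fit(Z, X)
X_hat = np.hstack([np.tanh(Z @ W + b), Z]) @ beta  # reconstruct dense features
```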
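Second, the unfolding idea behind the PUDLE entry: run a fixed number of ISTA sparse-coding iterations, then back-propagate the reconstruction loss through them to update the dictionary. This is a generic unfolded-ISTA sketch in PyTorch under assumed sizes, step count, and l1 weight; it is not the paper's algorithm or code.

```python
# Generic unfolded-ISTA dictionary learning; all hyperparameters are assumptions.
import torch

torch.manual_seed(0)
n, d, k, T, lam = 200, 32, 64, 10, 0.1
Y = torch.randn(n, d)                       # toy signals, one per row
D = torch.randn(d, k, requires_grad=True)   # dictionary, learned by backprop

opt = torch.optim.Adam([D], lr=1e-2)
for epoch in range(100):
    # ISTA step size 1/L, with L the squared spectral norm of D (held fixed per epoch)
    step = 1.0 / (torch.linalg.matrix_norm(D, ord=2).detach() ** 2)
    X = torch.zeros(n, k)                   # sparse codes, initialised at zero
    for _ in range(T):                      # T unfolded ISTA iterations
        X = X + step * (Y - X @ D.T) @ D    # gradient step on 0.5 * ||Y - X D^T||^2
        X = torch.sign(X) * torch.clamp(X.abs() - step * lam, min=0.0)  # soft threshold
    loss = 0.5 * ((Y - X @ D.T) ** 2).sum() / n
    opt.zero_grad()
    loss.backward()                         # gradients flow through the unrolled loop
    opt.step()
```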