Dictionary Learning with Convex Update (ROMD)
- URL: http://arxiv.org/abs/2110.06641v1
- Date: Wed, 13 Oct 2021 11:14:38 GMT
- Title: Dictionary Learning with Convex Update (ROMD)
- Authors: Cheng Cheng and Wei Dai
- Abstract summary: We propose a new type of dictionary learning algorithm called ROMD.
ROMD updates the whole dictionary at a time using convex programming.
The advantages hence include both convergence guarantees for the dictionary update and faster convergence of the whole dictionary learning.
- Score: 6.367823813868024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dictionary learning aims to find a dictionary under which the training data
can be sparsely represented, and it is usually achieved by iteratively applying
two stages: sparse coding and dictionary update. Typical methods for dictionary
update focus on refining both dictionary atoms and their corresponding sparse
coefficients using the sparsity patterns obtained from the sparse coding stage,
and hence the update is a non-convex bilinear inverse problem. In this paper, we
propose a Rank-One Matrix Decomposition (ROMD) algorithm to recast this
challenge into a convex problem by resolving these two variables into a set of
rank-one matrices. Different from methods in the literature, ROMD updates the
whole dictionary at a time using convex programming. The advantages hence
include both convergence guarantees for dictionary update and faster
convergence of the whole dictionary learning. The performance of ROMD is
compared with that of other benchmark dictionary learning algorithms. The results
show that ROMD improves recovery accuracy, especially in cases of high
sparsity levels and fewer observations.
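To make the reformulation concrete, below is a minimal, hedged sketch of a convex whole-dictionary update in the spirit the abstract describes: the pair (dictionary, coefficients) is resolved into per-atom matrices M_k = d_k x_k^T, the rank-one requirement is relaxed to a nuclear-norm penalty, and the sparsity pattern from the coding stage fixes which columns of each M_k may be nonzero. This is an illustrative reconstruction from the abstract, not the authors' exact ROMD program; the function name, the penalty weight `lam`, and the use of cvxpy are all assumptions.

```python
import cvxpy as cp
import numpy as np

def convex_dictionary_update(Y, supports, lam=1.0):
    """One whole-dictionary update (hedged sketch, not the authors' code).

    Y: (m, N) training data; supports[k]: indices of the signals (columns
    of Y) whose sparse code uses atom k, taken from the sparse coding stage.
    """
    m, N = Y.shape
    K = len(supports)
    Ms = [cp.Variable((m, N)) for _ in range(K)]   # M_k ~ d_k x_k^T
    constraints = []
    for k, omega in enumerate(supports):
        off = np.setdiff1d(np.arange(N), omega)    # columns not using atom k
        if off.size:
            constraints.append(Ms[k][:, off] == 0) # enforce sparsity pattern
    # Data fidelity plus a nuclear-norm surrogate for "each M_k is rank one";
    # the whole update is a single convex program over all atoms at once.
    objective = cp.Minimize(
        cp.sum_squares(Y - sum(Ms)) + lam * sum(cp.normNuc(M) for M in Ms)
    )
    cp.Problem(objective, constraints).solve()
    # Read off atom d_k and coefficient row x_k from each M_k's leading
    # singular pair (M_k is approximately rank one at the optimum).
    D = np.zeros((m, K))
    X = np.zeros((K, N))
    for k, M in enumerate(Ms):
        U, s, Vt = np.linalg.svd(M.value, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k] = s[0] * Vt[0]
    return D, X
```

Alternating this update with a standard sparse coding stage (e.g., OMP, which supplies the column supports) gives the full two-stage learning loop described above.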
Related papers
- Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic
Interpretability: A Case Study on Othello-GPT [59.245414547751636]
We propose a circuit discovery framework alternative to activation patching.
Our framework suffers less from out-of-distribution issues and proves to be more efficient in terms of complexity.
We dig into a small transformer trained on a synthetic task named Othello and find a number of human-understandable fine-grained circuits inside it.
arXiv Detail & Related papers (2024-02-19T15:04:53Z) - Bayesian sparsity and class sparsity priors for dictionary learning and
coding [0.0]
We propose a workflow to facilitate the dictionary matching process.
In this article, we propose a new Bayesian data-driven group sparsity coding method to help identify subdictionaries that are not relevant for dictionary matching.
The effectiveness of compensating for the dictionary compression error and of using the novel group sparsity promotion to deflate the original dictionary is illustrated.
arXiv Detail & Related papers (2023-09-02T17:54:23Z) - Convergence of alternating minimisation algorithms for dictionary
learning [4.5687771576879594]
We derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning.
We show that given a well-behaved initialisation that is either within distance at most $1/\log(K)$ of the generating dictionary or has a special structure ensuring that each element of the initialisation points to only one generating element, both algorithms converge at a geometric rate to the generating dictionary.
arXiv Detail & Related papers (2023-04-04T12:58:47Z) - Hierarchical Phrase-based Sequence-to-Sequence Learning [94.10257313923478]
We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.
Our approach trains two models: a discriminative parser based on a bracketing grammar whose derivation tree hierarchically aligns source and target phrases, and a neural seq2seq model that learns to translate the aligned phrases one-by-one.
arXiv Detail & Related papers (2022-11-15T05:22:40Z) - Efficient CNN with uncorrelated Bag of Features pooling [98.78384185493624]
Bag of Features (BoF) has been recently proposed to reduce the complexity of convolution layers.
We propose an approach that builds on top of BoF pooling to boost its efficiency by ensuring that the items of the learned dictionary are non-redundant.
The proposed strategy yields an efficient variant of BoF and further boosts its performance, without any additional parameters.
arXiv Detail & Related papers (2022-09-22T09:00:30Z) - Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based applications in image processing, such as image denoising and inpainting (a minimal K-SVD atom-update sketch appears after this list).
arXiv Detail & Related papers (2021-11-17T10:45:10Z) - Dictionary Learning Using Rank-One Atomic Decomposition (ROAD) [6.367823813868024]
Dictionary learning aims at seeking a dictionary under which the training data can be sparsely represented.
ROAD outperforms other benchmark algorithms for both synthetic data and real data.
arXiv Detail & Related papers (2021-10-25T10:29:52Z) - Exact Sparse Orthogonal Dictionary Learning [8.577876545575828]
We find that our method can achieve better denoising results than learning methods based on over-complete dictionaries.
Our method has the additional advantage of high efficiency.
arXiv Detail & Related papers (2021-03-14T07:51:32Z) - Accelerating Text Mining Using Domain-Specific Stop Word Lists [57.76576681191192]
We present a novel approach, called the hyperplane-based approach, for the automatic extraction of domain-specific words.
The hyperplane-based approach can significantly reduce text dimensionality by eliminating irrelevant features.
Results indicate that the hyperplane-based approach can reduce the dimensionality of the corpus by 90% and outperforms mutual information.
arXiv Detail & Related papers (2020-11-18T17:42:32Z) - When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning
and Coding Network for Image Recognition with Limited Data [74.75557280245643]
We present a new Deep Dictionary Learning and Coding Network (DDLCN) for image recognition tasks with limited data.
We empirically compare DDLCN with several leading dictionary learning methods and deep learning models.
Experimental results on five popular datasets show that DDLCN achieves competitive results compared with state-of-the-art methods when the training data is limited.
arXiv Detail & Related papers (2020-05-21T23:12:10Z) - Pairwise Supervised Hashing with Bernoulli Variational Auto-Encoder and
Self-Control Gradient Estimator [62.26981903551382]
Variational auto-encoders (VAEs) with binary latent variables provide state-of-the-art performance in terms of precision for document retrieval.
We propose a pairwise loss function with discrete latent VAE to reward within-class similarity and between-class dissimilarity for supervised hashing.
This new semantic hashing framework achieves superior performance compared to the state of the art.
arXiv Detail & Related papers (2020-05-21T06:11:33Z)
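As a companion to the MOD/K-SVD entry above ("Discriminative Dictionary Learning based on Statistical Methods"), here is a minimal sketch of the classical K-SVD atom update that entry references. This is the textbook algorithm, not code from that paper; the array shapes and helper name are assumptions.

```python
import numpy as np

def ksvd_atom_update(Y, D, X, k):
    """Refit atom k of D and its nonzero coefficients in X via a rank-one SVD.

    Y: (m, N) data; D: (m, K) dictionary; X: (K, N) sparse codes.
    """
    omega = np.flatnonzero(X[k])              # signals whose code uses atom k
    if omega.size == 0:
        return D, X                           # unused atom: nothing to refit
    X[k, omega] = 0.0                         # strip atom k's contribution
    E = Y[:, omega] - D @ X[:, omega]         # residual on atom k's support
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                         # best rank-one fit of residual
    X[k, omega] = s[0] * Vt[0]
    return D, X
```

Sweeping k over all atoms after each sparse coding pass gives one K-SVD iteration; unlike ROMD's whole-dictionary convex update, atoms here are refitted one at a time.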