Exact Sparse Orthogonal Dictionary Learning
- URL: http://arxiv.org/abs/2103.09085v1
- Date: Sun, 14 Mar 2021 07:51:32 GMT
- Title: Exact Sparse Orthogonal Dictionary Learning
- Authors: Kai Liu, Yongjian Zhao, Hua Wang
- Abstract summary: We find that our method yields better denoising results than over-complete dictionary-based learning methods.
Our method has the additional advantage of high computational efficiency.
- Score: 8.577876545575828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decade, learning a dictionary from input images for sparse
modeling has been one of the topics receiving the most research attention in
image processing and compressed sensing. Most existing dictionary learning
methods consider an over-complete dictionary, such as the K-SVD method, which
may result in high mutual coherence and therefore have a negative impact on
recognition. On the other hand, the sparse codes are usually optimized by
adding an $\ell_0$- or $\ell_1$-norm penalty, but with no strict sparsity
guarantee. In this paper, we propose an orthogonal dictionary learning model
which obtains strictly sparse codes and an orthogonal dictionary with a global
sequence convergence guarantee. We find that our method yields better
denoising results than over-complete dictionary-based learning methods, and has
the additional advantage of high computational efficiency.
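As intuition for how such a model can be optimized, one standard recipe alternates an exact $\ell_0$ projection of the codes (closed-form when the dictionary is orthogonal) with an orthogonal Procrustes update of the dictionary. The NumPy sketch below illustrates that recipe under toy dimensions; it is an assumed illustration of the model class, not the authors' algorithm, and the function name and sparsity parameter `s` are hypothetical.

```python
import numpy as np

def orthogonal_dictionary_learning(X, s, n_iter=50, seed=0):
    """Illustrative alternating scheme for an orthogonal dictionary.

    X: (d, n) data matrix; s: exact number of non-zeros per code.
    Code step: hard-threshold D^T X to its s largest-magnitude entries
    per column -- the exact l0-constrained solution when D is orthogonal.
    Dictionary step: orthogonal Procrustes, D = U V^T from the SVD of
    X C^T, the closed-form minimiser of ||X - D C||_F over orthogonal D.
    """
    d, _ = X.shape
    rng = np.random.default_rng(seed)
    D, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal init
    for _ in range(n_iter):
        C = D.T @ X
        # Zero all but the s largest-magnitude entries in each column.
        idx = np.argsort(np.abs(C), axis=0)[:-s, :]
        np.put_along_axis(C, idx, 0.0, axis=0)
        # Procrustes update keeps D exactly orthogonal at every iteration.
        U, _, Vt = np.linalg.svd(X @ C.T)
        D = U @ Vt
    return D, C

X = np.random.default_rng(1).standard_normal((16, 200))
D, C = orthogonal_dictionary_learning(X, s=3)
print(np.allclose(D.T @ D, np.eye(16), atol=1e-8))  # True: D stays orthogonal
print(np.count_nonzero(C, axis=0).max())            # 3: codes are strictly sparse
```

Note the contrast with over-complete methods such as K-SVD: here the code step is exact rather than a penalized approximation, which is what makes a strict sparsity guarantee possible.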
Related papers
- An Analysis of BPE Vocabulary Trimming in Neural Machine Translation [56.383793805299234]
Vocabulary trimming is a postprocessing step that replaces rare subwords with their component subwords.
We show that vocabulary trimming fails to improve performance and is even prone to incurring heavy degradation.
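As an illustration of the mechanism, trimming can be pictured as deleting merged subwords whose corpus frequency falls below a threshold and re-expanding them into the pair of subwords that originally merged into them. The toy structures below (`freqs`, `merges`, the threshold) are hypothetical, not the interface of any particular BPE toolkit.

```python
# Hypothetical sketch of BPE vocabulary trimming.

def trim_vocabulary(freqs, merges, threshold):
    """freqs: subword -> corpus count; merges: merged subword -> (left, right).
    Returns the set of subwords to drop from the vocabulary."""
    return {tok for tok, count in freqs.items()
            if count < threshold and tok in merges}

def retokenize(tokens, trimmed, merges):
    """Recursively replace trimmed subwords with their component subwords."""
    out = []
    for tok in tokens:
        if tok in trimmed:
            left, right = merges[tok]
            out.extend(retokenize([left, right], trimmed, merges))
        else:
            out.append(tok)
    return out

merges = {"low": ("lo", "w"), "lo": ("l", "o")}
freqs = {"low": 2, "lo": 50, "w": 80, "l": 90, "o": 95}
trimmed = trim_vocabulary(freqs, merges, threshold=10)
print(retokenize(["low", "er"], trimmed, merges))  # ['lo', 'w', 'er']
```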
arXiv Detail & Related papers (2024-03-30T15:29:49Z)
- Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT [59.245414547751636]
We propose a circuit discovery framework alternative to activation patching.
Our framework suffers less from out-of-distribution and proves to be more efficient in terms of complexity.
We dig into a small transformer trained on a synthetic task named Othello and find a number of human-understandable fine-grained circuits inside it.
arXiv Detail & Related papers (2024-02-19T15:04:53Z)
- Bayesian sparsity and class sparsity priors for dictionary learning and coding [0.0]
We propose a workflow to facilitate the dictionary matching process.
In this article, we propose a new Bayesian data-driven group sparsity coding method to help identify subdictionaries that are not relevant to the dictionary matching.
The effectiveness of compensating for the dictionary compression error and of using the novel group sparsity promotion to deflate the original dictionary is illustrated.
arXiv Detail & Related papers (2023-09-02T17:54:23Z)
- Convergence of alternating minimisation algorithms for dictionary learning [4.5687771576879594]
We derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning.
We show that, given a well-behaved initialisation that is either within distance at most $1/\log(K)$ of the generating dictionary or has a special structure ensuring that each element of the initialisation points to only one generating element, both algorithms converge at a geometric rate to the generating dictionary.
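The first condition can be pictured as a per-atom distance check against the generating dictionary. The sketch below is illustrative only: sign alignment stands in for the formal distance used in the paper, and the perturbation level is chosen so the check passes.

```python
import numpy as np

def well_behaved(init, generating):
    """Toy version of the initialisation condition: every atom of the
    initialisation must lie within 1/log(K) of the corresponding
    generating atom, after aligning signs column by column."""
    K = generating.shape[1]
    signs = np.sign(np.sum(init * generating, axis=0))  # per-column alignment
    dists = np.linalg.norm(init * signs - generating, axis=0)
    return bool(np.all(dists <= 1.0 / np.log(K)))

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm generating dictionary
Psi = Phi + 0.002 * rng.standard_normal(Phi.shape)
Psi /= np.linalg.norm(Psi, axis=0)              # small perturbation as init
print(well_behaved(Psi, Phi))                   # True: within 1/log(128) ~ 0.21
```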
arXiv Detail & Related papers (2023-04-04T12:58:47Z)
- Efficient CNN with uncorrelated Bag of Features pooling [98.78384185493624]
Bag of Features (BoF) pooling has recently been proposed to reduce the complexity of convolutional layers.
We propose an approach that builds on top of BoF pooling to boost its efficiency by ensuring that the items of the learned dictionary are non-redundant.
The proposed strategy yields an efficient variant of BoF and further boosts its performance, without any additional parameters.
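For intuition, BoF pooling soft-assigns each local feature vector to a learned codebook and averages the assignments into a fixed-size histogram, regardless of spatial resolution. One plausible way to make the codewords non-redundant, sketched below as an assumption rather than the paper's exact formulation, is to penalise the deviation of their Gram matrix from the identity.

```python
import numpy as np

def bof_pool(features, codebook, sigma=1.0):
    """Soft-assignment Bag-of-Features pooling.
    features: (n, d) local feature vectors; codebook: (k, d) codewords.
    Returns a (k,) histogram whose size is independent of n."""
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    logits = -d2 / (2.0 * sigma ** 2)
    logits -= logits.max(axis=1, keepdims=True)          # numerically stable softmax
    memberships = np.exp(logits)
    memberships /= memberships.sum(axis=1, keepdims=True)
    return memberships.mean(axis=0)

def redundancy_penalty(codebook):
    """Hypothetical decorrelation term: push the codeword Gram matrix
    toward the identity so no two codewords duplicate each other."""
    C = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    G = C @ C.T
    return float(np.sum((G - np.eye(len(C))) ** 2))

feats = np.random.default_rng(0).standard_normal((49, 32))  # e.g. a 7x7 feature map
codes = np.random.default_rng(1).standard_normal((8, 32))
print(bof_pool(feats, codes).shape)   # (8,)
print(redundancy_penalty(codes))      # scalar to be minimised during training
```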
arXiv Detail & Related papers (2022-09-22T09:00:30Z)
- Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based applications in image processing, such as image denoising and inpainting.
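For context, the MOD dictionary update has a closed least-squares form; the sketch below pairs it with a crude thresholding coder standing in for the usual OMP step, so it illustrates the method family rather than reproducing K-SVD.

```python
import numpy as np

def mod_step(X, D, s):
    """One MOD (Method of Optimal Directions) iteration.
    X: (d, n) signals; D: (d, K) dictionary; s: non-zeros per code."""
    # Crude sparse coding: keep the s largest correlations per signal
    # (a simple thresholding stand-in for OMP).
    C = D.T @ X
    idx = np.argsort(np.abs(C), axis=0)[:-s, :]
    np.put_along_axis(C, idx, 0.0, axis=0)
    # Closed-form MOD update: D = X C^T (C C^T)^{-1}, then renormalise atoms.
    D = X @ C.T @ np.linalg.pinv(C @ C.T)
    return D / np.linalg.norm(D, axis=0, keepdims=True)

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 500))
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    D = mod_step(X, D, s=4)          # alternate coding and dictionary update
```

K-SVD differs in its update step: it revises one atom at a time via a rank-one SVD of the residual instead of solving for the whole dictionary at once.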
arXiv Detail & Related papers (2021-11-17T10:45:10Z)
- Lexically Aware Semi-Supervised Learning for OCR Post-Correction [90.54336622024299]
Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents.
Previous work has demonstrated the utility of neural post-correction methods on recognition of less-well-resourced languages.
We present a semi-supervised learning method that makes it possible to utilize raw images to improve performance.
arXiv Detail & Related papers (2021-11-04T04:39:02Z)
- Dictionary Learning with Convex Update (ROMD) [6.367823813868024]
We propose a new type of dictionary learning algorithm called ROMD.
ROMD updates the whole dictionary at a time using convex matrices.
The advantages hence include both guarantees for the dictionary update and faster convergence of the whole dictionary learning procedure.
arXiv Detail & Related papers (2021-10-13T11:14:38Z)
- PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation [4.081440927534577]
This paper offers the first theoretical proof for empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method.
We highlight the impact of loss, unfolding, and backpropagation on convergence.
We complement our findings through synthetic and image denoising experiments.
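Unfolding here means fixing a number of ISTA iterations and treating the dictionary as a parameter trained by backpropagating the reconstruction loss through those iterations. The NumPy sketch below shows only the forward pass under assumed toy dimensions; it is not PUDLE's implementation, and in practice an autograd framework would differentiate through the loop.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unfolded_ista(x, D, lam, n_layers=20):
    """Forward pass of an unfolded l1 sparse coder: n_layers fixed ISTA
    steps, each playing the role of one network layer sharing D."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        z = soft_threshold(z - (D.T @ (D @ z - x)) / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
x = D @ (rng.standard_normal(60) * (rng.random(60) < 0.1))  # sparse ground truth
z = unfolded_ista(x, D, lam=0.1)
print(np.count_nonzero(z))  # the recovered code is sparse
```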
arXiv Detail & Related papers (2021-05-31T18:49:58Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning and Coding Network for Image Recognition with Limited Data [74.75557280245643]
We present a new Deep Dictionary Learning and Coding Network (DDLCN) for image recognition tasks with limited data.
We empirically compare DDLCN with several leading dictionary learning methods and deep learning models.
Experimental results on five popular datasets show that DDLCN achieves competitive results compared with state-of-the-art methods when the training data is limited.
arXiv Detail & Related papers (2020-05-21T23:12:10Z)