Learning Deep Analysis Dictionaries -- Part II: Convolutional
Dictionaries
- URL: http://arxiv.org/abs/2002.00022v1
- Date: Fri, 31 Jan 2020 19:02:10 GMT
- Title: Learning Deep Analysis Dictionaries -- Part II: Convolutional
Dictionaries
- Authors: Jun-Jie Huang and Pier Luigi Dragotti
- Abstract summary: We introduce a Deep Convolutional Analysis Dictionary Model (DeepCAM) by learning convolutional dictionaries instead of unstructured dictionaries.
An L-layer DeepCAM consists of L layers of convolutional analysis dictionary and element-wise soft-thresholding pairs.
We demonstrate that DeepCAM is an effective multilayer convolutional model and, on single image super-resolution, achieves performance comparable with other methods.
- Score: 38.7315182732103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a Deep Convolutional Analysis Dictionary Model (DeepCAM) by learning convolutional dictionaries instead of the unstructured dictionaries used in the deep analysis dictionary model introduced in the companion paper. Convolutional dictionaries are better suited to processing high-dimensional signals such as images and have only a small number of free parameters. By exploiting the properties of a convolutional dictionary, we present an efficient convolutional analysis dictionary learning approach. An L-layer DeepCAM consists of L layers of convolutional analysis dictionary and element-wise soft-thresholding pairs and a single layer of convolutional synthesis dictionary. Similar to DeepAM, each convolutional analysis dictionary is composed of a convolutional Information Preserving Analysis Dictionary (IPAD) and a convolutional Clustering Analysis Dictionary (CAD). The IPAD and the CAD are learned using variations of the proposed learning algorithm. We demonstrate that DeepCAM is an effective multilayer convolutional model and, on single image super-resolution, achieves performance comparable with other methods while also showing good generalization capabilities.
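As a rough sketch of the pipeline the abstract describes, the NumPy example below chains L layers of convolutional analysis filtering with element-wise soft-thresholding and finishes with a convolutional synthesis stage. The filter counts, kernel sizes, and threshold values are illustrative assumptions, not the learned IPAD/CAD dictionaries from the paper.

```python
# Minimal DeepCAM-style forward pass: L (analysis convolution, soft-threshold)
# pairs followed by one convolutional synthesis stage. All filters are random
# placeholders standing in for the learned dictionaries.
import numpy as np
from scipy.signal import convolve2d

def soft_threshold(x, lam):
    """Element-wise soft-thresholding: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def deepcam_forward(image, analysis_filters, thresholds, synthesis_filters):
    """Chain the L analysis/thresholding pairs, then sum the synthesis outputs."""
    feature_maps = [image]
    for filters, lam in zip(analysis_filters, thresholds):
        # Every current map is filtered by every atom of this layer's dictionary.
        feature_maps = [soft_threshold(convolve2d(f, h, mode="same"), lam)
                        for f in feature_maps for h in filters]
    # Synthesis: one filter per final feature map, contributions summed.
    return sum(convolve2d(f, g, mode="same")
               for f, g in zip(feature_maps, synthesis_filters))

rng = np.random.default_rng(0)
image = rng.standard_normal((16, 16))
analysis_filters = [[rng.standard_normal((3, 3)) for _ in range(2)]
                    for _ in range(2)]                    # L = 2, 2 atoms/layer
thresholds = [0.1, 0.1]
synthesis_filters = [rng.standard_normal((3, 3)) for _ in range(4)]  # 2*2 maps
print(deepcam_forward(image, analysis_filters, thresholds,
                      synthesis_filters).shape)           # -> (16, 16)
```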
Related papers
- Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic
Interpretability: A Case Study on Othello-GPT [59.245414547751636]
We propose a circuit discovery framework as an alternative to activation patching.
Our framework suffers less from out-of-distribution issues and is more efficient in terms of complexity.
We dig into a small transformer trained on a synthetic task named Othello and find a number of human-understandable fine-grained circuits inside it.
arXiv Detail & Related papers (2024-02-19T15:04:53Z) - Convergence of alternating minimisation algorithms for dictionary
learning [4.5687771576879594]
We derive sufficient conditions for the convergence of two popular alternating minimisation algorithms for dictionary learning.
We show that, given a well-behaved initialisation that is either within distance at most $1/\log(K)$ of the generating dictionary or has a special structure ensuring that each element of the initialisation points to only one generating element, both algorithms converge at a geometric rate to the generating dictionary.
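As a quick illustration of the alternating scheme analysed above, the sketch below alternates a crude hard-thresholding coding step with a least-squares (MOD-style) dictionary update. The sparsity level, matrix sizes, and step details are illustrative assumptions, not the exact algorithms studied in the paper.

```python
# Alternating minimisation for dictionary learning: fix D and sparse-code the
# signals, then fix the codes and update D by least squares.
import numpy as np

def sparse_code(D, Y, s):
    """Keep, per signal, the s coefficients with largest |D^T y| (a crude
    hard-thresholding stand-in for the coding step)."""
    A = D.T @ Y
    idx = np.argsort(-np.abs(A), axis=0)[s:]       # entries to zero out
    np.put_along_axis(A, idx, 0.0, axis=0)
    return A

def dictionary_update(Y, A):
    """MOD-style update: D = Y A^+ with re-normalised columns."""
    D = Y @ np.linalg.pinv(A)
    return D / np.linalg.norm(D, axis=0, keepdims=True)

rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 200))                 # 200 training signals
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0, keepdims=True)      # unit-norm initialisation
for _ in range(10):                                # alternate the two steps
    A = sparse_code(D, Y, s=3)
    D = dictionary_update(Y, A)
print(np.linalg.norm(Y - D @ A))                   # residual after 10 rounds
```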
arXiv Detail & Related papers (2023-04-04T12:58:47Z) - Learning Invariant Subspaces of Koopman Operators--Part 1: A Methodology
for Demonstrating a Dictionary's Approximate Subspace Invariance [0.0]
In a widely used algorithm, Extended Dynamic Mode Decomposition, the dictionary functions are drawn from a fixed class of functions.
Deep learning combined with EDMD has been used to learn novel dictionary functions in an algorithm called deep dynamic mode decomposition (deepDMD).
In this paper we analyze the learned dictionaries from deepDMD and explore the theoretical basis for their strong performance.
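For context, the sketch below runs plain EDMD with a small hand-picked dictionary of observables on a toy scalar map; deepDMD would instead learn these lifting functions. The dynamics and the dictionary are illustrative assumptions.

```python
# EDMD: lift snapshot pairs with a fixed dictionary of observables, then fit
# the Koopman matrix K by least squares, K = Psi_Y Psi_X^+.
import numpy as np

def lift(x):
    """Dictionary of observables [1, x, x^2] applied to a scalar state."""
    return np.array([np.ones_like(x), x, x**2])

# Snapshot pairs (x_k, x_{k+1}) from the toy nonlinear map x -> 0.9x + 0.1x^2.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, 500)
Y = 0.9 * X + 0.1 * X**2

Psi_X, Psi_Y = lift(X), lift(Y)
K = Psi_Y @ np.linalg.pinv(Psi_X)      # 3x3 Koopman approximation

# One-step prediction in lifted space; the state sits in the second observable.
x0 = np.array([0.5])
print(K @ lift(x0))                    # approximately lift(0.9*0.5 + 0.1*0.25)
```

Note that the span of {1, x, x^2} is only approximately invariant under this map (y^2 introduces cubic and quartic terms), which is exactly the kind of approximate subspace invariance the cited paper sets out to quantify.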
arXiv Detail & Related papers (2022-12-14T17:33:52Z) - Better Language Model with Hypernym Class Prediction [101.8517004687825]
Class-based language models (LMs) have long been devised to address context sparsity in $n$-gram LMs.
In this study, we revisit this approach in the context of neural LMs.
arXiv Detail & Related papers (2022-03-21T01:16:44Z) - Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been used successfully in reconstruction-based applications in image processing, such as image denoising and inpainting.
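As a hedged illustration of the K-SVD method mentioned above, the sketch below performs its per-atom update: each dictionary atom and its coefficient row are refreshed from a rank-1 SVD of the residual restricted to the signals that use that atom. The data shapes and the sparse codes are illustrative assumptions.

```python
# One sweep of K-SVD atom updates over a random dictionary and sparse codes.
import numpy as np

def ksvd_atom_update(D, A, Y, k):
    """Update atom k of D (a column) and row k of A in place."""
    users = np.nonzero(A[k])[0]            # signals whose code uses atom k
    if users.size == 0:
        return
    # Residual without atom k's contribution, restricted to those signals.
    E = Y[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                      # new unit-norm atom
    A[k, users] = s[0] * Vt[0]             # matching coefficients

rng = np.random.default_rng(3)
Y = rng.standard_normal((16, 100))
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0, keepdims=True)
A = rng.standard_normal((32, 100)) * (rng.random((32, 100)) < 0.1)  # sparse codes
for k in range(D.shape[1]):                # one sweep over all atoms
    ksvd_atom_update(D, A, Y, k)
print(np.linalg.norm(Y - D @ A))           # reconstruction residual
```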
arXiv Detail & Related papers (2021-11-17T10:45:10Z) - PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation [4.081440927534577]
This paper offers the first theoretical proof for empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method.
We highlight the impact of the loss function, unfolding, and backpropagation on convergence of the minimisation.
We complement our findings through synthetic and image denoising experiments.
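A minimal PyTorch sketch of the unfolding idea, assuming standard ISTA iterations: the sparse code is computed by T unrolled soft-thresholded gradient steps, and the reconstruction loss is backpropagated through them into the dictionary. The hyperparameters and data are illustrative, not PUDLE's exact construction.

```python
# Unfolded dictionary learning: unroll T ISTA steps, backprop into D.
import torch

torch.manual_seed(0)
n, m, T, lam = 16, 32, 10, 0.1
Y = torch.randn(n, 64)                       # training signals as columns
D = torch.randn(n, m)
D = (D / D.norm(dim=0, keepdim=True)).requires_grad_()  # unit-norm atoms

opt = torch.optim.SGD([D], lr=1e-2)
for step in range(100):
    # Safe ISTA step size from the current spectral norm of D.
    eta = 1.0 / torch.linalg.matrix_norm(D.detach(), 2).item() ** 2
    A = torch.zeros(m, Y.shape[1])
    for _ in range(T):                       # T unrolled ISTA iterations
        A = torch.nn.functional.softshrink(
            A - eta * D.t() @ (D @ A - Y), lam * eta)
    loss = 0.5 * ((D @ A - Y) ** 2).sum()    # reconstruction loss
    opt.zero_grad()
    loss.backward()                          # backprop through the unrolling
    opt.step()
print(loss.item())
```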
arXiv Detail & Related papers (2021-05-31T18:49:58Z) - When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning
and Coding Network for Image Recognition with Limited Data [74.75557280245643]
We present a new Deep Dictionary Learning and Coding Network (DDLCN) for image recognition tasks with limited data.
We empirically compare DDLCN with several leading dictionary learning methods and deep learning models.
Experimental results on five popular datasets show that DDLCN achieves competitive results compared with state-of-the-art methods when the training data is limited.
arXiv Detail & Related papers (2020-05-21T23:12:10Z) - Learning Deep Analysis Dictionaries for Image Super-Resolution [38.7315182732103]
A Deep Analysis dictionary Model (DeepAM) is optimized to address a specific regression task known as single image super-resolution.
Our architecture contains L layers of analysis dictionary and soft-thresholding operators.
DeepAM is learned using both supervised and unsupervised setups.
arXiv Detail & Related papers (2020-01-31T18:59:35Z) - Lexical Sememe Prediction using Dictionary Definitions by Capturing
Local Semantic Correspondence [94.79912471702782]
Sememes, defined as the minimum semantic units of human languages, have been proven useful in many NLP tasks.
We propose a Sememe Correspondence Pooling (SCorP) model, which is able to capture this kind of matching to predict sememes.
We evaluate our model and baseline methods on the well-known sememe knowledge base HowNet and find that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-01-16T17:30:36Z)