Learning Deep Analysis Dictionaries for Image Super-Resolution
- URL: http://arxiv.org/abs/2001.12010v2
- Date: Tue, 10 Nov 2020 06:37:37 GMT
- Title: Learning Deep Analysis Dictionaries for Image Super-Resolution
- Authors: Jun-Jie Huang and Pier Luigi Dragotti
- Abstract summary: Deep Analysis dictionary Model (DeepAM) is optimized to address a specific regression task known as single image super-resolution.
Our architecture contains L layers of analysis dictionary and soft-thresholding operators.
DeepAM is learned using both supervised and unsupervised setups.
- Score: 38.7315182732103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the recent success of deep neural networks and the recent efforts
to develop multi-layer dictionary models, we propose a Deep Analysis dictionary
Model (DeepAM) which is optimized to address a specific regression task known
as single image super-resolution. Contrary to other multi-layer dictionary
models, our architecture contains L layers of analysis dictionary and
soft-thresholding operators to gradually extract high-level features and a
layer of synthesis dictionary which is designed to optimize the regression task
at hand. In our approach, each analysis dictionary is partitioned into two
sub-dictionaries: an Information Preserving Analysis Dictionary (IPAD) and a
Clustering Analysis Dictionary (CAD). The IPAD together with the corresponding
soft-thresholds is designed to pass the key information from the previous layer
to the next layer, while the CAD together with the corresponding
soft-thresholding operator is designed to produce a sparse feature
representation of its input data that facilitates discrimination of key
features. DeepAM is learned using both supervised and unsupervised setups. Simulation
results show that the proposed deep analysis dictionary model achieves better
performance compared to a deep neural network that has the same structure and
is optimized using back-propagation when training datasets are small.
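To make the layered structure concrete, here is a minimal NumPy sketch of the forward pass the abstract describes: L analysis dictionary and soft-thresholding pairs followed by a single synthesis dictionary. The function names, shapes, and random parameters below are illustrative assumptions only; they are not the learned IPAD/CAD dictionaries or thresholds from the paper, whose construction is the subject of the supervised/unsupervised learning procedure.

```python
import numpy as np

def soft_threshold(x, lam):
    # Element-wise soft-thresholding: sign(x) * max(|x| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def deepam_forward(y, analysis_dicts, thresholds, D_syn):
    """Conceptual DeepAM forward pass (illustrative sketch only).

    y              -- input vector, e.g. a vectorised low-resolution patch
    analysis_dicts -- list of L analysis dictionaries; in DeepAM each is
                      row-partitioned into an IPAD part (information
                      preserving) and a CAD part (clustering)
    thresholds     -- list of L per-atom threshold vectors
    D_syn          -- synthesis dictionary solving the regression task
    """
    z = y
    for Omega, lam in zip(analysis_dicts, thresholds):
        z = soft_threshold(Omega @ z, lam)   # one analysis + soft-thresholding layer
    return D_syn @ z                         # final synthesis step (HR patch estimate)

# Tiny usage example with random (untrained) matrices.
rng = np.random.default_rng(0)
dims = [16, 32, 48, 64]                                        # L = 3 layers
dicts = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
thrs = [0.1 * np.ones(d) for d in dims[1:]]
D_syn = rng.standard_normal((16, dims[-1]))
x_hat = deepam_forward(rng.standard_normal(dims[0]), dicts, thrs, D_syn)
print(x_hat.shape)  # (16,)
```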
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
arXiv Detail & Related papers (2024-09-25T14:12:50Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- A variational autoencoder-based nonnegative matrix factorisation model for deep dictionary learning [13.796655751448288]
Construction of dictionaries using nonnegative matrix factorisation (NMF) has extensive applications in signal processing and machine learning.
We propose a probabilistic generative model which employs a variational autoencoder (VAE) to perform nonnegative dictionary learning.
arXiv Detail & Related papers (2023-01-18T02:36:03Z)
- A Unified Understanding of Deep NLP Models for Text Classification [88.35418976241057]
We have developed a visual analysis tool, DeepNLPVis, to enable a unified understanding of NLP models for text classification.
The key idea is a mutual information-based measure, which provides quantitative explanations on how each layer of a model maintains the information of input words in a sample.
A multi-level visualization, which consists of a corpus-level, a sample-level, and a word-level visualization, supports the analysis from the overall training set to individual samples.
arXiv Detail & Related papers (2022-06-19T08:55:07Z)
- Better Language Model with Hypernym Class Prediction [101.8517004687825]
Class-based language models (LMs) have been long devised to address context sparsity in $n$-gram LMs.
In this study, we revisit this approach in the context of neural LMs.
arXiv Detail & Related papers (2022-03-21T01:16:44Z)
- X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing [51.81533991497547]
Task-oriented compositional semantic parsing (TCSP) handles complex nested user queries.
We present X2Parser, a transferable Cross-lingual and Cross-domain Parser for TCSP.
We propose to predict flattened intents and slots representations separately and cast both prediction tasks into sequence labeling problems.
arXiv Detail & Related papers (2021-06-07T16:40:05Z)
- Locality Constrained Analysis Dictionary Learning via K-SVD Algorithm [6.162666237389167]
We propose a novel locality constrained analysis dictionary learning model with a synthesis K-SVD algorithm (SK-LADL).
It considers intrinsic geometric properties by imposing graph regularization to uncover the geometric structure for the image data.
Through the learned analysis dictionary, we transform the image to a new and compact space where the manifold assumption can be further guaranteed.
arXiv Detail & Related papers (2021-04-29T05:58:34Z)
- Deep Semantic Dictionary Learning for Multi-label Image Classification [3.3989824361632337]
We present an innovative approach that treats multi-label image classification as a dictionary learning task.
A novel end-to-end model named Deep Semantic Dictionary Learning (DSDL) is designed.
Our codes and models have been released.
arXiv Detail & Related papers (2020-12-23T06:22:47Z)
- Keyphrase Extraction with Dynamic Graph Convolutional Networks and Diversified Inference [50.768682650658384]
Keyphrase extraction (KE) aims to summarize a set of phrases that accurately express a concept or a topic covered in a given document.
The recent Sequence-to-Sequence (Seq2Seq) based generative framework is widely used for the KE task and has obtained competitive performance on various benchmarks.
In this paper, we propose to adopt the Dynamic Graph Convolutional Networks (DGCN) to solve the above two problems simultaneously.
arXiv Detail & Related papers (2020-10-24T08:11:23Z)
- Learning Deep Analysis Dictionaries -- Part II: Convolutional Dictionaries [38.7315182732103]
We introduce a Deep Convolutional Analysis Dictionary Model (DeepCAM) by learning convolutional dictionaries instead of unstructured dictionaries.
An L-layer DeepCAM consists of L layers of convolutional analysis dictionary and element-wise soft-thresholding pairs.
We demonstrate that DeepCAM is an effective multilayer convolutional model and, on single image super-resolution, achieves performance comparable with other methods.
arXiv Detail & Related papers (2020-01-31T19:02:10Z)
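The Part II paper above replaces the unstructured dictionaries with convolutional ones. As a rough companion to the earlier sketch, one convolutional analysis dictionary and element-wise soft-thresholding pair could be written as below; the filter shapes, threshold values, and SciPy-based convolution are assumptions for illustration, not DeepCAM's actual learned dictionaries or training procedure.

```python
import numpy as np
from scipy.signal import convolve2d

def soft_threshold(x, lam):
    # Element-wise soft-thresholding, as in the non-convolutional sketch above.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def conv_analysis_layer(feature_maps, filters, lam):
    """One convolutional analysis dictionary + soft-thresholding pair.

    feature_maps -- list of 2-D arrays (input channels)
    filters      -- array of shape (out_channels, in_channels, k, k)
    lam          -- per-output-channel thresholds, shape (out_channels,)
    """
    outputs = []
    for o in range(filters.shape[0]):
        # Each output channel is a sum of 2-D convolutions over the input channels.
        acc = sum(convolve2d(fm, filters[o, i], mode="same")
                  for i, fm in enumerate(feature_maps))
        outputs.append(soft_threshold(acc, lam[o]))
    return outputs  # stacking L such layers gives an L-layer DeepCAM-style model

# Usage: a single-channel 32x32 input passed through one 8-filter layer.
rng = np.random.default_rng(0)
x = [rng.standard_normal((32, 32))]
filters = rng.standard_normal((8, 1, 3, 3))
y = conv_analysis_layer(x, filters, 0.1 * np.ones(8))
print(len(y), y[0].shape)  # 8 (32, 32)
```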