An Efficient Approximate Method for Online Convolutional Dictionary Learning
- URL: http://arxiv.org/abs/2301.10583v1
- Date: Wed, 25 Jan 2023 13:40:18 GMT
- Title: An Efficient Approximate Method for Online Convolutional Dictionary Learning
- Authors: Farshad G. Veshki and Sergiy A. Vorobyov
- Abstract summary: We present a novel approximate OCDL method that incorporates sparse decomposition of the training samples.
The proposed method substantially reduces computational costs while preserving the effectiveness of the state-of-the-art OCDL algorithms.
- Score: 32.90534837348151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing convolutional dictionary learning (CDL) algorithms are based on
batch learning, where the dictionary filters and the convolutional sparse
representations are optimized in an alternating manner using a training
dataset. When large training datasets are used, batch CDL algorithms become
prohibitively memory-intensive. An online-learning technique is used to reduce
the memory requirements of CDL by optimizing the dictionary incrementally after
finding the sparse representations of each training sample. Nevertheless,
learning large dictionaries using the existing online CDL (OCDL) algorithms
remains highly computationally expensive. In this paper, we present a novel
approximate OCDL method that incorporates sparse decomposition of the training
samples. The resulting optimization problems are addressed using the
alternating direction method of multipliers. Extensive experimental evaluations
using several image datasets show that the proposed method substantially
reduces computational costs while preserving the effectiveness of the
state-of-the-art OCDL algorithms.
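For intuition, below is a minimal non-convolutional sketch of the online learning pattern the abstract describes: sparse-code one sample at a time, then update the dictionary from accumulated fixed-size statistics so memory does not grow with the dataset. It follows the classic Mairal-style online update with ISTA for sparse coding; it is not the paper's algorithm, which learns convolutional filters and solves its subproblems with ADMM, and all names here are mine.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise shrinkage operator used in ISTA-style sparse coding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_code(D, x, lam, n_iter=100):
    # ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1.
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + D.T @ (x - D @ a) / L, lam / L)
    return a

def online_dictionary_learning(samples, n_atoms, lam=0.1):
    # Online (one-sample-at-a-time) dictionary updates via accumulated
    # sufficient statistics A = sum a a^T, B = sum x a^T; only these
    # fixed-size matrices are stored, never the whole dataset's codes.
    dim = samples.shape[1]
    rng = np.random.default_rng(0)
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, n_atoms))
    B = np.zeros((dim, n_atoms))
    for x in samples:
        a = sparse_code(D, x, lam)
        A += np.outer(a, a)
        B += np.outer(x, a)
        # Block-coordinate update of each atom, then renormalize.
        for j in range(n_atoms):
            if A[j, j] > 1e-12:
                D[:, j] += (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] /= max(np.linalg.norm(D[:, j]), 1.0)
    return D
```

The key property is that only the fixed-size statistics A and B persist across samples, which is exactly the memory advantage that motivates OCDL over batch CDL.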
Related papers
- Sparse Dictionary Learning for Image Recovery by Iterative Shrinkage [0.1433758865948252]
We study the sparse coding problem in the context of sparse dictionary learning for image recovery.
We consider and compare several state-of-the-art sparse optimization methods constructed using the shrinkage operation.
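For reference, the shrinkage (soft-thresholding) operation these methods build on, written for the standard lasso-type sparse coding objective:

```latex
% Soft-thresholding (shrinkage) operator at level \lambda, applied elementwise:
\mathcal{S}_{\lambda}(v)_i = \operatorname{sign}(v_i)\,\max(|v_i| - \lambda,\, 0)
% One ISTA iteration for \min_a \tfrac{1}{2}\|x - Da\|_2^2 + \lambda\|a\|_1,
% with step size 1/L for any L \ge \|D\|_2^2:
a^{(k+1)} = \mathcal{S}_{\lambda/L}\!\left(a^{(k)} + \tfrac{1}{L}\, D^{\top}\big(x - D a^{(k)}\big)\right)
```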
arXiv Detail & Related papers (2025-03-13T13:45:37Z)
- Online multidimensional dictionary learning [0.0]
We propose a generalization of the dictionary learning technique using the t-product framework.
We address the dictionary learning problem through online methods suitable for tensor structures.
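As a sketch of the underlying algebra: the t-product of two third-order tensors reduces to slice-wise matrix products in the Fourier domain along the third mode. A minimal NumPy version (function name is mine):

```python
import numpy as np

def t_product(A, B):
    # t-product of tensors A (n1 x n2 x n3) and B (n2 x m x n3):
    # FFT along the third mode, frontal-slice matrix products, inverse FFT.
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.fft.ifft(Ch, axis=2).real  # real for real-valued inputs
```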
arXiv Detail & Related papers (2025-03-12T12:31:29Z)
- Learning to Retrieve Iteratively for In-Context Learning [56.40100968649039]
Iterative retrieval is a novel framework that empowers retrievers to make iterative decisions through policy optimization.
We instantiate an iterative retriever for composing in-context learning exemplars and apply it to various semantic parsing tasks.
By adding only 4M additional parameters for state encoding, we convert an off-the-shelf dense retriever into a stateful iterative retriever.
arXiv Detail & Related papers (2024-06-20T21:07:55Z)
- Multimodal Learned Sparse Retrieval with Probabilistic Expansion Control [66.78146440275093]
Learned sparse retrieval (LSR) is a family of neural methods that encode queries and documents into sparse lexical vectors.
We explore the application of LSR to the multi-modal domain, with a focus on text-image retrieval.
Current approaches like LexLIP and STAIR require complex multi-step training on massive datasets.
Our proposed approach efficiently transforms dense vectors from a frozen dense model into sparse lexical vectors.
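A toy sketch of the dense-to-sparse conversion idea, with a hypothetical vocabulary projection matrix standing in for whatever mapping the paper actually learns:

```python
import numpy as np

def dense_to_sparse_lexical(dense_vec, vocab_proj, top_k=64):
    # Illustrative only: project a dense embedding onto a vocabulary-sized
    # space, keep non-negative term weights, and retain the top-k terms.
    # vocab_proj is a hypothetical (vocab_size, dim) matrix, not the paper's.
    scores = np.maximum(vocab_proj @ dense_vec, 0.0)  # non-negative weights
    idx = np.argsort(scores)[-top_k:]                 # strongest terms
    sparse = np.zeros_like(scores)
    sparse[idx] = scores[idx]
    return sparse
```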
arXiv Detail & Related papers (2024-02-27T14:21:56Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with far fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm that learns the optimal source placement in large-scale networks online.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
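For readers unfamiliar with the bandit template, a generic UCB1 loop is sketched below; Grab-UCB replaces these per-arm statistics with a graph-kernel model of the network response, which this sketch does not attempt:

```python
import numpy as np

def ucb1(reward_fn, n_arms, horizon, c=2.0):
    # Generic UCB1: play each arm once, then pick the arm maximizing
    # empirical mean plus an exploration bonus that shrinks with plays.
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization round
        else:
            arm = int(np.argmax(means + np.sqrt(c * np.log(t) / counts)))
        r = reward_fn(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
    return means, counts
```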
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
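A hedged sketch of the general pattern this describes, coupling a stochastic (Bekas-style) diagonal estimator with matrix-free conjugate-gradient solves; the paper's exact inference scheme may differ, and the operator is assumed symmetric positive definite:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def estimate_inverse_diagonal(matvec, n, n_probes=50, rng=None):
    # diag(A^{-1}) ~= E[z * (A^{-1} z)] for Rademacher probes z, where
    # A^{-1} z is computed by CG, so A (and its inverse) is never formed.
    rng = rng or np.random.default_rng(0)
    A = LinearOperator((n, n), matvec=matvec)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        y, _ = cg(A, z)  # matrix-free solve of A y = z
        num += z * y
        den += z * z
    return num / den
```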
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries so that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based image processing applications such as denoising and inpainting.
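For concreteness, the textbook MOD dictionary update mentioned here is a closed-form least-squares step (the small ridge term and names below are mine):

```python
import numpy as np

def mod_update(X, A, eps=1e-8):
    # MOD (Method of Optimal Directions): minimize ||X - D A||_F^2 over D
    # for fixed sparse codes A, then renormalize atoms.
    # X: (dim, n_samples), A: (n_atoms, n_samples).
    D = X @ A.T @ np.linalg.inv(A @ A.T + eps * np.eye(A.shape[0]))
    norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D / norms
```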
arXiv Detail & Related papers (2021-11-17T10:45:10Z)
- Lexically Aware Semi-Supervised Learning for OCR Post-Correction [90.54336622024299]
Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents.
Previous work has demonstrated the utility of neural post-correction methods on recognition of less-well-resourced languages.
We present a semi-supervised learning method that makes it possible to utilize raw images to improve performance.
arXiv Detail & Related papers (2021-11-04T04:39:02Z)
- Dictionary Learning Using Rank-One Atomic Decomposition (ROAD) [6.367823813868024]
Dictionary learning aims at seeking a dictionary under which the training data can be sparsely represented.
ROAD outperforms other benchmark algorithms on both synthetic and real data.
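The rank-one decomposition the title refers to is the atom-wise split of the dictionary factorization:

```latex
% With dictionary D = [d_1, ..., d_K] and coefficient matrix A whose
% k-th row is a_k^T, the factorization splits into K rank-one terms:
X \approx D A = \sum_{k=1}^{K} d_k a_k^{\top}
```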
arXiv Detail & Related papers (2021-10-25T10:29:52Z)
- Dual Optimization for Kolmogorov Model Learning Using Enhanced Gradient Descent [8.714458129632158]
The Kolmogorov model (KM) is an interpretable and predictable representation approach to learning the underlying probabilistic structure of a set of random variables.
We propose a computationally scalable KM learning algorithm, based on the regularized dual optimization combined with enhanced gradient descent (GD) method.
It is shown that the accuracy of logical relation mining for interpretability using the proposed KM learning algorithm exceeds 80%.
arXiv Detail & Related papers (2021-07-11T10:33:02Z)
- FDDH: Fast Discriminative Discrete Hashing for Large-Scale Cross-Modal Retrieval [41.125141897096874]
Cross-modal hashing is favored for its effectiveness and efficiency.
Most existing methods do not sufficiently exploit the discriminative power of semantic information when learning the hash codes.
We propose Fast Discriminative Discrete Hashing (FDDH) approach for large-scale cross-modal retrieval.
arXiv Detail & Related papers (2021-05-15T03:53:48Z)
- Learning the Step-size Policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno Algorithm [3.7470451129384825]
We consider the problem of how to learn a step-size policy for the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm.
We propose a neural network architecture with local information of the current gradient as the input.
The step-size policy is learned from data of similar optimization problems, avoids additional evaluations of the objective function, and guarantees that the output step remains inside a predefined interval.
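An illustrative and entirely hypothetical version of such a policy: a small network maps local gradient statistics to a step size squashed into a predefined interval, so no extra objective evaluations are needed (features, parameters, and names below are mine, not the paper's architecture):

```python
import numpy as np

def step_size_policy(grad, W1, b1, w2, b2, lo=1e-4, hi=1.0):
    # Hypothetical learned parameters: W1 (h, 2), b1 (h,), w2 (h,), b2 scalar.
    feats = np.array([np.log1p(np.linalg.norm(grad)),  # gradient magnitude
                      np.mean(np.abs(grad))])          # average component size
    h = np.tanh(W1 @ feats + b1)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))  # sigmoid squashes to (0, 1)
    return lo + (hi - lo) * s                 # step guaranteed in [lo, hi]
```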
arXiv Detail & Related papers (2020-10-03T09:34:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.