Efficient Sparse Coding using Hierarchical Riemannian Pursuit
- URL: http://arxiv.org/abs/2104.10314v1
- Date: Wed, 21 Apr 2021 02:16:44 GMT
- Title: Efficient Sparse Coding using Hierarchical Riemannian Pursuit
- Authors: Ye Xue, Vincent Lau, and Songfu Cai
- Abstract summary: Sparse coding is a class of unsupervised methods for learning a sparse representation of the input data in the form of a linear combination of a dictionary and a sparse code.
We propose an efficient two-stage optimization scheme for sparse coding tasks with a complete dictionary.
- Score: 2.4087148947930634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse coding is a class of unsupervised methods for learning a sparse
representation of the input data in the form of a linear combination of a
dictionary and a sparse code. This learning framework has led to
state-of-the-art results in various image and video processing tasks. However,
classical methods learn the dictionary and the sparse code by alternating
optimization, usually without theoretical guarantees for either optimality or
convergence due to non-convexity of the problem. Recent works on sparse coding
with a complete dictionary provide strong theoretical guarantees thanks to the
development of non-convex optimization. However, initial non-convex
approaches learn the dictionary in the sparse coding problem sequentially in an
atom-by-atom manner, which leads to a long execution time. More recent works
seek to directly learn the entire dictionary at once, which substantially
reduces the execution time. However, the associated recovery performance is
degraded with a finite number of data samples. In this paper, we propose an
efficient sparse coding scheme with a two-stage optimization. The proposed
scheme leverages the global and local Riemannian geometry of the two-stage
optimization problem and facilitates fast implementation for superb dictionary
recovery performance with a finite number of samples, without atom-by-atom
calculation. We further prove that, with high probability, the proposed scheme
can exactly recover any atom in the target dictionary with a finite number of
samples if it is adopted to recover one atom of the dictionary. An application
on wireless sensor data compression is also proposed. Experiments on both
synthetic and real-world data verify the efficiency and effectiveness of the
proposed scheme.
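The two-stage Hierarchical Riemannian Pursuit itself is only described at a high level in the abstract, so the snippet below is a hedged, generic sketch of the kind of spherical (Riemannian) optimization such complete-dictionary schemes build on: recovering a single atom of an orthogonal dictionary by minimizing a smooth sparsity surrogate of q^T Y over the unit sphere. The function name `recover_one_atom`, the log-cosh surrogate, and all step-size and iteration choices are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def recover_one_atom(Y, mu=1e-2, step=0.1, iters=1000, seed=0):
    """Hedged sketch (not the paper's HRP): Riemannian gradient descent on the
    unit sphere for f(q) = mean(mu * log cosh((q^T Y) / mu)), a smooth surrogate
    of ||q^T Y||_1. For Y = D X with an orthogonal dictionary D and a sparse
    code X, minimizers of f are known from prior work to align with atoms of D."""
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)                      # start on the unit sphere
    for _ in range(iters):
        z = q @ Y                               # candidate sparse row, shape (p,)
        egrad = (Y @ np.tanh(z / mu)) / p       # Euclidean gradient of f
        rgrad = egrad - (q @ egrad) * q         # project onto the sphere's tangent space
        q = q - step * rgrad                    # Riemannian gradient step
        q /= np.linalg.norm(q)                  # retract back onto the sphere
    return q

# Toy usage: orthogonal dictionary, Bernoulli-Gaussian sparse code.
rng = np.random.default_rng(1)
n, p = 20, 5000
D, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = rng.standard_normal((n, p)) * (rng.random((n, p)) < 0.1)
q = recover_one_atom(D @ X)
print("max |<q, atom>| =", np.max(np.abs(D.T @ q)))   # close to 1 when an atom is recovered
```

A complete scheme would repeat this with deflation or multiple restarts to collect all atoms and then solve for the sparse code given the recovered dictionary; the paper's contribution is doing this efficiently with finite-sample recovery guarantees, which the sketch does not attempt to reproduce.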
Related papers
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Simple Alternating Minimization Provably Solves Complete Dictionary Learning [13.056764072568749]
This paper focuses on the complete dictionary learning problem, where the goal is to reparametrize a set of given signals as linear combinations of atoms from a learned dictionary.
There are two main challenges faced by theoretical and practical dictionary learning: the lack of theoretical guarantees for practically-used algorithms, and poor scalability when dealing with huge-scale datasets.
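The summary above does not spell out the algorithm; as a hedged illustration of what a simple alternating scheme for complete (here, orthogonal) dictionary learning can look like, the sketch below alternates hard thresholding of the code with an orthogonal Procrustes update of the dictionary. The threshold `theta`, the random orthogonal initialization, and the function name are assumptions of this sketch, not necessarily the cited paper's method.

```python
import numpy as np

def alt_min_complete_dictionary(Y, theta=0.3, iters=50, seed=0):
    """Hedged sketch of generic alternating minimization for Y ~= D X with a
    complete (square, here orthogonal) dictionary D and a sparse code X.
    Not necessarily the cited paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    D, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal initialization
    X = np.zeros_like(Y)
    for _ in range(iters):
        # Code update: project the data onto the current dictionary, keep large entries.
        X = D.T @ Y
        X[np.abs(X) < theta] = 0.0                      # hard thresholding
        # Dictionary update: orthogonal Procrustes step, argmin_D ||Y - D X||_F s.t. D^T D = I.
        U, _, Vt = np.linalg.svd(Y @ X.T)
        D = U @ Vt
    return D, X
```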
arXiv Detail & Related papers (2022-10-23T18:30:45Z)
- Asymmetric Scalable Cross-modal Hashing [51.309905690367835]
Cross-modal hashing is a successful method for solving the large-scale multimedia retrieval problem.
We propose a novel Asymmetric Scalable Cross-Modal Hashing (ASCMH) to address these issues.
Our ASCMH outperforms the state-of-the-art cross-modal hashing methods in terms of accuracy and efficiency.
arXiv Detail & Related papers (2022-07-26T04:38:47Z)
- Dictionary Learning Using Rank-One Atomic Decomposition (ROAD) [6.367823813868024]
Dictionary learning aims at seeking a dictionary under which the training data can be sparsely represented.
ROAD outperforms other benchmark algorithms on both synthetic and real data.
arXiv Detail & Related papers (2021-10-25T10:29:52Z)
- Dictionary Learning with Convex Update (ROMD) [6.367823813868024]
We propose a new type of dictionary learning algorithm called ROMD.
ROMD updates the whole dictionary at once by solving a convex matrix problem.
The advantages hence include both guarantees for the dictionary update and faster overall dictionary learning.
arXiv Detail & Related papers (2021-10-13T11:14:38Z)
- Highly Parallel Autoregressive Entity Linking with Discriminative Correction [51.947280241185]
We propose a very efficient approach that parallelizes autoregressive linking across all potential mentions.
Our model is >70 times faster and more accurate than the previous generative method.
arXiv Detail & Related papers (2021-09-08T17:28:26Z)
- Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching those of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries for the MNIST and ImageNet datasets, respectively.
arXiv Detail & Related papers (2021-08-11T20:21:30Z)
- Convolutional Sparse Coding Fast Approximation with Application to Seismic Reflectivity Estimation [9.005280130480308]
We propose a sped-up version of the classic iterative thresholding algorithm that produces a good approximation of the convolutional sparse code within 2-5 iterations.
The performance of the proposed solution is demonstrated via the seismic inversion problem in both synthetic and real data scenarios.
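The cited paper's accelerated variant is not reproduced here; for context, a hedged sketch of the classic (slow) iterative soft-thresholding baseline for a one-filter, 1-D convolutional sparse coding problem is shown below. The filter `h`, the penalty `lam`, and the step-size bound are assumptions of the sketch.

```python
import numpy as np

def ista_conv_sparse_code(y, h, lam=0.1, iters=100):
    """Hedged sketch of classic ISTA for 1-D convolutional sparse coding:
    minimize 0.5 * ||y - h * x||^2 + lam * ||x||_1 over the code x, where '*'
    denotes same-size convolution with a known filter h. This is the generic
    baseline, not the cited paper's accelerated scheme."""
    # ||h||_1^2 upper-bounds the squared operator norm of the convolution,
    # so 1 / ||h||_1^2 is a safe gradient step size.
    step = 1.0 / (np.sum(np.abs(h)) ** 2)
    x = np.zeros_like(y, dtype=float)
    for _ in range(iters):
        r = np.convolve(x, h, mode="same") - y                     # residual
        grad = np.correlate(r, h, mode="same")                     # adjoint (correlation); boundary handling simplified
        x = x - step * grad                                        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft thresholding
    return x
```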
arXiv Detail & Related papers (2021-06-29T12:19:07Z)
- Dictionary Learning with Low-rank Coding Coefficients for Tensor Completion [33.068635237011236]
Our model learns a data-adaptive dictionary from the given observations.
In the completion process, we minimize the low-rankness of each tensor slice containing the coding coefficients.
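The cited model's details are not given in the summary; as a hedged illustration, "minimizing the low-rankness" of coefficient slices typically translates into a per-slice singular value thresholding (proximal) step like the generic sketch below, where the threshold `tau` and the frontal-slice convention are assumptions.

```python
import numpy as np

def svt_slices(C, tau):
    """Hedged sketch: apply singular value thresholding (the proximal operator of
    tau times the nuclear norm) to each frontal slice C[:, :, k] of a 3-way tensor
    of coding coefficients. A generic low-rank-promoting step, not the cited
    paper's full tensor completion algorithm."""
    out = np.empty_like(C, dtype=float)
    for k in range(C.shape[2]):
        U, s, Vt = np.linalg.svd(C[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values toward zero
    return out
```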
arXiv Detail & Related papers (2020-09-26T02:43:43Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
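The "nuclear Frobenius norm" regularizer itself is not spelled out in this summary and is not reproduced here; as a hedged illustration of how low-rank matrix completion can avoid singular value computations altogether, the sketch below uses the standard factored parameterization X = U V^T with Frobenius-norm regularization, a generic SVD-free approach rather than the cited paper's method.

```python
import numpy as np

def factored_completion(M, mask, rank=5, lam=0.1, step=0.01, iters=500, seed=0):
    """Hedged sketch: fill in a partially observed matrix M (observed entries
    flagged by the boolean array `mask`) via gradient descent on the factored
    objective 0.5 * ||mask * (U V^T - M)||_F^2 + 0.5 * lam * (||U||_F^2 + ||V||_F^2),
    which never computes singular values. Generic illustration only."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M)          # residual on observed entries only
        gU = R @ V + lam * U              # gradient with respect to U
        gV = R.T @ U + lam * V            # gradient with respect to V
        U -= step * gU
        V -= step * gV
    return U @ V.T
```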
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)