Online multidimensional dictionary learning
- URL: http://arxiv.org/abs/2503.09337v1
- Date: Wed, 12 Mar 2025 12:31:29 GMT
- Title: Online multidimensional dictionary learning
- Authors: Ferdaous Ait Addi, Abdeslem Hafid Bentbib, Khalide Jbilou
- Abstract summary: We propose a generalization of the dictionary learning technique using the t-product framework. We address the dictionary learning problem through online methods suitable for tensor structures.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dictionary learning is a widely used technique in signal processing and machine learning that aims to represent data as a linear combination of a few elements from an overcomplete dictionary. In this work, we propose a generalization of the dictionary learning technique using the t-product framework, enabling efficient handling of multidimensional tensor data. We address the dictionary learning problem through online methods suitable for tensor structures. To effectively address the sparsity problem, we utilize an accelerated Iterative Shrinkage-Thresholding Algorithm (ISTA) enhanced with an extrapolation technique known as Anderson acceleration. This approach significantly improves signal reconstruction results. Extensive experiments prove that our proposed method outperforms existing acceleration techniques, particularly in applications such as data completion. These results suggest that our approach can be highly beneficial for large-scale tensor data analysis in various domains.
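The sparsity step described in the abstract is based on ISTA, which alternates a gradient step on the least-squares term with soft-thresholding. A minimal matrix-case sketch is shown below; the paper itself works with tensors under the t-product and adds Anderson acceleration, neither of which is reproduced here, so this is only an illustration of the basic iteration.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage toward zero)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    """Solve min_x 0.5 * ||D x - y||_2^2 + lam * ||x||_1 by ISTA.

    D: (m, k) dictionary, y: (m,) signal, lam: sparsity weight.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

With a small `lam`, the iteration recovers a sparse code whose reconstruction `D @ x` is close to `y`; the accelerated variants in the paper reach the same fixed point in far fewer iterations.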
Related papers
- Recognition of Geometrical Shapes by Dictionary Learning [49.30082271910632]
We present a first approach to make dictionary learning work for shape recognition.
The choice of the underlying optimization method has a significant impact on recognition quality.
Experimental results confirm that dictionary learning may be an interesting method for shape recognition tasks.
arXiv Detail & Related papers (2025-04-15T08:05:16Z) - Revisiting Nearest Neighbor for Tabular Data: A Deep Tabular Baseline Two Decades Later [76.66498833720411]
We introduce a differentiable version of $K$-nearest neighbors (KNN) originally designed to learn a linear projection to capture semantic similarities between instances. Surprisingly, our implementation of NCA using SGD and without dimensionality reduction already achieves decent performance on tabular data. We conclude our paper by analyzing the factors behind these improvements, including loss functions, prediction strategies, and deep architectures.
arXiv Detail & Related papers (2024-07-03T16:38:57Z) - An Analysis of BPE Vocabulary Trimming in Neural Machine Translation [56.383793805299234]
Vocabulary trimming is a postprocessing step that replaces rare subwords with their component subwords.
We show that vocabulary trimming fails to improve performance and is even prone to incurring heavy degradation.
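Mechanically, trimming walks each token back through the BPE merge table when its corpus frequency falls below a threshold. A hypothetical sketch of that step (the function name and merge-table format are illustrative, not the paper's implementation):

```python
def trim_tokens(tokens, freq, parents, min_freq):
    """Replace rare subwords with their component subwords.

    parents maps a merged subword to the pair that produced it,
    e.g. {"low": ("lo", "w")}; tokens whose frequency in freq falls
    below min_freq are decomposed recursively into components.
    """
    out = []
    for tok in tokens:
        if freq.get(tok, 0) >= min_freq or tok not in parents:
            out.append(tok)                  # frequent enough, or a base symbol
        else:
            out.extend(trim_tokens(list(parents[tok]), freq, parents, min_freq))
    return out
```

The paper's finding is that this shrinkage of the effective vocabulary does not help translation quality and can hurt it badly.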
arXiv Detail & Related papers (2024-03-30T15:29:49Z) - Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution [62.71425232332837]
We show that training amortized models with noisy labels is inexpensive and surprisingly effective.
This approach significantly accelerates several feature attribution and data valuation methods, often yielding an order of magnitude speedup over existing approaches.
arXiv Detail & Related papers (2024-01-29T03:42:37Z) - Explainable Trajectory Representation through Dictionary Learning [7.567576186354494]
Trajectory representation learning on a network enhances our understanding of vehicular traffic patterns.
Existing approaches using classic machine learning or deep learning embed trajectories as dense vectors, which lack interpretability.
This paper proposes an explainable trajectory representation learning framework through dictionary learning.
arXiv Detail & Related papers (2023-12-13T10:59:54Z) - Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm to learn online the optimal source placement in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
arXiv Detail & Related papers (2023-07-07T15:03:42Z) - An Efficient Approximate Method for Online Convolutional Dictionary Learning [32.90534837348151]
We present a novel approximate OCDL method that incorporates sparse decomposition of the training samples.
The proposed method substantially reduces computational costs while preserving the effectiveness of the state-of-the-art OCDL algorithms.
arXiv Detail & Related papers (2023-01-25T13:40:18Z) - Discriminative Dictionary Learning based on Statistical Methods [0.0]
Sparse Representation (SR) of signals or data has a well-founded theory with rigorous mathematical error bounds and proofs.
Training dictionaries such that they represent each class of signals with minimal loss is called Dictionary Learning (DL).
MOD and K-SVD have been successfully used in reconstruction-based applications in image processing, such as image denoising and inpainting.
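Of the two classic algorithms named above, MOD has the simpler dictionary update: a single least-squares fit of the dictionary to the current sparse codes, followed by column normalization. A minimal sketch of that update (K-SVD's per-atom SVD update is not shown):

```python
import numpy as np

def mod_update(Y, X):
    """MOD dictionary update: D = argmin_D ||Y - D X||_F^2 = Y X^+.

    Y: (m, n) data matrix, X: (k, n) sparse codes.
    Columns are renormalized to unit length, as is conventional.
    """
    D = Y @ np.linalg.pinv(X)                            # least-squares fit
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12  # unit-norm atoms
    return D
```

A full MOD iteration alternates this update with a sparse-coding step (e.g. OMP or ISTA) that recomputes `X` for the new dictionary.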
arXiv Detail & Related papers (2021-11-17T10:45:10Z) - Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
arXiv Detail & Related papers (2021-09-09T12:32:28Z) - PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation [4.081440927534577]
This paper offers the first theoretical proof for empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method.
We highlight the minimization impact of loss, unfolding, and backpropagation on convergence.
We complement our findings through synthetic and image denoising experiments.
arXiv Detail & Related papers (2021-05-31T18:49:58Z) - Online Orthogonal Dictionary Learning Based on Frank-Wolfe Method [3.198144010381572]
Dictionary learning is a widely used unsupervised learning method in signal processing and machine learning.
The proposed scheme includes a novel problem formulation and an efficient online algorithm design with convergence analysis.
Experiments with synthetic data and real-world sensor readings demonstrate the effectiveness and efficiency of the proposed scheme.
arXiv Detail & Related papers (2021-03-02T05:49:23Z) - Accelerating Text Mining Using Domain-Specific Stop Word Lists [57.76576681191192]
We present a novel hyperplane-based approach for the automatic extraction of domain-specific stop words.
The hyperplane-based approach can significantly reduce text dimensionality by eliminating irrelevant features.
Results indicate that the hyperplane-based approach can reduce the dimensionality of the corpus by 90% and outperforms mutual information.
arXiv Detail & Related papers (2020-11-18T17:42:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.