Dictionary Learning with Low-rank Coding Coefficients for Tensor
Completion
- URL: http://arxiv.org/abs/2009.12507v2
- Date: Sun, 28 Feb 2021 09:36:33 GMT
- Title: Dictionary Learning with Low-rank Coding Coefficients for Tensor
Completion
- Authors: Tai-Xiang Jiang, Xi-Le Zhao, Hao Zhang, Michael K. Ng
- Abstract summary: Our model learns a data-adaptive dictionary from the given observations.
In the completion process, we minimize the low-rankness of each tensor slice containing the coding coefficients.
- Score: 33.068635237011236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel tensor learning and coding model for
third-order data completion. Our model learns a data-adaptive dictionary from the
given observations and determines the coding coefficients of third-order tensor
tubes. In the completion process, we minimize the low-rankness of each tensor
slice containing the coding coefficients. Compared with traditional pre-defined
transform bases, the advantages of the proposed model are that (i) the dictionary
is learned from the given observations, so the basis can be constructed more
adaptively and accurately, and (ii) the low-rankness of the coding coefficients
allows dictionary features to be combined more effectively. We also develop a
multi-block proximal alternating minimization algorithm for solving this tensor
learning and coding model, and show that the sequence generated by the algorithm
converges globally to a critical point. Extensive experimental results on real
data sets, including videos, hyperspectral images, and traffic data, demonstrate
these advantages and show that the proposed tensor learning and coding method
significantly outperforms other tensor completion methods on several evaluation
metrics.
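The abstract describes the model only at a high level. The sketch below is a minimal NumPy illustration of the general idea, assuming a mode-3 dictionary D applied to tensor tubes, slice-wise nuclear-norm proximal steps (singular value thresholding) on the coding coefficients, and a simple alternating loop. It is not the authors' multi-block proximal alternating minimization algorithm; the variable names (Y, mask, D, B, lam, rho) and the update rules are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(Y, mask, r=20, lam=0.1, rho=1.0, iters=100, seed=0):
    """Toy completion of a third-order tensor Y observed on a boolean mask.

    Each mode-3 tube Y[i, j, :] is coded as D @ B[i, j, :], and each coding
    slice B[:, :, k] is pushed toward low rank by singular value thresholding.
    Assumes r <= Y.shape[2]; hyper-parameters are illustrative only.
    """
    n1, n2, n3 = Y.shape
    rng = np.random.default_rng(seed)
    D = np.linalg.qr(rng.standard_normal((n3, r)))[0]  # random orthonormal dictionary
    X = Y * mask                                       # current completed estimate
    for _ in range(iters):
        Xm = X.reshape(n1 * n2, n3)                    # tubes as rows
        # Coding step: ridge-regularized least-squares fit of every tube ...
        Bm = Xm @ D @ np.linalg.inv(D.T @ D + rho * np.eye(r))
        B = Bm.reshape(n1, n2, r)
        # ... followed by a nuclear-norm proximal step on each coding slice.
        for k in range(r):
            B[:, :, k] = svt(B[:, :, k], lam / rho)
        Bm = B.reshape(n1 * n2, r)
        # Dictionary step: orthogonal Procrustes update from the data/code correlation.
        U, _, Vt = np.linalg.svd(Xm.T @ Bm, full_matrices=False)
        D = U @ Vt
        # Completion step: rebuild the tensor and re-impose the observed entries.
        X = (Bm @ D.T).reshape(n1, n2, n3)
        X[mask] = Y[mask]
    return X, D, B
```

A call such as X_hat, D, B = complete(Y, mask) would return the completed tensor together with the learned dictionary and coding coefficients. The paper's actual algorithm updates every block with proximal steps and, unlike this toy loop, comes with a proof of global convergence to a critical point.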
Related papers
- Uncovering mesa-optimization algorithms in Transformers [61.06055590704677]
Some autoregressive models can learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
We show that standard next-token prediction error minimization gives rise to a subsidiary learning algorithm that adjusts the model as new inputs are revealed.
Our findings explain in-context learning as a product of autoregressive loss minimization and inform the design of new optimization-based Transformer layers.
arXiv Detail & Related papers (2023-09-11T22:42:50Z) - Distributive Pre-Training of Generative Modeling Using Matrix-Product
States [0.0]
We consider an alternative training scheme utilizing basic tensor network operations, e.g., summation and compression.
The training algorithm is based on compressing the superposition state constructed from all the training data in product state representation.
We benchmark the algorithm on the MNIST dataset and show reasonable results for generating new images and classification tasks.
arXiv Detail & Related papers (2023-06-26T15:46:08Z) - An Information-Theoretic Analysis of Compute-Optimal Neural Scaling Laws [24.356906682593532]
We study the compute-optimal trade-off between model and training data set sizes for large neural networks.
Our result suggests a linear relation similar to that supported by the empirical analysis of Chinchilla.
arXiv Detail & Related papers (2022-12-02T18:46:41Z) - What learning algorithm is in-context learning? Investigations with
linear models [87.91612418166464]
We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly.
We show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression.
We provide preliminary evidence that in-context learners share algorithmic features with these predictors (a toy illustration of the reference predictors appears after this list).
arXiv Detail & Related papers (2022-11-28T18:59:51Z) - Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on all three datasets for image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z) - Variational Sparse Coding with Learned Thresholding [6.737133300781134]
We propose a new approach to variational sparse coding that allows us to learn sparse distributions by thresholding samples.
We first evaluate and analyze our method by training a linear generator, showing that it has superior performance, statistical efficiency, and gradient estimation.
arXiv Detail & Related papers (2022-05-07T14:49:50Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations effectively lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques that embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Deep learning based dictionary learning and tomographic image
reconstruction [0.0]
This work presents an approach for image reconstruction in clinical low-dose tomography that combines principles from sparse signal processing with ideas from deep learning.
First, we describe sparse signal representation in terms of dictionaries from a statistical perspective and interpret dictionary learning as a process of aligning the distribution that arises from a generative model with the empirical distribution of true signals.
As a result, we can see that sparse coding with learned dictionaries resembles a specific variational autoencoder, where the decoder is a linear function and the encoder is a sparse coding algorithm (a minimal sketch of this view appears after this list).
arXiv Detail & Related papers (2021-08-26T12:10:17Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - stream-learn -- open-source Python library for difficult data stream
batch analysis [0.0]
stream-learn is compatible with scikit-learn and is developed for the analysis of drifting and imbalanced data streams.
Its main component is a stream generator, which can produce synthetic data streams.
In addition, estimators adapted for data stream classification have been implemented.
arXiv Detail & Related papers (2020-01-29T20:15:09Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic
Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the cost of modelling such interactions explicitly, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
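The entry above on in-context learning names gradient descent, ridge regression, and exact least squares as the reference predictors that trained in-context learners are reported to match. The toy sketch below, under purely illustrative assumptions (random synthetic context, feature dimension d, ridge weight lam), only shows how the two closed-form reference predictors are computed; it says nothing about Transformers themselves.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 32                                    # feature dimension, context length
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))                 # in-context inputs
y = X @ w_true + 0.01 * rng.standard_normal(n)  # in-context targets
x_query = rng.standard_normal(d)                # query point to predict

# Exact least-squares predictor (minimum-norm solution via the pseudo-inverse).
w_ls = np.linalg.pinv(X) @ y

# Ridge-regression predictor; as lam -> 0 it recovers the least-squares solution.
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

print(x_query @ w_ls, x_query @ w_ridge)        # the two reference predictions
```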
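The dictionary-learning entry above describes sparse coding with a learned dictionary as an autoencoder whose decoder is a linear map and whose encoder is a sparse coding algorithm. The sketch below makes that analogy concrete with ISTA as the encoder; the random dictionary, the sparsity weight lam, and the iteration count are illustrative assumptions rather than the paper's tomographic setup.

```python
import numpy as np

def ista_encode(x, D, lam=0.1, iters=200):
    """Encoder: approximately solve min_z 0.5*||x - D z||^2 + lam*||z||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        z = z - (D.T @ (D @ z - x)) / L         # gradient step on the quadratic term
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return z

def decode(z, D):
    """Decoder: a purely linear map, as in the VAE analogy."""
    return D @ z

# Toy usage: a random unit-norm dictionary stands in for a learned one.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = D[:, :5] @ rng.standard_normal(5)           # a signal that is 5-sparse in D
z = ista_encode(x, D)
print(np.count_nonzero(z), np.linalg.norm(x - decode(z, D)))  # sparsity and residual
```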
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.