Distributed Non-Negative Tensor Train Decomposition
- URL: http://arxiv.org/abs/2008.01340v1
- Date: Tue, 4 Aug 2020 05:35:57 GMT
- Title: Distributed Non-Negative Tensor Train Decomposition
- Authors: Manish Bhattarai, Gopinath Chennupati, Erik Skau, Raviteja Vangara,
Hristo Djidjev, Boian Alexandrov
- Abstract summary: High-dimensional data is represented as multidimensional arrays, also known as tensors.
The presence of latent (not directly observable) structures in a tensor enables a unique representation and compression of the data.
We introduce a distributed non-negative tensor-train method and demonstrate its scalability and compression on synthetic and real-world big datasets.
- Score: 3.2264685979617655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The era of exascale computing opens new avenues for innovations and
discoveries in many scientific, engineering, and commercial fields. However,
with the exaflops also comes extra-large, high-dimensional data generated by
high-performance computing. High-dimensional data is represented as
multidimensional arrays, also known as tensors. The presence of latent (not
directly observable) structures in a tensor enables a unique representation
and compression of the data by classical tensor factorization techniques.
However, the classical tensor methods are not always stable, or they can be
exponential in their memory requirements, which makes them unsuitable for
high-dimensional tensors. Tensor train (TT) is a state-of-the-art tensor
network introduced for the factorization of high-dimensional tensors. TT
transforms the initial high-dimensional tensor into a network of
three-dimensional tensors that requires only linear storage. Many real-world
data, such as density, temperature, population, and probability, are
non-negative, and for easy interpretation, algorithms preserving
non-negativity are preferred. Here, we introduce a distributed non-negative
tensor-train method and demonstrate its scalability and compression on
synthetic and real-world big datasets.
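The abstract's key technical point is the TT format itself: a d-dimensional tensor is rewritten as a chain of three-dimensional cores, so storage grows linearly with the number of modes rather than exponentially. The sketch below is a minimal, single-node TT-SVD in NumPy that only illustrates this format; the names (`tt_svd`, `tt_reconstruct`) and parameters (`max_rank`, `tol`) are assumptions for illustration, and the paper's distributed, non-negativity-preserving algorithm is not implemented here.
```python
# Minimal single-node TT-SVD sketch (illustrative names: tt_svd, tt_reconstruct,
# max_rank, tol). It shows how a d-dimensional tensor becomes a chain of 3-D
# cores G_k of shape (r_{k-1}, n_k, r_k), whose total storage is linear in d.
# It does NOT implement the paper's distributed, non-negativity-preserving method.
import numpy as np

def tt_svd(tensor, max_rank=8, tol=1e-10):
    """Factor `tensor` into a list of 3-D TT cores via sequential truncated SVDs."""
    dims = tensor.shape
    cores = []
    rank_prev = 1
    unfolding = tensor.reshape(dims[0], -1)          # mode-1 unfolding
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        rank = max(1, min(max_rank, int(np.sum(S > tol))))
        cores.append(U[:, :rank].reshape(rank_prev, dims[k], rank))
        # Push the remaining factor to the right and refold for the next mode.
        unfolding = (np.diag(S[:rank]) @ Vt[:rank]).reshape(rank * dims[k + 1], -1)
        rank_prev = rank
    cores.append(unfolding.reshape(rank_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor (for verification)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return np.squeeze(out, axis=(0, -1))

# Usage: a small synthetic non-negative tensor with exact TT structure.
rng = np.random.default_rng(0)
true_cores = [np.abs(rng.standard_normal(s))
              for s in [(1, 8, 3), (3, 8, 3), (3, 8, 3), (3, 8, 1)]]
A = tt_reconstruct(true_cores)                       # dense 8x8x8x8 tensor
cores = tt_svd(A, max_rank=3)
print("core shapes:", [c.shape for c in cores])      # storage = sum_k r_{k-1}*n_k*r_k
print("dense entries:", A.size, "TT entries:", sum(c.size for c in cores))
print("relative error:", np.linalg.norm(tt_reconstruct(cores) - A) / np.linalg.norm(A))
```
In this toy run the dense tensor stores 8^4 = 4096 entries while the TT cores store 24 + 72 + 72 + 24 = 192; adding more modes of the same size and rank grows the core storage linearly instead of multiplying the dense size.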
Related papers
- "Lossless" Compression of Deep Neural Networks: A High-dimensional
Neural Tangent Kernel Approach [49.744093838327615]
We provide a novel compression approach to wide and fully-connected deep neural nets.
Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme.
arXiv Detail & Related papers (2024-03-01T03:46:28Z)
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, namely the exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, the exascale-tensor supports 8,000x larger tensors and achieves a speedup of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
- The Tensor as an Informational Resource [1.3044677039636754]
A tensor is a multidimensional array of numbers that can be used to store data, encode a computational relation and represent quantum entanglement.
We propose a family of information-theoretically constructed preorders on tensors, which can be used to compare tensors with each other and to assess the existence of transformations between them.
arXiv Detail & Related papers (2023-11-03T18:47:39Z)
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
- Sample Efficient Learning of Factored Embeddings of Tensor Fields [3.0072182643196217]
We learn approximate full-rank and compact tensor sketches with decompositive representations.
All information querying and post-processing on the original tensor field can now be achieved more efficiently.
arXiv Detail & Related papers (2022-09-01T11:32:00Z)
- Multi-version Tensor Completion for Time-delayed Spatio-temporal Data [50.762087239885936]
Real-world spatio-temporal data is often incomplete or inaccurate due to various data loading delays.
We propose a low-rank tensor model to predict the updates over time.
We obtain up to 27.2% lower root mean-squared-error compared to the best baseline method.
arXiv Detail & Related papers (2021-05-11T19:55:56Z)
- Low-Rank and Sparse Enhanced Tucker Decomposition for Tensor Completion [3.498620439731324]
We introduce a unified low-rank and sparse enhanced Tucker decomposition model for tensor completion.
Our model possesses a sparse regularization term to promote a sparse core tensor, which is beneficial for tensor data compression.
Remarkably, our model can handle different types of real-world data sets, since it exploits the potential periodicity and inherent correlation properties appearing in tensors.
arXiv Detail & Related papers (2020-10-01T12:45:39Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Anomaly Detection with Tensor Networks [2.3895981099137535]
We exploit the memory and computational efficiency of tensor networks to learn a linear transformation over a space with a dimension exponential in the number of original features.
We produce competitive results on image datasets, despite not exploiting the locality of images.
arXiv Detail & Related papers (2020-06-03T20:41:30Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
arXiv Detail & Related papers (2020-04-16T22:53:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.