tntorch: Tensor Network Learning with PyTorch
- URL: http://arxiv.org/abs/2206.11128v1
- Date: Wed, 22 Jun 2022 14:19:15 GMT
- Title: tntorch: Tensor Network Learning with PyTorch
- Authors: Mikhail Usvyatsov, Rafael Ballester-Ripoll, Konrad Schindler
- Abstract summary: tntorch is a tensor learning framework that supports multiple decompositions.
It implements differentiable tensor algebra, rank truncation, cross-approximation, batch processing, comprehensive tensor arithmetics, and more.
- Score: 26.544996974928583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present tntorch, a tensor learning framework that supports multiple
decompositions (including Candecomp/Parafac, Tucker, and Tensor Train) under a
unified interface. With our library, the user can learn and handle low-rank
tensors with automatic differentiation, seamless GPU support, and the
convenience of PyTorch's API. Besides decomposition algorithms, tntorch
implements differentiable tensor algebra, rank truncation, cross-approximation,
batch processing, comprehensive tensor arithmetics, and more.
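A minimal usage sketch (not taken from the paper; the calls follow tntorch's public documentation, but verify them against your installed version):
```python
# A minimal tntorch sketch (assumes `pip install tntorch`): compress a dense
# tensor into Tensor Train (TT) format, decompress, and measure the error.
import torch
import tntorch as tn

x = torch.randn(32, 32, 32)
t = tn.Tensor(x, ranks_tt=8)          # TT decomposition with ranks capped at 8
x_hat = t.torch()                     # decompress back to a regular torch tensor
print(torch.norm(x - x_hat) / torch.norm(x))   # relative Frobenius error

# TT cores are ordinary torch tensors, so arithmetic and autograd carry over:
u = 2 * t + t                         # arithmetic directly on compressed tensors
```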
Related papers
- Scorch: A Library for Sparse Deep Learning [41.62614683452247]
We introduce Scorch, a library that seamlessly integrates efficient sparse tensor computation into the PyTorch ecosystem.
Scorch provides a flexible and intuitive interface for sparse tensors, supporting diverse sparse data structures.
We demonstrate Scorch's ease of use and performance gains on diverse deep learning models across multiple domains.
arXiv Detail & Related papers (2024-05-27T06:59:20Z)
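Scorch's own API is not reproduced in this digest; as a conceptual stand-in, the sketch below uses PyTorch's built-in sparse COO tensors to illustrate the kind of sparse computation such a library accelerates:
```python
# Conceptual stand-in using PyTorch's built-in sparse support (not Scorch's API).
import torch

indices = torch.tensor([[0, 1, 2],      # row coordinates of nonzeros
                        [2, 0, 1]])     # column coordinates of nonzeros
values = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(indices, values, size=(3, 3))

dense = torch.randn(3, 4)
print(torch.sparse.mm(s, dense))        # sparse @ dense matrix product
```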
- Scalable CP Decomposition for Tensor Learning using GPU Tensor Cores [47.87810316745786]
We propose a compression-based tensor decomposition framework, exascale-tensor, to support exascale tensor decomposition.
Compared to the baselines, exascale-tensor supports 8,000x larger tensors and achieves speedups of up to 6.95x.
We also apply our method to two real-world applications, including gene analysis and tensor layer neural networks.
arXiv Detail & Related papers (2023-11-22T21:04:59Z)
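The exascale-tensor code is not shown here; as a small-scale illustration of the operation it scales up (CP decomposition on GPU), this sketch uses tntorch's CP support (`ranks_cp` follows the tntorch documentation; sizes and rank are arbitrary):
```python
# Toy-scale illustration of GPU CP decomposition via tntorch (not the
# exascale-tensor framework itself).
import torch
import tntorch as tn

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(64, 64, 64, device=device)
t = tn.Tensor(x, ranks_cp=16)         # rank-16 Candecomp/Parafac approximation
x_hat = t.torch()
print(torch.norm(x - x_hat) / torch.norm(x))
```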
- Symbolically integrating tensor networks over various random tensors by the second version of Python RTNI [0.5439020425818999]
We are upgrading the Python version of RTNI, which symbolically integrates tensor networks over Haar-distributed unitary matrices.
PyRTNI2 now also handles real and complex normal Gaussian tensors, in addition to Haar-distributed matrices.
In this paper, we explain the mathematics behind the program and show what kinds of tensor network calculations can be made with it.
arXiv Detail & Related papers (2023-09-03T13:14:46Z)
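PyRTNI2 computes such averages symbolically; as a numerical sanity check of one classical result it covers, the sketch below Monte-Carlo-estimates E[U A U†] over Haar-distributed U on U(d), which equals (tr A / d) I:
```python
# Monte-Carlo check of a Haar integral: E[U A U^H] = (tr(A) / d) * I.
# Haar sampling via QR of a complex Ginibre matrix with phase correction.
import torch

def haar_unitary(d):
    z = torch.complex(torch.randn(d, d), torch.randn(d, d)) / 2 ** 0.5
    q, r = torch.linalg.qr(z)
    phases = torch.diagonal(r) / torch.diagonal(r).abs()
    return q * phases.unsqueeze(0)      # rescale each column to a unit phase

d, n = 4, 20000
a = torch.complex(torch.randn(d, d), torch.randn(d, d))
acc = torch.zeros(d, d, dtype=torch.complex64)
for _ in range(n):
    u = haar_unitary(d)
    acc += u @ a @ u.conj().T
print(acc / n)                          # approx. (tr(a) / d) * identity
print(torch.trace(a) / d)
```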
- TensorKrowch: Smooth integration of tensor networks in machine learning [46.0920431279359]
We introduce TensorKrowch, an open-source Python library built on top of PyTorch.
TensorKrowch allows users to construct any tensor network, train it, and integrate it as a layer in more intricate deep learning models.
arXiv Detail & Related papers (2023-06-14T15:55:19Z)
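TensorKrowch's actual API is not reproduced here; the sketch below shows the generic idea it packages, a tensor network (here a small matrix product state) acting as a trainable torch layer:
```python
# Generic MPS-as-a-layer sketch (not TensorKrowch's API): contract an n-site
# matrix product state with n per-site feature vectors of size phys_dim.
import torch
import torch.nn as nn

class MPSLayer(nn.Module):
    def __init__(self, n_sites, phys_dim, bond_dim):
        super().__init__()
        self.cores = nn.ParameterList([
            nn.Parameter(0.1 * torch.randn(bond_dim, phys_dim, bond_dim))
            for _ in range(n_sites)
        ])
        self.left = nn.Parameter(torch.randn(bond_dim))
        self.right = nn.Parameter(torch.randn(bond_dim))

    def forward(self, x):                # x: (batch, n_sites, phys_dim)
        v = self.left.expand(x.shape[0], -1)           # (batch, bond)
        for i, core in enumerate(self.cores):
            # contract the physical index with the input, then the bond index
            m = torch.einsum('bp,lpr->blr', x[:, i], core)
            v = torch.einsum('bl,blr->br', v, m)
        return v @ self.right                          # (batch,)

layer = MPSLayer(n_sites=8, phys_dim=2, bond_dim=4)
print(layer(torch.randn(3, 8, 2)).shape)               # torch.Size([3])
```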
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
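The paper's bicriteria algorithm is not reproduced here; for orientation, the sketch below implements the standard TT-SVD baseline (sequential truncated SVDs) that such algorithms speed up:
```python
# Standard TT-SVD baseline: peel off one TT core at a time via truncated SVD.
import torch

def tt_svd(a, max_rank):
    """Decompose a full tensor into TT cores with ranks capped at max_rank."""
    dims = a.shape
    cores, r_prev = [], 1
    mat = a.reshape(1, -1)
    for k in range(len(dims) - 1):
        u, s, vh = torch.linalg.svd(mat.reshape(r_prev * dims[k], -1),
                                    full_matrices=False)
        r = min(max_rank, s.numel())
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        mat = s[:r, None] * vh[:r]      # carry the remainder to the right
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

a = torch.randn(8, 9, 10, 11)
print([c.shape for c in tt_svd(a, max_rank=5)])
```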
- PyTorchVideo: A Deep Learning Library for Video Understanding [71.89124881732015]
PyTorchVideo is an open-source deep-learning library for video understanding tasks.
It covers a full stack of video understanding tools including multimodal data loading, transformations, and models.
The library is based on PyTorch and can be used by any training framework.
arXiv Detail & Related papers (2021-11-18T18:59:58Z)
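A minimal usage sketch; the torch.hub entry point name ("slow_r50") and the clip shape come from PyTorchVideo's public model zoo and may differ across versions:
```python
# Load a pretrained video model via PyTorchVideo's torch.hub entry point and
# run a random clip through it (hub name and input shape are assumptions
# based on the public model zoo; verify against the current docs).
import torch

model = torch.hub.load("facebookresearch/pytorchvideo", "slow_r50",
                       pretrained=True)
model.eval()
clip = torch.randn(1, 3, 8, 256, 256)   # (batch, channels, frames, height, width)
with torch.no_grad():
    logits = model(clip)
print(logits.shape)                      # class scores, e.g. Kinetics-400
```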
- The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal Padding [14.635810503599759]
CoRa is a tensor compiler that allows users to easily generate efficient code for ragged tensor operators.
We evaluate CoRa on a variety of operators on ragged tensors as well as on an encoder layer of the transformer model.
arXiv Detail & Related papers (2021-10-19T19:39:04Z)
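CoRa works at the compiler level and its interface is not shown here; as a library-level stand-in for the same ragged-batch idea, PyTorch's (prototype) nested tensors hold variable-length rows without padding:
```python
# Library-level stand-in for ragged tensors (not CoRa's compiler interface):
# nested tensors store rows of different lengths with no padding.
import torch

a = torch.randn(3, 16)                  # "sequence" of length 3
b = torch.randn(5, 16)                  # "sequence" of length 5
nt = torch.nested.nested_tensor([a, b])

w = torch.randn(8, 16)
out = torch.nn.functional.linear(nt, w) # applied per row, no padding needed
print([t.shape for t in out.unbind()])  # shapes (3, 8) and (5, 8)
```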
- DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning [79.89085533866071]
This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors.
DeepReduce decomposes tensors into two sets, values and indices, and allows both independent and combined compression of these sets.
Our experiments with large real models demonstrate that DeepReduce transmits less data and imposes lower computational overhead than existing methods.
arXiv Detail & Related papers (2021-02-05T11:31:24Z)
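DeepReduce's compressors are not reproduced here; the sketch below shows only the decomposition the paper builds on, splitting a sparsified gradient into an index stream and a value stream that can then be compressed independently:
```python
# Split a gradient into (indices, values) streams, as in value/index
# decomposition for sparse-tensor communication; compressors are omitted.
import torch

grad = torch.randn(10_000)
k = 100                                          # keep top 1% by magnitude
_, indices = torch.topk(grad.abs(), k)
values = grad[indices]                           # signed values at those slots

# The two sets travel separately and can be compressed independently,
# e.g. lower-precision values and packed integer indices.
payload = (indices.to(torch.int32), values.to(torch.float16))

# Receiver side: rebuild the sparse gradient.
recovered = torch.zeros_like(grad)
recovered[payload[0].long()] = payload[1].float()
```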
- Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parametrized objective can go beyond the lazy training regime and exploit low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
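A toy version of the setting studied: gradient descent on an over-parametrized CP objective, with many more components than the true rank (the optimizer and sizes below are our choices, not the paper's):
```python
# Over-parametrized CP decomposition fit by gradient descent: the model uses
# m = 20 rank-1 components to fit a tensor of true rank 2 (toy sizes).
import torch

d, true_rank, m = 10, 2, 20
gt = sum(torch.einsum('i,j,k->ijk',
                      torch.randn(d), torch.randn(d), torch.randn(d))
         for _ in range(true_rank))

factors = [(0.01 * torch.randn(m, d)).requires_grad_() for _ in range(3)]
opt = torch.optim.Adam(factors, lr=0.02)
for step in range(3000):
    approx = torch.einsum('ri,rj,rk->ijk', *factors)   # sum of m rank-1 terms
    loss = (approx - gt).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())                                     # near zero at convergence
```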
This list is automatically generated from the titles and abstracts of the papers on this site.