YASTN: Yet another symmetric tensor networks; A Python library for abelian symmetric tensor network calculations
- URL: http://arxiv.org/abs/2405.12196v1
- Date: Mon, 20 May 2024 17:32:16 GMT
- Title: YASTN: Yet another symmetric tensor networks; A Python library for abelian symmetric tensor network calculations
- Authors: Marek M. Rams, Gabriela Wójtowicz, Aritra Sinha, Juraj Hasik,
- Abstract summary: We present an open-source tensor network Python library for quantum many-body simulations.
At its core is an abelian-symmetric tensor, implemented as a sparse block structure managed by a logical layer on top of a dense multi-dimensional array backend.
We show the library performance in simulations with infinite projected entangled-pair states, such as finding the ground states with AD, or simulating thermal states of the Hubbard model via imaginary time evolution.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present an open-source tensor network Python library for quantum many-body simulations. At its core is an abelian-symmetric tensor, implemented as a sparse block structure managed by a logical layer on top of a dense multi-dimensional array backend. This serves as the basis for the higher-level tensor network algorithms implemented here, operating on matrix product states and projected entangled pair states. Using an appropriate backend, such as PyTorch, gives direct access to automatic differentiation (AD) for cost-function gradient calculations and to execution on GPUs or other supported accelerators. We show the library's performance in simulations with infinite projected entangled-pair states, such as finding ground states with AD, or simulating thermal states of the Hubbard model via imaginary time evolution. We quantify the sources of performance gains that utilizing symmetries allows in those challenging examples.
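The core idea from the abstract, a symmetric tensor stored as a sparse collection of dense blocks, one per allowed charge sector, can be illustrated with a minimal sketch. The class below is a hypothetical toy for a U(1)-symmetric matrix, not YASTN's actual API: each stored block is labeled by its charge, blocks in forbidden sectors are implicitly zero, and a block-wise product only ever touches matching sectors, which is where the performance gains from symmetry come from.

```python
import numpy as np

class U1BlockMatrix:
    """Toy block-sparse matrix with a U(1) charge labeling each sector.

    Only blocks whose row and column charges match are stored; all other
    blocks vanish by symmetry. This mirrors the general idea of an
    abelian-symmetric tensor, not YASTN's actual interface.
    """

    def __init__(self, blocks):
        # blocks: {charge: 2D ndarray}, one dense block per charge sector
        self.blocks = dict(blocks)

    def __matmul__(self, other):
        # Sectors multiply independently, so the cost scales with the
        # individual block sizes rather than the full matrix dimension.
        out = {}
        for q, a in self.blocks.items():
            b = other.blocks.get(q)
            if b is not None:
                out[q] = a @ b
        return U1BlockMatrix(out)

    def to_dense(self):
        # Assemble the full block-diagonal dense matrix, for checking only.
        charges = sorted(self.blocks)
        rows = sum(self.blocks[q].shape[0] for q in charges)
        cols = sum(self.blocks[q].shape[1] for q in charges)
        m = np.zeros((rows, cols))
        r = c = 0
        for q in charges:
            blk = self.blocks[q]
            m[r:r + blk.shape[0], c:c + blk.shape[1]] = blk
            r += blk.shape[0]
            c += blk.shape[1]
        return m

rng = np.random.default_rng(0)
a = U1BlockMatrix({0: rng.standard_normal((2, 2)),
                   1: rng.standard_normal((3, 3))})
b = U1BlockMatrix({0: rng.standard_normal((2, 2)),
                   1: rng.standard_normal((3, 3))})
c = a @ b
# The block-wise product agrees with the dense block-diagonal product.
```

Swapping the dense `ndarray` blocks for PyTorch tensors is what would expose AD and GPU execution, as the abstract notes for the actual library.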
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - Fermionic tensor network methods [0.0]
We show how fermionic statistics can be naturally incorporated in tensor networks on arbitrary graphs through the use of graded Hilbert spaces.
This formalism allows tensor network methods to be applied to fermionic lattice systems in a local way, avoiding the need for a Jordan-Wigner transformation or for the explicit tracking of leg crossings by swap gates in 2D tensor networks.
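The graded-Hilbert-space rule mentioned above can be sketched in a few lines: when two tensor legs carrying odd fermion parity are exchanged, the block picks up a factor of -1, and that local sign rule replaces global Jordan-Wigner strings. The function below is an illustrative toy, not the paper's implementation: it swaps two adjacent Z2-graded indices of a block tensor keyed by the parity of each index.

```python
import numpy as np

def swap_adjacent(blocks):
    """Swap two adjacent fermionic (parity-graded) indices of a tensor.

    blocks: {(p1, p2): 2D ndarray} keyed by the Z2 parity of each index.
    Exchanging two odd-parity legs picks up a factor of -1; this graded
    swap rule is what lets fermionic signs stay local. Sketch only.
    """
    out = {}
    for (p1, p2), blk in blocks.items():
        sign = -1 if (p1 == 1 and p2 == 1) else 1
        out[(p2, p1)] = sign * blk.T
    return out

t = {(0, 0): np.ones((1, 1)), (1, 1): np.ones((2, 2))}
s = swap_adjacent(t)
ss = swap_adjacent(s)
# Swapping twice is the identity, since (-1)^2 = 1.
```

Only the odd-odd block changes sign; even sectors are untouched, which is exactly the behavior of a swap gate in a 2D fermionic tensor network.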
arXiv Detail & Related papers (2024-04-22T22:22:05Z) - Boosting the effective performance of massively parallel tensor network state algorithms on hybrid CPU-GPU based architectures via non-Abelian symmetries [0.0]
Non-Abelian symmetry-related tensor algebra based on the Wigner-Eckart theorem is fully detached from the conventional tensor network layer.
We have achieved an order of magnitude increase in performance with respect to results reported in arXiv:2305.05581 in terms of computational complexity.
Our solution has an estimated effective performance of 250-500 TFLOPS.
arXiv Detail & Related papers (2023-09-23T07:49:53Z) - One-step replica symmetry breaking in the language of tensor networks [0.913755431537592]
We develop an exact mapping between the one-step replica symmetry breaking cavity method and tensor networks.
The two schemes come with complementary mathematical and numerical toolboxes that could be leveraged to improve the respective states of the art.
arXiv Detail & Related papers (2023-06-26T18:42:51Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
arXiv Detail & Related papers (2022-07-05T03:08:27Z) - RosneT: A Block Tensor Algebra Library for Out-of-Core Quantum Computing Simulation [0.18472148461613155]
We present RosneT, a library for distributed, out-of-core block tensor algebra.
We use the PyCOMPSs programming model to transform tensor operations into a collection of tasks handled by the COMPSs runtime.
We report results validating our approach, showing good scalability in simulations of quantum circuits of up to 53 qubits.
arXiv Detail & Related papers (2022-01-17T20:35:40Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - TensorLy-Quantum: Quantum Machine Learning with Tensor Methods [67.29221827422164]
We create a Python library for quantum circuit simulation that adopts the PyTorch API.
TensorLy-Quantum can scale to hundreds of qubits on a single GPU and to thousands of qubits on multiple GPUs.
arXiv Detail & Related papers (2021-12-19T19:26:17Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Geomstats: A Python Package for Riemannian Geometry in Machine Learning [5.449970675406181]
We introduce Geomstats, an open-source Python toolbox for computations and statistics on nonlinear manifolds.
We provide object-oriented and extensively unit-tested implementations.
We show that Geomstats provides reliable building blocks to foster research in differential geometry and statistics.
The source code is freely available under the MIT license at geomstats.ai.
arXiv Detail & Related papers (2020-04-07T20:41:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.