TTNOpt: Tree tensor network package for high-rank tensor compression
- URL: http://arxiv.org/abs/2505.05908v1
- Date: Fri, 09 May 2025 09:28:38 GMT
- Title: TTNOpt: Tree tensor network package for high-rank tensor compression
- Authors: Ryo Watanabe, Hidetaka Manabe, Toshiya Hikihara, Hiroshi Ueda
- Abstract summary: TTNOpt is a software package that utilizes tree tensor networks (TTNs) for quantum spin systems and high-dimensional data analysis. For quantum spin systems, TTNOpt searches for the ground state of Hamiltonians with bilinear spin interactions and magnetic fields. For high-dimensional data analysis, TTNOpt factorizes complex tensors into TTN states that maximize fidelity to the original tensors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We have developed TTNOpt, a software package that utilizes tree tensor networks (TTNs) for quantum spin systems and high-dimensional data analysis. TTNOpt provides efficient and powerful TTN computations by locally optimizing the network structure, guided by the entanglement pattern of the target tensors. For quantum spin systems, TTNOpt searches for the ground state of Hamiltonians with bilinear spin interactions and magnetic fields, and computes physical properties of these states, including the variational energy, bipartite entanglement entropy (EE), single-site expectation values, and two-site correlation functions. Additionally, TTNOpt can target the lowest-energy state within a specified subspace, provided that the Hamiltonian conserves total magnetization. For high-dimensional data analysis, TTNOpt factorizes complex tensors into TTN states that maximize fidelity to the original tensors by optimizing the tensors and the network. When a TTN is provided as input, TTNOpt reconstructs the network based on the EE without referencing the fidelity to the original state. We present three demonstrations of TTNOpt: (1) Ground-state search for the hierarchical chain model with a system size of $256$. The entanglement patterns of the ground state manifest themselves in a tree structure, and TTNOpt successfully identifies the tree. (2) Factorization of a quantics tensor of $2^{24}$ dimensions representing a three-variable function where each variable has a weak bit-wise correlation. The optimized TTN shows that its structure isolates the variables from each other. (3) Reconstruction of the matrix product network representing a $16$-variable normal distribution characterized by a tree-like correlation structure. TTNOpt can reveal hidden correlation structures of the covariance matrix.
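To make demonstration (2) concrete, the following minimal numpy sketch shows what a quantics encoding of a three-variable function looks like: each variable's grid index is split into binary digits, and each digit becomes a dimension-2 tensor leg. The test function and the bit count here are illustrative placeholders, not taken from the paper (which uses 8 bits per variable, i.e. a $2^{24}$-dimensional tensor); the sketch only illustrates the kind of input TTNOpt factorizes.

```python
import numpy as np

# Quantics encoding of a three-variable function (illustrative only).
# The TTNOpt demonstration uses 8 bits per variable (2^24 entries);
# we use 6 bits here to keep the example light.
n_bits = 6
grid = np.arange(2**n_bits) / 2**n_bits  # uniform grid on [0, 1)

def f(x, y, z):
    # Hypothetical smooth test function; any three-variable function works.
    return np.sin(2 * np.pi * x) + np.cos(2 * np.pi * y) * z

# Sample f on the full tensor-product grid: 2^(3*n_bits) values.
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
samples = f(X, Y, Z)

# Split each variable's index into n_bits binary legs, giving a
# high-rank tensor with 3*n_bits dimension-2 legs -- the kind of
# tensor TTNOpt factorizes into a tree tensor network.
quantics_tensor = samples.reshape((2,) * (3 * n_bits))
print(quantics_tensor.shape)  # (2, 2, ..., 2): 18 legs
```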
Related papers
- The Augmented Tree Tensor Network Cookbook
An augmented tree tensor network (aTTN) is a tensor network ansatz constructed by applying a layer of unitary disentanglers to a tree tensor network. These lecture notes serve as a detailed guide for implementing the aTTN algorithms.
arXiv Detail & Related papers (2025-07-28T18:00:39Z)
- Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations
Tensor decomposition networks (TDNs) achieve competitive performance with a dramatic speedup in computation. We evaluate TDNs on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots.
arXiv Detail & Related papers (2025-07-01T18:46:27Z)
- Efficient Prediction of SO(3)-Equivariant Hamiltonian Matrices via SO(2) Local Frames
We consider the task of predicting Hamiltonian matrices to accelerate electronic structure calculations. Motivated by the inherent relationship between the off-diagonal blocks of the Hamiltonian matrix and the SO(2) local frame, we propose QHNetV2.
arXiv Detail & Related papers (2025-06-11T05:04:29Z)
- Neuralized Fermionic Tensor Networks for Quantum Many-Body Systems
We describe a class of neuralized fermionic tensor network states (NN-fTNS). NN-fTNS introduce non-linearity into fermionic tensor networks through configuration-dependent neural network transformations of the local tensors. Compared to existing fermionic neural quantum states (NQS), NN-fTNS offer a physically motivated alternative fermionic structure.
arXiv Detail & Related papers (2025-06-10T01:33:58Z)
- Tree tensor network hierarchical equations of motion based on time-dependent variational principle for efficient open quantum dynamics in structured thermal environments
We introduce TTN-HEOM, an efficient method for exactly calculating the open quantum dynamics of driven quantum systems interacting with bosonic baths. We implement three general propagators for the coupled master equations. Our results show that TTN-HEOM is capable of simulating both the dephasing and relaxation dynamics of driven quantum systems interacting with structured baths.
arXiv Detail & Related papers (2025-04-30T18:48:05Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework for learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way toward practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Message-Passing Neural Quantum States for the Homogeneous Electron Gas
We introduce a message-passing-neural-network-based wave function Ansatz to simulate extended, strongly interacting fermions in continuous space.
We demonstrate its accuracy by simulating the ground state of the homogeneous electron gas in three spatial dimensions.
arXiv Detail & Related papers (2023-05-12T04:12:04Z)
- Isometric tensor network optimization for extensive Hamiltonians is free of barren plateaus
We show that there are no barren plateaus in the energy optimization of isometric tensor network states (TNS).
TNS are a promising route for an efficient quantum-computation-based investigation of strongly-correlated quantum matter.
arXiv Detail & Related papers (2023-04-27T16:45:57Z)
- Entanglement bipartitioning and tree tensor networks
We propose an entanglement bipartitioning approach to design an optimal network structure of the tree tensor network (TTN) for quantum many-body systems.
We demonstrate that entanglement bipartitioning of up to 16 sites gives rise to nontrivial tree network structures for $S=1/2$ Heisenberg models in one and two dimensions (see the sketch after this entry).
arXiv Detail & Related papers (2022-10-21T05:36:03Z)
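As a generic illustration of the bipartite entanglement entropy (EE) that drives this kind of structural search, the following numpy sketch computes the EE of a pure state across a chosen cut from its Schmidt (singular-value) spectrum. This is a standard textbook computation given for orientation, not code from the paper.

```python
import numpy as np

def bipartite_ee(psi, dims, left):
    """Von Neumann entanglement entropy of a pure state `psi` for the
    bipartition `left` vs. the remaining sites. `dims` holds the local
    dimensions; standard SVD-based computation, illustrative only."""
    n = len(dims)
    right = [i for i in range(n) if i not in left]
    # Reorder legs so the left block comes first, then matricize.
    psi = psi.reshape(dims).transpose(list(left) + right)
    mat = psi.reshape(int(np.prod([dims[i] for i in left])), -1)
    s = np.linalg.svd(mat, compute_uv=False)
    p = s**2 / np.sum(s**2)  # Schmidt spectrum
    p = p[p > 1e-12]         # drop numerical zeros before the log
    return float(-np.sum(p * np.log(p)))

# Example: a random 6-site spin-1/2 state; compare two cuts.
rng = np.random.default_rng(0)
psi = rng.normal(size=2**6) + 1j * rng.normal(size=2**6)
psi /= np.linalg.norm(psi)
print(bipartite_ee(psi, [2] * 6, left=[0, 1, 2]))  # contiguous cut
print(bipartite_ee(psi, [2] * 6, left=[0, 2, 4]))  # interleaved cut
```

Comparing such entropies over candidate cuts is the basic primitive behind entanglement-guided structure search.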
- Automatic structural optimization of tree tensor networks
We propose a TTN algorithm that enables us to automatically optimize the network structure by local reconnections of isometries.
We demonstrate that the entanglement structure embedded in the ground state of the system can be efficiently visualized as a perfect binary tree in the optimized TTN.
arXiv Detail & Related papers (2022-09-07T14:51:39Z)
- From Tree Tensor Network to Multiscale Entanglement Renormalization Ansatz
We introduce a new Tree Tensor Network (TTN) based TNS, dubbed the Fully-Augmented Tree Tensor Network (FATTN), obtained by releasing a constraint in the Augmented Tree Tensor Network (ATTN).
When disentanglers are augmented in the physical layer of the TTN, FATTN can provide more entanglement than TTN and ATTN.
Benchmark results on the ground-state energy of the transverse Ising model demonstrate the improved accuracy of FATTN over TTN and ATTN.
arXiv Detail & Related papers (2021-10-17T11:16:38Z)
- Semi-tensor Product-based Tensor Decomposition for Neural Network Compression
This paper generalizes the classical matrix product-based mode product to the semi-tensor mode product.
As it permits the connection of two factors with different dimensionality, more flexible and compact tensor decompositions can be obtained (see the sketch after this entry).
arXiv Detail & Related papers (2021-09-30T15:18:14Z)
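For readers unfamiliar with the operation, here is a minimal numpy sketch of the left semi-tensor product under its standard definition: for $A$ of shape $(m, n)$ and $B$ of shape $(p, q)$ with $t = \mathrm{lcm}(n, p)$, one computes $(A \otimes I_{t/n})(B \otimes I_{t/p})$, so the inner dimensions need not match. This is a generic illustration of the operation, not code from the paper.

```python
import numpy as np
from math import lcm

def semi_tensor_product(A, B):
    """Left semi-tensor product of A (m x n) and B (p x q):
    with t = lcm(n, p), return (A kron I_{t/n}) @ (B kron I_{t/p}).
    Reduces to the ordinary matrix product when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

A = np.random.rand(2, 4)  # inner dimension 4
B = np.random.rand(2, 3)  # inner dimension 2: ordinary product undefined
C = semi_tensor_product(A, B)
print(C.shape)  # (2, 6): shape (m*t/n, q*t/p) with t = lcm(4, 2) = 4
```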
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- T-Basis: a Compact Representation for Neural Networks
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of arbitrary shape, as is often seen in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors (see the sketch after this entry).
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
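As a generic illustration of the canonical polyadic (CP) idea named in the title (a minimal sketch under the standard definition, not the paper's framework): a rank-$R$ CP model represents a multi-way parameter tensor as a sum of $R$ outer products of factor vectors, so storage grows additively rather than multiplicatively in the mode sizes.

```python
import numpy as np

# Minimal rank-R CP representation of a 3-way tensor (illustrative).
# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
R, I, J, K = 4, 5, 6, 7
rng = np.random.default_rng(0)
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# Reconstruct the full tensor from the factors.
T = np.einsum("ir,jr,kr->ijk", A, B, C)
print(T.shape)           # (5, 6, 7): I*J*K = 210 entries...
print((I + J + K) * R)   # ...parameterized by only 72 factor entries
```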