Entanglement bipartitioning and tree tensor networks
- URL: http://arxiv.org/abs/2210.11741v2
- Date: Thu, 5 Jan 2023 06:22:36 GMT
- Title: Entanglement bipartitioning and tree tensor networks
- Authors: Kouichi Okunishi, Hiroshi Ueda, Tomotoshi Nishino
- Abstract summary: We propose an entanglement bipartitioning approach to design an optimal network structure of the tree-tensor-network (TTN) for quantum many-body systems.
We demonstrate that entanglement bipartitioning of up to 16 sites gives rise to nontrivial tree network structures for $S=1/2$ Heisenberg models in one and two dimensions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an entanglement bipartitioning approach to design an optimal
network structure of the tree-tensor-network (TTN) for quantum many-body
systems. Given an exact ground-state wavefunction, we perform sequential
bipartitioning of spin-cluster nodes so as to minimize the mutual information
or the maximum loss of the entanglement entropy associated with the branch to
be bipartitioned. We demonstrate that entanglement bipartitioning of up to 16
sites gives rise to nontrivial tree network structures for $S=1/2$ Heisenberg
models in one and two dimensions. The resulting TTNs enable us to obtain better
variational energies than standard TTNs such as the uniform matrix-product state
and the perfect-binary-tree tensor network.
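As a concrete illustration of a single bipartitioning step, here is a minimal numerical sketch, assuming exact diagonalization of a small $S=1/2$ Heisenberg chain; the helper names (`heisenberg_chain`, `entropy_of_subset`) are illustrative, not from the paper. For a pure state the mutual information between the two halves of a bipartition is $I(A{:}B)=S_A+S_B-S_{AB}=2S_A$, so minimizing it reduces to minimizing the entanglement entropy $S_A$ over candidate subsets:

```python
import itertools
import numpy as np

def heisenberg_chain(n):
    """Dense S=1/2 Heisenberg chain with open boundaries (illustrative)."""
    sx = np.array([[0, 1], [1, 0]]) / 2
    sy = np.array([[0, -1j], [1j, 0]]) / 2
    sz = np.array([[1, 0], [0, -1]]) / 2
    def site_op(o, i):
        mats = [np.eye(2)] * n
        mats[i] = o
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    h = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for s in (sx, sy, sz):
            h += site_op(s, i) @ site_op(s, i + 1)
    return h

def entropy_of_subset(psi, subset, n):
    """Entanglement entropy of `subset` vs the rest, via SVD of the
    reshaped pure-state wavefunction."""
    rest = [i for i in range(n) if i not in subset]
    m = psi.reshape([2] * n).transpose(list(subset) + rest)
    s = np.linalg.svd(m.reshape(2 ** len(subset), -1), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

n = 8
_, vecs = np.linalg.eigh(heisenberg_chain(n))
psi = vecs[:, 0]  # exact ground state

# One bipartitioning step: fixing site 0 in part A removes the A/B symmetry;
# the optimal balanced split minimizes S_A, i.e. the mutual information 2*S_A.
best = min(
    (c for c in itertools.combinations(range(n), n // 2) if 0 in c),
    key=lambda c: entropy_of_subset(psi, c, n),
)
print("split:", sorted(best), "|", sorted(set(range(n)) - set(best)))
```

The paper's full procedure applies such splits sequentially to the resulting sub-clusters (and alternatively minimizes the maximal entanglement-entropy loss), which this sketch does not attempt.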
Related papers
- Optimal Tree Tensor Network Operators for Tensor Network Simulations: Applications to Open Quantum Systems [0.0]
Tree tensor network states (TTNS) decompose the system wavefunction into a product of low-rank tensors.
We present an algorithm that automatically constructs the optimal and exact tree tensor network operators (TTNO) for any sum-of-product symbolic quantum operator.
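To make the input of such an algorithm concrete, the sketch below shows what a sum-of-product symbolic operator is, materialized densely for a tiny chain; the term format and `materialize` are illustrative assumptions, and the paper's actual tree-network compression of this structure is not shown:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]]) / 2
sz = np.array([[1., 0.], [0., -1.]]) / 2
eye = np.eye(2)

def materialize(terms, n):
    """Dense matrix for a sum-of-product operator on n spin-1/2 sites.
    `terms` is a list of (coeff, {site: local_operator}) pairs."""
    dim = 2**n
    h = np.zeros((dim, dim))
    for coeff, ops in terms:
        m = np.array([[1.0]])
        for site in range(n):
            m = np.kron(m, ops.get(site, eye))  # identity on untouched sites
        h += coeff * m
    return h

# Transverse-field Ising chain written as a sum of product terms.
n = 4
terms = [(-1.0, {i: sz, i + 1: sz}) for i in range(n - 1)]
terms += [(-0.5, {i: sx}) for i in range(n)]
h = materialize(terms, n)
print(h.shape, np.allclose(h, h.T))
```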
arXiv Detail & Related papers (2024-07-18T02:15:52Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural-network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
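The summary above does not spell out the architecture, so as a generic illustration of equivariance on a square lattice, the sketch below enforces invariance of a stencil under the dihedral group $D_4$ by group averaging; this is a standard construction, not necessarily the LENN one:

```python
import numpy as np

def d4_symmetrize(kernel):
    """Average a 2D stencil over the 8 elements of the square-lattice
    symmetry group D4 (rotations by 90 degrees and reflections)."""
    rots = [np.rot90(kernel, k) for k in range(4)]
    refl = [np.rot90(kernel[::-1], k) for k in range(4)]
    return sum(rots + refl) / 8.0

k = np.random.default_rng(0).normal(size=(3, 3))
ks = d4_symmetrize(k)
# The symmetrized stencil is invariant under every D4 element.
assert np.allclose(ks, np.rot90(ks)) and np.allclose(ks, ks[::-1])
```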
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Visualization of Entanglement Geometry by Structural Optimization of Tree Tensor Network [0.0]
We propose a structural optimization algorithm for tree tensor networks.
We show that the algorithm can successfully visualize the spatial pattern of spin-singlet pairs in the ground state.
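A common way to expose such spatial patterns, plausibly in the spirit of this visualization, is the two-site mutual information $I(i{:}j)=S_i+S_j-S_{ij}$, in which singlet pairs appear as dominant entries. A minimal sketch with illustrative helper names:

```python
import numpy as np

def reduced_entropy(psi, keep, n):
    """Entropy of the reduced state on sites `keep` of a pure n-qubit state,
    computed via SVD across the keep/rest cut."""
    rest = [i for i in range(n) if i not in keep]
    m = psi.reshape([2] * n).transpose(list(keep) + rest)
    p = np.linalg.svd(m.reshape(2 ** len(keep), -1), compute_uv=False) ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def mutual_information_matrix(psi, n):
    """I(i:j) = S_i + S_j - S_ij; large entries flag singlet pairs."""
    s1 = [reduced_entropy(psi, [i], n) for i in range(n)]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = s1[i] + s1[j] - reduced_entropy(psi, [i, j], n)
    return mi

# Example: two disjoint singlets on 4 sites -> I(0:1) and I(2:3) dominate.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
psi = np.kron(singlet, singlet)  # sites ordered (0, 1, 2, 3)
print(np.round(mutual_information_matrix(psi, 4), 3))
```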
arXiv Detail & Related papers (2024-01-29T09:39:24Z)
- Hierarchical Multi-Marginal Optimal Transport for Network Alignment [52.206006379563306]
Multi-network alignment is an essential prerequisite for joint learning on multiple networks.
We propose a hierarchical multi-marginal optimal transport framework named HOT for multi-network alignment.
Our proposed HOT achieves significant improvements over the state-of-the-art in both effectiveness and scalability.
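As a primer on the optimal-transport building block (the paper's hierarchical multi-marginal formulation goes well beyond this two-network case), the sketch below aligns two small node sets with plain entropic OT via Sinkhorn iterations; the embeddings and names are synthetic assumptions:

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, iters=500):
    """Entropic-regularized optimal transport (Sinkhorn iterations).
    Returns a soft correspondence matrix between two node sets."""
    k = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (k.T @ u)
        u = a / (k @ v)
    return u[:, None] * k * v[None, :]

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))                               # network-1 node embeddings
y = x[[2, 0, 4, 1, 3]] + 0.01 * rng.normal(size=(5, 3))   # permuted, noisy copy
cost = ((x[:, None] - y[None, :]) ** 2).sum(-1)
plan = sinkhorn(cost, np.ones(5) / 5, np.ones(5) / 5)
print(plan.argmax(1))  # recovers the permutation [1, 3, 0, 4, 2]
```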
arXiv Detail & Related papers (2023-10-06T02:35:35Z)
- On Optimizing the Communication of Model Parallelism [74.15423270435949]
We study a novel and important communication pattern in large-scale model-parallel deep learning (DL): cross-mesh resharding.
In cross-mesh resharding, a sharded tensor needs to be sent from a source device mesh to a destination device mesh.
We make two contributions to address cross-mesh resharding: an efficient broadcast-based communication system and an "overlapping-friendly" pipeline schedule.
arXiv Detail & Related papers (2022-11-10T03:56:48Z)
- Automatic structural optimization of tree tensor networks [0.0]
We propose a TTN algorithm that enables us to automatically optimize the network structure by local reconnections of isometries.
We demonstrate that the entanglement structure embedded in the ground state of the system can be efficiently visualized as a perfect binary tree in the optimized TTN.
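One plausible reading of such a local reconnection move: among the three ways of pairing the four subtrees that meet at an internal bond, keep the pairing whose bond carries the least entanglement. A minimal sketch on a random four-leg tensor, with helper names that are mine rather than the paper's:

```python
import numpy as np

def bond_entropy(t4, pair):
    """Entanglement entropy across the bipartition of a 4-leg tensor's legs
    into `pair` versus the remaining two legs."""
    rest = [i for i in range(4) if i not in pair]
    dims = t4.shape
    m = t4.transpose(list(pair) + rest).reshape(dims[pair[0]] * dims[pair[1]], -1)
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p /= p.sum()
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

def best_reconnection(t4):
    """Local reconnection move: among the three pairings of four subtrees
    (each pairing contains leg 0), keep the one whose internal bond carries
    the least entanglement."""
    return min([(0, 1), (0, 2), (0, 3)], key=lambda pr: bond_entropy(t4, pr))

rng = np.random.default_rng(0)
t4 = rng.normal(size=(2, 2, 2, 2))
t4 /= np.linalg.norm(t4)
print("keep legs together:", best_reconnection(t4))
```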
arXiv Detail & Related papers (2022-09-07T14:51:39Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [0.0]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
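To illustrate what an $\ell_\infty$-norm box over-approximation is, the sketch below propagates an input box through affine and ReLU layers with standard interval arithmetic; this is a generic construction, not the paper's contraction-based embedded network:

```python
import numpy as np

def affine_box(w, b, lb, ub):
    """Tightest box image of {x : lb <= x <= ub} under x -> w @ x + b,
    using the positive/negative parts of w."""
    wp, wn = np.maximum(w, 0), np.minimum(w, 0)
    return wp @ lb + wn @ ub + b, wp @ ub + wn @ lb + b

def relu_box(lb, ub):
    return np.maximum(lb, 0), np.maximum(ub, 0)

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
w2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x0 = rng.normal(size=3)
lb, ub = x0 - 0.1, x0 + 0.1          # l_inf ball around the input
lb, ub = relu_box(*affine_box(w1, b1, lb, ub))
lb, ub = affine_box(w2, b2, lb, ub)
print("output box:", lb, ub)          # contains every reachable output
```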
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Efficient Simulation of Dynamics in Two-Dimensional Quantum Spin Systems with Isometric Tensor Networks [0.0]
We investigate the computational power of the recently introduced class of isometric tensor network states (isoTNSs).
We discuss several technical details regarding the implementation of isoTNS-based algorithms and compare different disentanglers.
We compute the dynamical spin structure factor of 2D quantum spin systems for two paradigmatic models.
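The isoTNS time evolution itself is beyond a short sketch, but the final post-processing step, turning real-space, real-time correlations into a dynamical structure factor, is a standard space-time Fourier transform. A sketch on synthetic single-mode data (all names and data are illustrative):

```python
import numpy as np

def structure_factor(corr, dt):
    """Dynamical structure factor S(k, w) from real-space, real-time
    correlations corr[t, r] = <S(r, t) S(0, 0)> via a space-time FFT."""
    n_t, n_r = corr.shape
    skw = np.abs(np.fft.fft2(corr))          # transform over time and space
    k = 2 * np.pi * np.fft.fftfreq(n_r)
    w = 2 * np.pi * np.fft.fftfreq(n_t, d=dt)
    return k, w, skw

# Synthetic single-mode data: a spin wave with dispersion w(k) = |sin k|.
n_r, n_t, dt = 32, 128, 0.1
r = np.arange(n_r)
t = np.arange(n_t)[:, None] * dt
k0 = 2 * np.pi * 4 / n_r
corr = np.cos(k0 * r[None, :] - np.abs(np.sin(k0)) * t)
k, w, skw = structure_factor(corr, dt)
print("peak at (freq-index, momentum-index):",
      np.unravel_index(skw.argmax(), skw.shape))
```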
arXiv Detail & Related papers (2021-12-15T19:00:05Z)
- From Tree Tensor Network to Multiscale Entanglement Renormalization Ansatz [0.0]
We introduce a new Tree Tensor Network (TTN) based tensor network state, dubbed the Fully-Augmented Tree Tensor Network (FATTN), by releasing a constraint in the Augmented Tree Tensor Network (ATTN).
When disentanglers are augmented in the physical layer of the TTN, the FATTN can capture more entanglement than the TTN and the ATTN.
Benchmark results on the ground-state energy of the transverse-field Ising model demonstrate the improved accuracy of the FATTN over the TTN and the ATTN.
arXiv Detail & Related papers (2021-10-17T11:16:38Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of arbitrary shape, as often arises in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
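To convey the flavor of a shared basis for many tensors (simplified here to same-size tensors and a plain SVD; the paper itself builds on tensor-ring factorizations and handles arbitrary shapes), consider:

```python
import numpy as np

def shared_basis(tensors, rank):
    """Represent many same-size tensors in one shared low-rank basis;
    an illustration of the idea, not the paper's exact construction."""
    stack = np.stack([t.ravel() for t in tensors])     # (num, numel)
    _, _, vt = np.linalg.svd(stack, full_matrices=False)
    basis = vt[:rank]                                  # shared basis rows
    coeffs = stack @ basis.T                           # per-tensor codes
    return coeffs, basis

rng = np.random.default_rng(0)
shape = (8, 8)
tensors = [rng.normal(size=shape) for _ in range(20)]
coeffs, basis = shared_basis(tensors, rank=10)
recon = (coeffs @ basis).reshape(-1, *shape)
err = np.linalg.norm(recon[0] - tensors[0]) / np.linalg.norm(tensors[0])
print("relative error of first tensor:", round(err, 3))
```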
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
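As a primer on what binarization does to the weights being searched over (a generic XNOR-Net-style recipe with per-channel scales, not the paper's evolutionary search itself):

```python
import numpy as np

def binarize(w):
    """XNOR-Net-style binarization: per-output-channel sign weights with a
    mean-absolute-value scale."""
    alpha = np.abs(w).mean(axis=tuple(range(1, w.ndim)), keepdims=True)
    return alpha * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 3, 3, 3))        # (out_ch, in_ch, kH, kW)
wb = binarize(w)
err = np.linalg.norm(w - wb) / np.linalg.norm(w)
print("values per channel:", np.unique(wb[0]).size, "rel err:", round(err, 2))
```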
arXiv Detail & Related papers (2020-05-13T13:25:51Z)