From Tree Tensor Network to Multiscale Entanglement Renormalization
Ansatz
- URL: http://arxiv.org/abs/2110.08794v2
- Date: Sun, 26 Jun 2022 02:35:02 GMT
- Title: From Tree Tensor Network to Multiscale Entanglement Renormalization
Ansatz
- Authors: Xiangjian Qian and Mingpu Qin
- Abstract summary: We introduce a new Tree Tensor Network (TTN) based TNS dubbed the Fully-Augmented Tree Tensor Network (FATTN), obtained by releasing a constraint in the Augmented Tree Tensor Network (ATTN).
When disentanglers are augmented in the physical layer of the TTN, FATTN can provide more entanglement than TTN and ATTN.
Benchmark results on the ground-state energy of the transverse Ising model are provided to demonstrate the improvement in accuracy of FATTN over TTN and ATTN.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor Network States (TNS) offer an efficient representation for the ground
state of quantum many-body systems and play an important role in their
simulation. Numerous TNS have been proposed in the past few decades.
However, due to the high cost of TNS for two-dimensional systems, a balance
between the encoded entanglement and the computational complexity of TNS is yet to
be reached. In this work we introduce a new Tree Tensor Network (TTN) based TNS
dubbed the Fully-Augmented Tree Tensor Network (FATTN), obtained by releasing the
constraint in the Augmented Tree Tensor Network (ATTN). When disentanglers are
augmented in the physical layer of the TTN, FATTN can provide more entanglement
than TTN and ATTN. At the same time, FATTN maintains the scaling of
computational cost with bond dimension found in TTN and ATTN. Benchmark results on
the ground-state energy of the transverse Ising model are provided to
demonstrate the improvement in accuracy of FATTN over TTN and ATTN. Moreover,
FATTN is quite flexible and can be constructed as an interpolation between
the Tree Tensor Network and the Multiscale Entanglement Renormalization Ansatz (MERA)
to reach a balance between the encoded entanglement and the computational cost.
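As a rough illustration of the construction above, the following minimal sketch (not the authors' code) builds a four-site TTN state and then inserts a single two-site disentangler in the physical layer, which is the basic move that distinguishes a FATTN-like state from a plain TTN. The bond dimension, the tensor layout, and the placement of the disentangler across the two tree branches are illustrative assumptions.

```python
import numpy as np

def random_isometry(d_in, d_out, rng):
    """Return a (d_out, d_in) matrix W with orthonormal rows (W @ W.T = I)."""
    a = rng.normal(size=(d_in, d_in))
    q, _ = np.linalg.qr(a)
    return q[:, :d_out].T

def random_unitary(d, rng):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

rng = np.random.default_rng(0)
d, chi = 2, 2  # physical dimension and tree bond dimension (illustrative)

# Plain TTN for 4 sites: two bottom isometries map site pairs (1,2) and (3,4)
# to bond indices; a normalized top tensor joins the two bonds.
w1 = random_isometry(d * d, chi, rng).reshape(chi, d, d)
w2 = random_isometry(d * d, chi, rng).reshape(chi, d, d)
top = rng.normal(size=(chi, chi))
top /= np.linalg.norm(top)

psi_ttn = np.einsum('ab,aij,bkl->ijkl', top, w1, w2)  # amplitudes psi[s1,s2,s3,s4]

# FATTN-like step: apply a two-site disentangler (a unitary) across the
# boundary between the two tree branches, i.e. on physical sites 2 and 3.
u = random_unitary(d * d, rng).reshape(d, d, d, d)  # (out2, out3, in2, in3)
psi_fattn = np.einsum('ijkl,pqjk->ipql', psi_ttn, u)

# Isometries and unitaries preserve the norm, so both states are normalized.
print(np.linalg.norm(psi_ttn), np.linalg.norm(psi_fattn))
```

In an actual FATTN calculation the isometries, top tensor, and disentanglers would be optimized variationally rather than drawn at random; the sketch only shows where the extra disentangler sits relative to the tree.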
Related papers
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Entanglement bipartitioning and tree tensor networks [0.0]
We propose an entanglement bipartitioning approach to design an optimal network structure of the tree-tensor-network (TTN) for quantum many-body systems.
We demonstrate that entanglement bipartitioning of up to 16 sites gives rise to nontrivial tree network structures for $S=1/2$ Heisenberg models in one and two dimensions.
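A natural building block behind such a scheme is the bipartite entanglement entropy of candidate cuts. The hedged sketch below (not the paper's code) scores two different 2+2 bipartitions of a random four-site state, the kind of comparison an entanglement-bipartitioning procedure could use when choosing a tree structure.

```python
import numpy as np

def bipartition_entropy(psi, sites_a, n_sites, d=2):
    """Von Neumann entanglement entropy of subsystem `sites_a` for a state on n_sites qudits."""
    sites_b = [s for s in range(n_sites) if s not in sites_a]
    t = psi.reshape([d] * n_sites).transpose(sites_a + sites_b)
    m = t.reshape(d ** len(sites_a), d ** len(sites_b))
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
psi = rng.normal(size=2 ** 4) + 1j * rng.normal(size=2 ** 4)
psi /= np.linalg.norm(psi)

# Compare two ways of cutting 4 sites into 2 + 2; a bipartitioning scheme
# would favor cuts with lower entanglement when building the tree.
for cut in ([0, 1], [0, 2]):
    print(cut, bipartition_entropy(psi, cut, n_sites=4))
```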
arXiv Detail & Related papers (2022-10-21T05:36:03Z)
- Automatic structural optimization of tree tensor networks [0.0]
We propose a TTN algorithm that enables us to automatically optimize the network structure by local reconnections of isometries.
We demonstrate that the entanglement structure embedded in the ground-state of the system can be efficiently visualized as a perfect binary tree in the optimized TTN.
arXiv Detail & Related papers (2022-09-07T14:51:39Z)
- Tensor Network States with Low-Rank Tensors [6.385624548310884]
We introduce the idea of imposing low-rank constraints on the tensors that compose the tensor network.
With this modification, the time and memory complexities for the network optimization can be substantially reduced.
We find that choosing the tensor rank $r$ to be on the order of the bond dimension $m$ is sufficient to obtain high-accuracy ground-state approximations.
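A hedged sketch of the low-rank idea, using a generic five-index tensor of a 2D tensor network as an example: the index grouping, the choice $r = m$, and the use of a plain truncated SVD are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

d, m = 2, 6            # physical and bond dimensions (illustrative)
r = m                  # tensor rank chosen on the order of the bond dimension
rng = np.random.default_rng(0)

# Generic tensor T[physical, up, left, down, right] of a 2D TNS.
T = rng.normal(size=(d, m, m, m, m))

# Impose the low-rank constraint across the (physical, up, left) | (down, right) cut.
mat = T.reshape(d * m * m, m * m)
u, s, vt = np.linalg.svd(mat, full_matrices=False)
A = u[:, :r] * s[:r]   # (d*m*m, r) factor
B = vt[:r, :]          # (r, m*m) factor

print("dense parameters:   ", T.size)           # d * m**4
print("low-rank parameters:", A.size + B.size)  # (d*m*m + m*m) * r
```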
arXiv Detail & Related papers (2022-05-30T17:58:16Z)
- Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on
Riemannian Gradient Descent With Illustrations of Speech Processing [74.31472195046099]
We exploit a low-rank tensor-train deep neural network (TT-DNN) to build an end-to-end deep learning pipeline, namely LR-TT-DNN.
A hybrid model combining LR-TT-DNN with a convolutional neural network (CNN) is set up to boost the performance.
Our empirical evidence demonstrates that the LR-TT-DNN and CNN+(LR-TT-DNN) models with fewer model parameters can outperform the TT-DNN and CNN+(TT-DNN) counterparts.
arXiv Detail & Related papers (2022-03-11T15:55:34Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative
Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
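The pairing of an LNS with multiplicative updates can be illustrated with a small sketch: a multiplicative step on a weight is an additive step on its log-magnitude, so weights stored as (sign, log-magnitude) pairs never leave the LNS representation. The generic update below is a stand-in, not the exact LNS-Madam rule.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
g_hat = np.sign(rng.normal(size=5))   # some normalized gradient signal (illustrative)
lr = 0.1

# Ordinary (linear-domain) multiplicative update.
w_new = w * 2.0 ** (-lr * g_hat)

# Same update carried out purely in the log domain.
sign = np.sign(w)
log_mag = np.log2(np.abs(w))
log_mag_new = log_mag - lr * g_hat    # the multiplication became an addition
w_from_lns = sign * 2.0 ** log_mag_new

print(np.allclose(w_new, w_from_lns))   # True: the two updates coincide
```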
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
- Block-term Tensor Neural Networks [29.442026567710435]
We show that block-term tensor layers (BT-layers) can be easily adapted to neural network models, such as CNNs and RNNs.
BT-layers in CNNs and RNNs can achieve a very large compression ratio on the number of parameters while preserving or improving the representation power of the original DNNs.
arXiv Detail & Related papers (2020-10-10T09:58:43Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
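The source of such savings can be made concrete with a back-of-the-envelope sketch: a dense weight matrix is viewed as a higher-order tensor and stored as a tensor-train (TT) of small cores. The mode sizes and TT-rank below are illustrative assumptions, not the paper's configuration.

```python
# Parameter count of a 1024 x 1024 dense matrix vs. a TT-matrix format.
in_modes = [4, 4, 4, 4, 4]    # 4**5 = 1024 input features
out_modes = [4, 4, 4, 4, 4]   # 4**5 = 1024 output features
tt_rank = 8

dense_params = 1
for i, o in zip(in_modes, out_modes):
    dense_params *= i * o      # = 1024 * 1024 weights

tt_params = 0
ranks = [1] + [tt_rank] * (len(in_modes) - 1) + [1]
for k, (i, o) in enumerate(zip(in_modes, out_modes)):
    tt_params += ranks[k] * i * o * ranks[k + 1]   # core G_k has shape (r_{k-1}, i, o, r_k)

print("dense:", dense_params)
print("tensor-train:", tt_params)
print("compression:", dense_params / tt_params)   # roughly two to three orders of magnitude
```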
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- Randomly Weighted, Untrained Neural Tensor Networks Achieve Greater
Relational Expressiveness [3.5408022972081694]
We propose Randomly Weighted Tensor Networks (RWTNs), which incorporate randomly drawn, untrained tensors into a tensor network with a trained decoder network.
We show that RWTNs meet or surpass the performance of traditionally trained Logic Tensor Networks (LTNs) for Semantic Image Interpretation (SII) tasks.
We demonstrate that RWTNs can achieve similar performance as LTNs for object classification while using fewer parameters for learning.
arXiv Detail & Related papers (2020-06-01T19:36:29Z)