Automatic Structural Search of Tensor Network States including Entanglement Renormalization
- URL: http://arxiv.org/abs/2405.06534v1
- Date: Fri, 10 May 2024 15:24:10 GMT
- Title: Automatic Structural Search of Tensor Network States including Entanglement Renormalization
- Authors: Ryo Watanabe, Hiroshi Ueda
- Abstract summary: Tensor network (TN) states, including entanglement renormalization (ER), can encompass a wider variety of entangled states.
No proposal has yet demonstrated a structural search of ER, owing to its high computational cost and the lack of flexibility in existing algorithms.
In this study, we conducted an optimal structural search of TNs, including ER, based on the reconstruction of their local structures with respect to variational energy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor network (TN) states, including entanglement renormalization (ER), can encompass a wider variety of entangled states. When the entanglement structure of the quantum state of interest is non-uniform in real space, accurately representing the state with a limited number of degrees of freedom hinges on appropriately configuring the TN to align with the entanglement pattern. However, no proposal has yet demonstrated a structural search of ER, owing to its high computational cost and the lack of flexibility in its algorithm. In this study, we conducted an optimal structural search of TNs, including ER, based on the reconstruction of their local structures with respect to variational energy. First, we demonstrated that our algorithm for the spin-$1/2$ tetramer-singlets model can compute the exact ground-state energy using the multi-scale entanglement renormalization ansatz (MERA) structure as the initial TN structure. Subsequently, we applied our algorithm to the random XY models with two initial structures: MERA and the suitable structure derived from the strong-disorder renormalization group. We found that, in both cases, our algorithm achieves improvements in variational energy, fidelity, and entanglement entropy. The degree of improvement in these quantities is greater in the latter case than in the former, suggesting that utilizing an existing TN design method as a preprocessing step is important for maximizing our algorithm's performance.
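As a rough illustration of the local-reconstruction idea (our own sketch, not the authors' code: the random state, the candidate pairings, and the entropy-based selection rule are all stand-ins), the snippet below compares two local pairings of a four-site block and keeps the one with the lower bipartition entanglement entropy:

```python
# Illustrative sketch: choose between two local tree-tensor-network
# pairings of a 4-site block by comparing the entanglement entropy of
# the corresponding bipartitions. A structural search repeats such
# local comparisons across the network.
import numpy as np

def bipartition_entropy(psi, perm, d=2):
    """Entanglement entropy of a 4-site state across the cut
    (perm[0], perm[1]) | (perm[2], perm[3])."""
    t = psi.reshape([d] * 4).transpose(perm).reshape(d * d, d * d)
    s = np.linalg.svd(t, compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-12]                      # drop numerical zeros
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
# A random 4-site state stands in for the local wavefunction patch.
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)

# Candidate local structures: which pair of sites is grouped first.
candidates = {"(12)(34)": (0, 1, 2, 3), "(13)(24)": (0, 2, 1, 3)}
entropies = {k: bipartition_entropy(psi, p) for k, p in candidates.items()}
best = min(entropies, key=entropies.get)
print(f"chosen local structure: {best}")
```

In the actual algorithm the local comparisons are driven by variational energy and swept over the whole network; this toy uses the entanglement entropy of a random state purely to show the selection mechanics.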
Related papers
- Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices [88.33936714942996]
We present a unifying framework that enables searching among all linear operators expressible via an Einstein summation.
We show that differences in the compute-optimal scaling laws are mostly governed by a small number of variables.
We find that a Mixture-of-Experts (MoE) built from these structured operators learns an MoE in every single linear layer of the model, including the projection layers in the attention blocks.
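As a concrete (hypothetical) instance of a linear operator expressible via an Einstein summation, a Kronecker-structured layer can be applied without ever materializing the dense matrix:

```python
# Hedged illustration of "linear operators via Einstein summation":
# a Kronecker-structured map y = (A ⊗ B) x evaluated by einsum,
# avoiding the dense (a*b) x (a*b) matrix.
import numpy as np

rng = np.random.default_rng(1)
a, b = 4, 3
A = rng.normal(size=(a, a))
B = rng.normal(size=(b, b))
x = rng.normal(size=a * b)

# Structured evaluation: reshape x to (a, b), contract each factor.
y_struct = np.einsum("ij,kl,jl->ik", A, B, x.reshape(a, b)).reshape(a * b)

# Dense reference: materialize A ⊗ B explicitly.
y_dense = np.kron(A, B) @ x
print(np.allclose(y_struct, y_dense))  # prints True: both agree
```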
arXiv Detail & Related papers (2024-10-03T00:44:50Z) - Hybrid Ground-State Quantum Algorithms based on Neural Schrödinger Forging [0.0]
Entanglement-forging-based variational algorithms leverage the bipartition of quantum systems.
We propose a new method for entanglement forging employing generative neural networks to identify the most pertinent bitstrings.
We show that the proposed algorithm achieves comparable or superior performance compared to the existing standard implementation of entanglement forging.
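A toy sketch of the bitstring-selection idea (our own simplification: a product-Bernoulli distribution stands in for the generative neural network, and `site_probs` is a hypothetical set of learned parameters):

```python
# Rank all half-system bitstrings by product-Bernoulli probability and
# keep the k most likely ones as the "most pertinent" bitstrings.
import itertools
import numpy as np

def top_bitstrings(probs, k):
    """Return the k most probable bitstrings under independent
    per-site probabilities probs[i] of measuring 1."""
    n = len(probs)
    scored = []
    for bits in itertools.product((0, 1), repeat=n):
        p = np.prod([probs[i] if b else 1 - probs[i]
                     for i, b in enumerate(bits)])
        scored.append((p, bits))
    scored.sort(reverse=True)
    return [bits for _, bits in scored[:k]]

site_probs = [0.9, 0.1, 0.8, 0.5]   # hypothetical learned parameters
print(top_bitstrings(site_probs, 3))
```

A real implementation would sample from the trained generative model instead of enumerating all $2^n$ bitstrings, which is only feasible for tiny systems.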
arXiv Detail & Related papers (2023-07-05T20:06:17Z) - Variational quantum eigensolver with embedded entanglement using a tensor-network ansatz [0.8009842832476994]
We introduce a tensor network (TN) scheme into the entanglement augmentation process of the synergistic optimization framework.
We show that the framework can be used to build its process systematically for inhomogeneous systems.
The improvement of entanglements for MERA in all-to-all coupled inhomogeneous systems, enhancement, and potential synergistic applications are also discussed.
arXiv Detail & Related papers (2023-05-11T02:45:52Z) - Automatic structural optimization of tree tensor networks [0.0]
We propose a TTN algorithm that enables us to automatically optimize the network structure by local reconnections of isometries.
We demonstrate that the entanglement structure embedded in the ground-state of the system can be efficiently visualized as a perfect binary tree in the optimized TTN.
arXiv Detail & Related papers (2022-09-07T14:51:39Z) - Permutation Search of Tensor Network Structures via Local Sampling [27.155329364896144]
In this paper, we consider a practical variant of TN-SS, dubbed TN permutation search (TN-PS).
We propose a practically-efficient algorithm to resolve the problem of TN-PS.
Numerical results demonstrate that the new algorithm can reduce the required model size of TNs in extensive benchmarks.
arXiv Detail & Related papers (2022-06-14T05:12:49Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics [6.940097162264939]
Randomness makes SCNs more likely to generate approximately linearly correlated nodes that are redundant and of low quality.
A fundamental principle in machine learning is that a model with fewer parameters generalizes better.
This paper proposes an orthogonal SCN, termed OSCN, to filter out the low-quality hidden nodes and reduce the network structure.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight-pruning methods can hardly deliver true inference acceleration.
We propose a generalized weight-unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - EBM-Fold: Fully-Differentiable Protein Folding Powered by Energy-based Models [53.17320541056843]
We propose a fully-differentiable approach for protein structure optimization, guided by a data-driven generative network.
Our EBM-Fold approach can efficiently produce high-quality decoys, compared against traditional Rosetta-based structure optimization routines.
arXiv Detail & Related papers (2021-05-11T03:40:29Z) - Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
For modeling the discrete search space, we apply a new continuous relaxation on the discrete search spaces to build a hierarchical mixture of network-path, cell-operations, and kernel-width.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
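The continuous-relaxation idea can be sketched in a few lines (our illustration, not the paper's supernet: `ops` and `alpha` are hypothetical stand-ins for cell operations and architecture parameters): a softmax over architecture logits mixes the candidate operations, making the discrete choice differentiable.

```python
# Minimal continuous relaxation of a discrete operation choice:
# mix candidate ops with softmax weights instead of picking one.
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Candidate "operations" on an input vector (stand-ins for cell ops).
ops = [lambda x: x, lambda x: np.tanh(x), lambda x: np.maximum(x, 0.0)]
alpha = np.array([0.2, 1.5, -0.3])   # learnable architecture logits

x = np.array([-1.0, 0.5, 2.0])
w = softmax(alpha)
mixed = sum(wi * op(x) for wi, op in zip(w, ops))  # relaxed output
chosen = ops[int(np.argmax(alpha))]                # discretized choice
print(w, mixed, chosen(x))
```

During search, gradients flow into `alpha` through the mixed output; at the end, the architecture is discretized by keeping the highest-weight operation.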
arXiv Detail & Related papers (2021-01-17T12:19:49Z) - RNA Secondary Structure Prediction By Learning Unrolled Algorithms [70.09461537906319]
In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction.
The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled algorithm for constrained programming as the template for deep architectures to enforce constraints.
With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold.
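A heavily simplified sketch of the constraint-enforcement step (our own construction, not E2Efold's unrolled updates): symmetry, a minimum loop length, and one partner per base can be imposed on raw pairing scores as follows.

```python
# Map raw pairing scores to a 0/1 base-pairing matrix that satisfies
# hard constraints: symmetric, no sharp hairpin loops, at most one
# partner per base (greedy matching, highest scores first).
import numpy as np

def project_pairing(scores, min_loop=3):
    n = scores.shape[0]
    s = (scores + scores.T) / 2              # enforce symmetry
    for i in range(n):                       # forbid sharp loops
        lo, hi = max(0, i - min_loop), min(n, i + min_loop + 1)
        s[i, lo:hi] = -np.inf
    pairs = np.zeros((n, n), dtype=int)
    used = set()
    order = np.dstack(np.unravel_index(np.argsort(-s, axis=None),
                                       s.shape))[0]
    for i, j in order:
        if s[i, j] <= 0 or i in used or j in used:
            continue
        pairs[i, j] = pairs[j, i] = 1
        used.update((i, j))
    return pairs

rng = np.random.default_rng(2)
P = project_pairing(rng.normal(size=(10, 10)))
print(P.sum(axis=0))   # each base has at most one partner
```

E2Efold instead enforces such constraints softly inside a differentiable unrolled optimization; this hard greedy projection only conveys what the constraints are.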
arXiv Detail & Related papers (2020-02-13T23:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.