Variationally optimizing infinite projected entangled-pair states at large bond dimensions: A split-CTMRG approach
- URL: http://arxiv.org/abs/2502.10298v1
- Date: Fri, 14 Feb 2025 16:59:33 GMT
- Title: Variationally optimizing infinite projected entangled-pair states at large bond dimensions: A split-CTMRG approach
- Authors: Jan Naumann, Erik Lennart Weerda, Jens Eisert, Matteo Rizzi, Philipp Schmoll
- Abstract summary: We introduce an alternative "split-CTMRG" algorithm, which maintains separate PEPS layers and leverages new environment tensors, reducing computational complexity while preserving accuracy.
Benchmarks on quantum lattice models demonstrate substantial speedups for variational energy optimization, rendering this method valuable for large-scale PEPS simulations.
- Score: 0.2796197251957244
- License:
- Abstract: Projected entangled-pair states (PEPS) have become a powerful tool for studying quantum many-body systems in the condensed matter and quantum materials context, particularly with advances in variational energy optimization methods. A key challenge within this framework is the computational cost associated with the contraction of the two-dimensional lattice, crucial for calculating state vector norms and expectation values. The conventional approach, using the corner transfer matrix renormalization group (CTMRG), involves combining two tensor network layers, resulting in significant time and memory demands. In this work, we introduce an alternative "split-CTMRG" algorithm, which maintains separate PEPS layers and leverages new environment tensors, reducing computational complexity while preserving accuracy. Benchmarks on quantum lattice models demonstrate substantial speedups for variational energy optimization, rendering this method valuable for large-scale PEPS simulations.
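To make the cost argument concrete, the following minimal NumPy sketch (an illustration under assumed tensor shapes, not the authors' implementation) contrasts the fused double-layer object that the conventional CTMRG contraction works with against the two separate single-layer tensors that a split scheme keeps around:

```python
# Minimal sketch, assuming a generic PEPS tensor of shape (phys, up, right, down, left);
# this illustrates the D -> D^2 leg blow-up, not the split-CTMRG algorithm itself.
import numpy as np

d, D = 2, 6                          # physical and virtual bond dimensions
A = np.random.rand(d, D, D, D, D)    # single PEPS tensor

# Conventional approach: fuse ket and bra into one double-layer tensor by
# contracting the physical index. Every virtual leg becomes D*D wide.
double = np.einsum('purdl,pURDL->uUrRdDlL', A, A.conj())
double = double.reshape(D * D, D * D, D * D, D * D)
print('double-layer tensor entries:', double.size)   # D^8 = 1,679,616

# Split picture (schematic): ket and bra are stored separately and contracted
# with the environment one layer at a time, so no D^2-wide legs are ever formed.
print('two single-layer tensors:   ', 2 * A.size)     # 2*d*D^4 = 5,184
```

The same D-to-D^2 widening enters every contraction of the conventional CTMRG environment, which is where a scheme with separate layers and additional environment tensors can save time and memory.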
Related papers
- Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI).
Our approach generates a layered-structured ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines.
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
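As a rough, self-contained illustration of the pair-selection idea (the greedy matching and the random stand-in QMI matrix below are assumptions made for the sketch, not the Multi-QIDA procedure itself):

```python
# Illustrative sketch: build ansatz layers by greedily picking disjoint qubit
# pairs with the largest (approximate) quantum mutual information values.
import numpy as np

def qmi_layers(qmi, n_layers=2):
    """Return n_layers lists of disjoint qubit pairs, highest QMI first."""
    n = qmi.shape[0]
    ranked = sorted(((qmi[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
                    reverse=True)
    layers = []
    for _ in range(n_layers):
        used, layer = set(), []
        for value, i, j in ranked:
            already = any((i, j) in prev for prev in layers)
            if i not in used and j not in used and not already:
                layer.append((i, j))
                used.update((i, j))
        layers.append(layer)
    return layers

qmi = np.random.rand(6, 6)
qmi = (qmi + qmi.T) / 2               # stand-in for a symmetric QMI matrix
print(qmi_layers(qmi))                # e.g. [[(0, 4), (1, 3), (2, 5)], [...]]
```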
arXiv Detail & Related papers (2024-08-05T17:07:08Z)
- Scaling of contraction costs for entanglement renormalization algorithms including tensor Trotterization and variational Monte Carlo [0.0]
We investigate whether tensor Trotterization and/or variational Monte Carlo sampling can lead to quantum-inspired classical MERA algorithms.
Algorithmic phase diagrams indicate the best MERA method depending on the scaling of the energy accuracy and the optimal number of Trotter steps with the bond dimension.
arXiv Detail & Related papers (2024-07-30T17:54:15Z)
- A self-consistent field approach for the variational quantum eigensolver: orbital optimization goes adaptive [52.77024349608834]
We present a self-consistent field (SCF) approach within the Adaptive Derivative-Assembled Problem-Tailored Ansatz Variational Quantum Eigensolver (ADAPT-VQE).
This framework is used for efficient quantum simulations of chemical systems on near-term quantum computers.
arXiv Detail & Related papers (2022-12-21T23:15:17Z)
- Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z)
- BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation [116.26521375592759]
Quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation.
Extreme quantization (1-bit weights and 1-bit activations) of compactly designed backbone architectures results in severe performance degradation.
This paper proposes a novel Quantization-Aware Training (QAT) method that effectively alleviates this degradation.
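For context, the baseline being pushed to the extreme here is standard weight binarization; a generic sketch (sign-plus-scale coding, not BiTAT's task-dependent aggregated transformation) of what 1-bit weights mean:

```python
# Generic sign-plus-scale binarization (illustration only, not the BiTAT method):
# each weight tensor is replaced by {-1, +1} entries times one real scale.
import numpy as np

def binarize(w):
    """Return 1-bit weights and the per-tensor scale minimizing the L2 error."""
    scale = np.mean(np.abs(w))        # optimal scale for a sign() code
    return np.sign(w), scale

w = np.random.randn(4, 4)
w_bin, alpha = binarize(w)
print('reconstruction error:', np.linalg.norm(w - alpha * w_bin))
```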
arXiv Detail & Related papers (2022-07-04T13:25:49Z)
- LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models [9.727062803700264]
We introduce LUT-GEMM, an efficient kernel for quantized matrix multiplication.
LUT-GEMM eliminates the resource-intensive dequantization process and reduces computational costs.
We show experimentally that when applied to the OPT-175B model with 3-bit quantization, LUT-GEMM substantially accelerates token generation latency.
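The table-lookup idea can be sketched in a few lines of NumPy (the 4-element grouping, the {-1, +1} weight code, and all names below are illustrative assumptions, not the paper's GPU kernel): partial dot products over small input groups are tabulated once per input vector, and each row of binary-coded weights then only indexes the table, so the weights are never dequantized inside the multiplication loop.

```python
# Hedged sketch of LUT-based matrix-vector multiplication with {-1,+1} weight codes.
import numpy as np

def lut_matvec(bit_codes, x, group=4):
    """y = W @ x where W's rows are stored as 0/1 sign bits (1 -> +1, 0 -> -1)."""
    n_out, n_in = bit_codes.shape
    y = np.zeros(n_out)
    for g0 in range(0, n_in, group):
        xg = x[g0:g0 + group]
        # Tabulate the dot product of xg with every possible sign pattern.
        patterns = np.array([[1 if (p >> b) & 1 else -1 for b in range(len(xg))]
                             for p in range(2 ** len(xg))])
        lut = patterns @ xg                                 # 2**group partial sums
        # Each output row just looks up its packed bit pattern.
        codes = bit_codes[:, g0:g0 + group] @ (1 << np.arange(len(xg)))
        y += lut[codes]
    return y

rng = np.random.default_rng(0)
W_bits = rng.integers(0, 2, size=(3, 8))                    # 1 -> +1, 0 -> -1
x = rng.standard_normal(8)
print(np.allclose(lut_matvec(W_bits, x), (2 * W_bits - 1) @ x))   # True
```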
arXiv Detail & Related papers (2022-06-20T03:48:17Z)
- Collaborative Intelligent Reflecting Surface Networks with Multi-Agent Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain 81.29% top-1 accuracy using the DeiT-B model on the ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
- Reduced Density Matrix Sampling: Self-consistent Embedding and Multiscale Electronic Structure on Current Generation Quantum Computers [1.3488660476261511]
We investigate fully self-consistent multiscale quantum-classical algorithms on current generation superconducting quantum computers.
We show that these self-consistent algorithms are indeed highly robust, even in the presence of significant noise on quantum hardware.
arXiv Detail & Related papers (2021-04-12T14:57:51Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of noisy intermediate-scale quantum (NISQ) devices.
We propose a strategy for the ansätze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
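Schematically, optimizing one block of parameters at a time looks like the toy sketch below (the cost function and fixed block schedule are assumptions made for illustration; the actual PECT algorithm additionally prunes circuit parameters adaptively, per the paper's title):

```python
# Toy illustration of sequential block-wise optimization of variational parameters.
import numpy as np
from scipy.optimize import minimize

def cost(theta):                       # stand-in for a variational energy
    return np.sum(np.sin(theta) ** 2) + 0.1 * np.sum(theta ** 2)

theta = np.random.uniform(-np.pi, np.pi, size=8)
for block in np.array_split(np.arange(theta.size), 4):    # four blocks of two angles
    def blocked(sub, block=block):
        trial = theta.copy()
        trial[block] = sub
        return cost(trial)
    theta[block] = minimize(blocked, theta[block]).x       # optimize only this block
print('final cost:', cost(theta))
```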
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Efficient 2D Tensor Network Simulation of Quantum Systems [6.074275058563179]
2D tensor networks such as Projected Entangled Pair States (PEPS) are well-suited for key classes of physical systems and quantum circuits.
We propose new algorithms and software abstractions for PEPS-based methods, accelerating the bottleneck operation of contraction and refactorization of a subnetwork.
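As a generic, NumPy-only illustration of why this contraction is the natural bottleneck to attack (the shapes and the five-tensor example are assumptions, not the paper's software), the order in which a small subnetwork is contracted already changes the leading cost:

```python
# Contraction-order sensitivity for a small tensor subnetwork: applying one matrix
# to each leg of a four-leg tensor costs O(D^8) done naively as a single loop,
# but only O(D^5) with a good pairwise contraction order.
import numpy as np

D = 16
A = np.random.rand(D, D, D, D)                       # four-leg bulk tensor
Ms = [np.random.rand(D, D) for _ in range(4)]        # one matrix per leg

path, report = np.einsum_path('ea,fb,abcd,gc,hd->efgh',
                              Ms[0], Ms[1], A, Ms[2], Ms[3],
                              optimize='optimal')
print(report)    # reports naive vs optimized FLOP counts and the chosen path
```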
arXiv Detail & Related papers (2020-06-26T22:36:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.