Variationally optimizing infinite projected entangled-pair states at large bond dimensions: A split corner transfer matrix renormalization group approach
- URL: http://arxiv.org/abs/2502.10298v2
- Date: Sat, 26 Apr 2025 13:32:13 GMT
- Title: Variationally optimizing infinite projected entangled-pair states at large bond dimensions: A split corner transfer matrix renormalization group approach
- Authors: Jan Naumann, Erik Lennart Weerda, Jens Eisert, Matteo Rizzi, Philipp Schmoll
- Abstract summary: We introduce an alternative "split-CTMRG" algorithm, which maintains separate PEPS layers and leverages new environment tensors, reducing computational complexity while preserving accuracy. Benchmarks on quantum lattice models demonstrate substantial speedups for variational energy optimization, rendering this method valuable for large-scale PEPS simulations.
- Score: 0.2796197251957244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Projected entangled-pair states (PEPS) have become a powerful tool for studying quantum many-body systems in the condensed matter and quantum materials context, particularly with advances in variational energy optimization methods. A key challenge within this framework is the computational cost associated with the contraction of the two-dimensional lattice, crucial for calculating state vector norms and expectation values. The conventional approach, using the corner transfer matrix renormalization group (CTMRG), involves combining two tensor network layers, resulting in significant time and memory demands. In this work, we introduce an alternative "split-CTMRG" algorithm, which maintains separate PEPS layers and leverages new environment tensors, reducing computational complexity while preserving accuracy. Benchmarks on quantum lattice models demonstrate substantial speedups for variational energy optimization, rendering this method valuable for large-scale PEPS simulations.
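To make the contraction bottleneck concrete, here is a minimal numpy sketch (illustrative only, not the authors' implementation) of why fusing the bra and ket PEPS layers is costly: the fused double-layer tensor carries virtual legs of dimension D^2, so its size grows as D^8, while each separate layer only has d*D^4 entries.

```python
# Illustrative sketch (not the paper's code): cost of fusing PEPS layers.
import numpy as np

d, D = 2, 4  # physical dimension and virtual bond dimension (toy values)

# A single PEPS tensor with legs (physical, up, left, down, right)
A = np.random.rand(d, D, D, D, D)

# Conventional CTMRG first contracts bra and ket over the physical leg,
# producing a double-layer tensor whose four virtual legs have dimension D**2.
E = np.einsum('puldr,pULDR->uUlLdDrR', A, A.conj())
E = E.reshape(D**2, D**2, D**2, D**2)

print(A.size)  # d * D**4 entries in a single layer
print(E.size)  # D**8 entries in the fused double layer
# A split-CTMRG-style scheme instead contracts environment tensors with A and
# A.conj() one layer at a time, never materializing the D**8-sized object.
```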
Related papers
- BHViT: Binarized Hybrid Vision Transformer [53.38894971164072]
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNNs).
We propose BHViT, a binarization-friendly hybrid ViT architecture, together with its fully binarized model, guided by three important observations.
Our proposed algorithm achieves SOTA performance among binary ViT methods.
arXiv Detail & Related papers (2025-03-04T08:35:01Z) - Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI).
Our approach generates a layered ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines (see the sketch after this entry).
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
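As a rough illustration of the layer construction described above (hypothetical helper names; not the paper's actual procedure), one could rank qubit pairs by their estimated QMI and greedily assign disjoint pairs to successive layers:

```python
# Hypothetical sketch of QMI-driven ansatz layering (not the paper's code).
import numpy as np

def build_layers(qmi, n_layers):
    """Greedily group qubit pairs into layers by descending QMI.

    qmi: symmetric (n, n) matrix of approximate quantum mutual information.
    Returns a list of layers; each layer is a list of disjoint qubit pairs.
    """
    n = qmi.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    pairs.sort(key=lambda p: qmi[p], reverse=True)  # strongest correlations first
    layers = []
    for _ in range(n_layers):
        used, layer = set(), []
        for i, j in pairs:
            if i not in used and j not in used:
                layer.append((i, j))
                used.update((i, j))
        layers.append(layer)
        pairs = [p for p in pairs if p not in set(layer)]  # each pair used once
    return layers

rng = np.random.default_rng(0)
M = rng.random((6, 6)); M = (M + M.T) / 2  # stand-in for an estimated QMI matrix
print(build_layers(M, n_layers=2))
```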
arXiv Detail & Related papers (2024-08-05T17:07:08Z) - Scaling of contraction costs for entanglement renormalization algorithms including tensor Trotterization and variational Monte Carlo [0.0]
We investigate whether tensor Trotterization and/or variational Monte Carlo sampling can lead to quantum-inspired classical MERA algorithms. Algorithmic phase diagrams indicate the best MERA method depending on the scaling of the energy accuracy and the optimal number of Trotter steps with the bond dimension.
arXiv Detail & Related papers (2024-07-30T17:54:15Z) - CBQ: Cross-Block Quantization for Large Language Models [66.82132832702895]
Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) with ultra-low costs.
We propose CBQ, a cross-block reconstruction-based PTQ method for LLMs.
CBQ employs a cross-block reconstruction scheme that establishes long-range dependencies across multiple blocks to minimize error accumulation.
arXiv Detail & Related papers (2023-12-13T07:56:27Z) - Two dimensional quantum lattice models via mode optimized hybrid CPU-GPU density matrix renormalization group method [0.0]
We present a hybrid numerical approach to simulate quantum many-body problems on two-dimensional quantum lattice models.
We demonstrate for the two-dimensional spinless fermion model and for the Hubbard model on a torus geometry that several orders of magnitude in computational time can be saved.
arXiv Detail & Related papers (2023-11-23T17:07:47Z) - An introduction to infinite projected entangled-pair state methods for variational ground state simulations using automatic differentiation [0.2796197251957244]
Tensor networks capture large classes of ground states of phases of quantum matter faithfully and efficiently.
In recent years, multiple proposals for the variational optimization of the quantum state have been put forward.
We review the state-of-the-art of the variational iPEPS framework, providing a detailed introduction to automatic differentiation.
arXiv Detail & Related papers (2023-08-23T18:03:14Z) - A self-consistent field approach for the variational quantum
eigensolver: orbital optimization goes adaptive [52.77024349608834]
We present a self-consistent field (SCF) approach within the Adaptive Derivative-Assembled Pseudo-Trotter ansatz Variational Quantum Eigensolver (ADAPT-VQE).
This framework is used for efficient quantum simulations of chemical systems on near-term quantum computers.
arXiv Detail & Related papers (2022-12-21T23:15:17Z) - Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
Tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs).
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z) - BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation [116.26521375592759]
Quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation.
Extreme quantization (1-bit weights/1-bit activations) of compactly designed backbone architectures results in severe performance degradation.
This paper proposes a novel Quantization-Aware Training (QAT) method that can effectively alleviate this degradation (a generic 1-bit scheme is sketched below).
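For context, a common 1-bit weight quantization scheme keeps only the sign of each weight plus a single per-tensor scale (this generic XNOR-Net-style form is shown for illustration and is not necessarily BiTAT's specific transformation):

```python
# Generic 1-bit weight quantization; illustration only, not BiTAT's method.
import numpy as np

def binarize(w):
    alpha = np.abs(w).mean()   # per-tensor scale fit to the weight magnitudes
    return alpha * np.sign(w)  # every weight becomes either -alpha or +alpha

w = np.random.randn(4, 4)
print(binarize(w))
```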
arXiv Detail & Related papers (2022-07-04T13:25:49Z) - Towards Mixed-Precision Quantization of Neural Networks via Constrained
Optimization [28.76708310896311]
We present a principled framework to solve the mixed-precision quantization problem.
We show that our method is derived in a principled manner and is substantially more computationally efficient.
arXiv Detail & Related papers (2021-10-13T08:09:26Z) - Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We obtain 81.29% top-1 accuracy with the DeiT-B model on the ImageNet dataset using roughly 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a training strategy for the parameterized ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Efficient 2D Tensor Network Simulation of Quantum Systems [6.074275058563179]
2D tensor networks such as Projected Entangled Pair States (PEPS) are well-suited for key classes of physical systems and quantum circuits.
We propose new algorithms and software abstractions for PEPS-based methods, accelerating the bottleneck operations of contraction and refactorization of a subnetwork.
arXiv Detail & Related papers (2020-06-26T22:36:56Z)