Sampling two-dimensional isometric tensor network states
- URL: http://arxiv.org/abs/2602.02245v1
- Date: Mon, 02 Feb 2026 15:54:25 GMT
- Title: Sampling two-dimensional isometric tensor network states
- Authors: Alec Dektor, Eugene Dumitrescu, Chao Yang,
- Abstract summary: We introduce two novel sampling algorithms for two-dimensional (2D) isometric tensor network states (isoTNS). The first algorithm performs independent sampling and yields a single configuration together with its associated probability. The second algorithm employs a greedy search strategy to identify K high-probability configurations and their corresponding probabilities.
- Score: 2.9461551992891057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sampling a quantum system's underlying probability distribution is an important computational task, e.g., for quantum advantage experiments and quantum Monte Carlo algorithms. Tensor networks are an invaluable tool for efficiently representing states of large quantum systems with limited entanglement. Algorithms for sampling one-dimensional (1D) tensor networks are well-established and utilized in several 1D tensor network methods. In this paper we introduce two novel sampling algorithms for two-dimensional (2D) isometric tensor network states (isoTNS) that can be viewed as extensions of algorithms for 1D tensor networks. The first algorithm we propose performs independent sampling and yields a single configuration together with its associated probability. The second algorithm employs a greedy search strategy to identify K high-probability configurations and their corresponding probabilities. Numerical results demonstrate the effectiveness of these algorithms across quantum states with varying entanglement and system size.
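The well-established 1D scheme the paper extends is sequential ("perfect") sampling of a matrix product state: conditioned on the outcomes already drawn, each site's marginal can be read off locally when the MPS is in right-canonical form. A minimal NumPy sketch of that 1D baseline (the tensor shapes and the function name `sample_mps` are illustrative assumptions, not the authors' isoTNS code):

```python
import numpy as np

def sample_mps(tensors, rng=None):
    """Draw one configuration and its probability from a right-canonical MPS.

    tensors: list of arrays of shape (Dl, d, Dr); the leftmost has Dl = 1,
    the rightmost Dr = 1, and each satisfies sum_s A[s] A[s]^dagger = I
    (right-canonical form), so conditional marginals can be read off locally.
    """
    rng = np.random.default_rng(rng)
    config, prob = [], 1.0
    v = np.ones(1)                                   # conditioned left boundary vector
    for A in tensors:
        w = np.einsum('l,lsr->sr', v, A)             # amplitudes per local outcome
        p = np.einsum('sr,sr->s', w, w.conj()).real  # p(s | previous outcomes)
        p /= p.sum()                                 # guard against round-off
        s = rng.choice(len(p), p=p)
        config.append(int(s))
        prob *= p[s]
        v = w[s] / np.linalg.norm(w[s])              # condition on outcome s
    return config, float(prob)
```

Like the paper's first algorithm, each call is an independent draw that returns both the configuration and its exact probability; the 2D isoTNS algorithms generalize this column-by-column conditioning.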
Related papers
- Tensor Network Formulation of Dequantized Algorithms for Ground State Energy Estimation [2.9436347471485558]
Dequantization algorithms play a central role in providing a clear theoretical framework to separate complexity of quantum and classical algorithms. Existing dequantized algorithms typically rely on sampling procedures, leading to prohibitively large computational overheads. We propose a tensor network-based dequantization framework for GSEE that eliminates the sampling process while preserving the complexity of prior dequantized algorithms.
arXiv Detail & Related papers (2025-12-15T17:07:04Z) - Tensor Network enhanced Dynamic Multiproduct Formulas [2.3249255788359813]
We introduce a novel algorithm that combines tensor networks and quantum computation to produce results more accurate than what could be achieved by either method used in isolation.
Our algorithm is based on multiproduct formulas (MPF) - a technique that linearly combines Trotter product formulas to reduce algorithmic error.
We present a detailed error analysis of the algorithm and demonstrate the full workflow on a one-dimensional quantum simulation problem on $50$ qubits using two IBM quantum computers.
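The multiproduct construction mentioned above, in its standard well-conditioned form (generic notation assumed here, not necessarily the exact variant of the paper), linearly combines powers of the second-order Trotter formula $S_2$ so that the Vandermonde conditions on the coefficients cancel low-order error terms:

```latex
% Multiproduct formula: a linear combination of powers of the second-order
% Trotter formula S_2(t); the constraints cancel error orders up to 2m-1.
M_m(t) \;=\; \sum_{j=1}^{m} a_j \,\bigl[S_2(t/k_j)\bigr]^{k_j},
\qquad
\sum_{j=1}^{m} a_j = 1,
\qquad
\sum_{j=1}^{m} a_j\, k_j^{-2q} = 0 \quad (q = 1,\dots,m-1).
```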
arXiv Detail & Related papers (2024-07-24T16:37:35Z) - Multimodal deep representation learning for quantum cross-platform verification [60.01590250213637]
Cross-platform verification, a critical undertaking in the realm of early-stage quantum computing, endeavors to characterize the similarity of two imperfect quantum devices executing identical algorithms.
We introduce an innovative multimodal learning approach, recognizing that the formalism of data in this task embodies two distinct modalities.
We devise a multimodal neural network to independently extract knowledge from these modalities, followed by a fusion operation to create a comprehensive data representation.
arXiv Detail & Related papers (2023-11-07T04:35:03Z) - Training Multi-layer Neural Networks on Ising Machine [41.95720316032297]
This paper proposes an Ising learning algorithm to train quantized neural network (QNN)
As far as we know, this is the first algorithm to train multi-layer feedforward networks on Ising machines.
arXiv Detail & Related papers (2023-11-06T04:09:15Z) - Quantum tensor network algorithms for evaluation of spectral functions on quantum computers [0.0]
We investigate quantum algorithms derived from tensor networks to simulate the static and dynamic properties of quantum many-body systems. We demonstrate algorithms to prepare ground and excited states on a quantum computer and apply them to molecular nanomagnets (MNMs) as a paradigmatic example.
arXiv Detail & Related papers (2023-09-26T18:01:42Z) - Towards Symmetry-Aware Efficient Simulation of Quantum Systems and Beyond [12.07297035406401]
This Perspective argues that physics-informed tensor networks provide unifying strategies for scalable approaches in quantum simulation, computation, and machine learning. The same principle extends to general symmetries, inspiring equivariant neural networks in machine learning and guiding symmetry-preserving ansätze in variational quantum algorithms.
arXiv Detail & Related papers (2023-03-20T19:33:13Z) - Block belief propagation algorithm for two-dimensional tensor networks [0.0]
We propose a block belief propagation algorithm for contracting two-dimensional tensor networks and approximating the ground state of $2D$ systems.
As applications, we use our algorithm to study the $2D$ Heisenberg and transverse Ising models, and show that the accuracy of the method is on par with state-of-the-art results.
arXiv Detail & Related papers (2023-01-14T07:37:08Z) - Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Benchmarking Small-Scale Quantum Devices on Computing Graph Edit Distance [52.77024349608834]
Graph Edit Distance (GED) measures the degree of (dis)similarity between two graphs in terms of the operations needed to make them identical.
In this paper we present a comparative study of two quantum approaches to computing GED.
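Under unit edit costs, the GED defined above can be computed exactly for tiny graphs by brute force over injective node mappings (with unit costs, mapping a node is never worse than deleting and re-inserting it). A minimal sketch; the helper `graph_edit_distance` and its cost model are illustrative assumptions, not the benchmarked quantum approaches:

```python
from itertools import permutations

def graph_edit_distance(n1, edges1, n2, edges2):
    """Exact GED for tiny undirected graphs by brute force.

    Graphs are given as a node count and a list of edges over 0..n-1.
    Cost model: unit cost per node insertion/deletion and per edge
    insertion/deletion. Exponential time -- only for a handful of nodes.
    """
    if n1 > n2:  # make graph 1 the smaller one; GED is symmetric
        n1, edges1, n2, edges2 = n2, edges2, n1, edges1
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    best = float('inf')
    # map every node of graph 1 onto a distinct node of graph 2
    for m in permutations(range(n2), n1):
        mapped = {frozenset((m[a], m[b])) for a, b in e1}
        cost = (n2 - n1) + len(mapped ^ e2)  # node inserts + edge edits
        best = min(best, cost)
    return best
```

For example, a 3-node path and a 3-node cycle differ by exactly one edge insertion, so their GED under this cost model is 1.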
arXiv Detail & Related papers (2021-11-19T12:35:26Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural
Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata(WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNN defined over sequences of continuous vectors input.
arXiv Detail & Related papers (2020-10-19T15:28:00Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Efficient 2D Tensor Network Simulation of Quantum Systems [6.074275058563179]
2D tensor networks such as Projected Entangled States (PEPS) are well-suited for key classes of physical systems and quantum circuits.
We propose new algorithms and software abstractions for PEPS-based methods, accelerating the bottleneck operations of contraction and refactorization of a subnetwork.
arXiv Detail & Related papers (2020-06-26T22:36:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.