Jet: Fast quantum circuit simulations with parallel task-based
tensor-network contraction
- URL: http://arxiv.org/abs/2107.09793v3
- Date: Sat, 30 Apr 2022 22:55:20 GMT
- Title: Jet: Fast quantum circuit simulations with parallel task-based
tensor-network contraction
- Authors: Trevor Vincent, Lee J. O'Riordan, Mikhail Andrenkov, Jack Brown,
Nathan Killoran, Haoyu Qi, and Ish Dhand
- Abstract summary: We introduce a new open-source software library, Jet, which uses task-based parallelism to speed up classical tensor-network simulations of quantum circuits.
These speed-ups result from i) the increased parallelism introduced by mapping the tensor-network simulation to a task-based framework, and ii) a novel method of reusing shared work between tensor-network contraction tasks.
- Score: 0.8431877864777442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new open-source software library Jet, which uses task-based
parallelism to obtain speed-ups in classical tensor-network simulations of
quantum circuits. These speed-ups result from i) the increased parallelism
introduced by mapping the tensor-network simulation to a task-based framework,
ii) a novel method of reusing shared work between tensor-network contraction
tasks, and iii) the concurrent contraction of tensor networks on all available
hardware. We demonstrate the advantages of our method by benchmarking our code
on several Sycamore-53 and Gaussian boson sampling (GBS) supremacy circuits
against other simulators. We also provide and compare theoretical performance
estimates for tensor-network simulations of Sycamore-53 and GBS supremacy
circuits for the first time.
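To make mechanisms i) and ii) concrete, below is a minimal Python/NumPy sketch of the idea, not Jet's actual API: contractions are submitted as tasks to a thread pool so independent contractions run concurrently, and a cache keyed by a caller-chosen intermediate name (the `key` argument, a simplification introduced here) returns the same task when two networks share work.

```python
# Minimal sketch of task-based contraction with shared-work reuse.
# Illustrative only; this is not Jet's API, and the cache-key scheme is a
# simplification assumed for the example.
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def contract_pair(a, b, axes):
    # One pairwise contraction; a full simulation is a DAG of such tasks.
    return np.tensordot(a, b, axes=axes)

class TaskContractor:
    def __init__(self):
        self.cache = {}                  # intermediate name -> Future
        self.pool = ThreadPoolExecutor()

    def submit(self, key, a, b, axes):
        # (ii) If another tensor network already scheduled this intermediate,
        # hand back the same Future instead of recomputing it.
        if key not in self.cache:
            # (i) Independent tasks execute concurrently on the pool.
            self.cache[key] = self.pool.submit(contract_pair, a, b, axes)
        return self.cache[key]

A, B, C = np.random.rand(2, 4), np.random.rand(4, 3), np.random.rand(3, 2)
ctr = TaskContractor()
ab1 = ctr.submit("A@B", A, B, axes=1)    # scheduled once
ab2 = ctr.submit("A@B", A, B, axes=1)    # cache hit: same Future, no rework
assert ab1 is ab2
result = np.tensordot(ab1.result(), C, axes=1)
```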
Related papers
- Distributed Tensor Network Library for Quantum Computing Emulation [0.0]
HPC tensor network packages tackle this issue with a procedure called circuit slicing.
We present a novel alternative approach, where individual tensors are both broadcast and scattered.
We showcase its capabilities on ARCHER2 by emulating two well-known algorithms.
arXiv Detail & Related papers (2025-05-09T15:17:42Z)
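As context for the entry above, here is a minimal sketch of the standard circuit-slicing procedure it contrasts with (not the paper's broadcast/scatter scheme): fixing a shared index turns one large contraction into independent smaller contractions, one per index value, whose results are summed.

```python
# Sketch of index slicing on a toy two-tensor network. Slicing the shared
# index of size 6 yields six independent contractions whose results are
# summed; each slice could run on a separate node.
import numpy as np

A = np.random.rand(8, 6)
B = np.random.rand(6, 8)

full = A @ B                               # unsliced contraction
sliced = sum(np.outer(A[:, k], B[k, :])    # one slice per index value
             for k in range(6))
assert np.allclose(full, sliced)
```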
- Dissipation-driven quantum generative adversarial networks [11.833077116494929]
We introduce a novel dissipation-driven quantum generative adversarial network (DQGAN) architecture specifically tailored for generating classical data.
The classical data is encoded into the input qubits of the input layer via strong tailored dissipation processes.
We extract both the generated data and the classification results by measuring the observables of the steady state of the output qubits.
arXiv Detail & Related papers (2024-08-28T07:41:58Z)
- State of practice: evaluating GPU performance of state vector and tensor network methods [2.7930955543692817]
This article investigates the limits of current state-of-the-art simulation techniques on a test bench made of eight widely used quantum subroutines.
We highlight how to select the best simulation strategy, obtaining a speedup of up to an order of magnitude.
arXiv Detail & Related papers (2024-01-11T09:22:21Z)
- Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the difficulty of credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
arXiv Detail & Related papers (2022-04-18T21:45:13Z)
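A worked example of why the ordering problem the entry above tackles matters, using a three-matrix chain; this brute-force cost comparison is only a baseline illustration, not the paper's RL/GNN method.

```python
# Chain A(10x1000) @ B(1000x10) @ C(10x1000); multiplying an (m x k) matrix
# by a (k x n) matrix costs roughly m*k*n multiplications.
flops_left = 10 * 1000 * 10 + 10 * 10 * 1000        # (A @ B) @ C ->    200,000
flops_right = 1000 * 10 * 1000 + 10 * 1000 * 1000   # A @ (B @ C) -> 20,000,000
print(flops_right // flops_left)                    # bad order costs 100x more
```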
- Simulation Paths for Quantum Circuit Simulation with Decision Diagrams [72.03286471602073]
We study the importance of the path that is chosen when simulating quantum circuits using decision diagrams.
We propose an open-source framework that allows dedicated simulation paths to be investigated.
arXiv Detail & Related papers (2022-03-01T19:00:11Z)
- Win the Lottery Ticket via Fourier Analysis: Frequencies Guided Network Pruning [50.232218214751455]
Optimal network pruning is a non-trivial task that is mathematically an NP-hard problem.
In this paper, we investigate the Magnitude-Based Pruning (MBP) scheme and analyze it from a novel perspective.
We also propose a novel two-stage pruning approach: the first stage obtains the topological structure of the pruned network, and the second retrains the pruned network to recover its capacity.
arXiv Detail & Related papers (2022-01-30T03:42:36Z)
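A minimal sketch of the magnitude-based pruning step the entry above analyzes; the retraining stage is omitted, and the helper name `magnitude_prune` is introduced here for illustration.

```python
# Magnitude-based pruning: zero out the fraction `sparsity` of weights with
# the smallest absolute value; the mask fixes the topology for retraining.
import numpy as np

def magnitude_prune(weights, sparsity):
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k > 0 else -np.inf
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W = np.random.randn(4, 4)
W_pruned, mask = magnitude_prune(W, sparsity=0.5)   # half the weights zeroed
```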
- Parallel Simulation of Quantum Networks with Distributed Quantum State Management [56.24769206561207]
We identify requirements for parallel simulation of quantum networks and develop the first parallel discrete event quantum network simulator.
Our contributions include the design and development of a quantum state manager that maintains shared quantum information distributed across multiple processes.
We release the parallel SeQUeNCe simulator as an open-source tool alongside the existing sequential version.
arXiv Detail & Related papers (2021-11-06T16:51:17Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- Simple heuristics for efficient parallel tensor contraction and quantum circuit simulation [1.4416132811087747]
We propose a parallel algorithm for the contraction of tensor networks using probabilistic models.
We apply the resulting algorithm to the simulation of random quantum circuits.
arXiv Detail & Related papers (2020-04-22T23:00:42Z)
- Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using Jacobi or Gauss-Seidel fixed-point methods, as well as hybrids of the two.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
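A minimal sketch of the Jacobi variant from the entry above on a toy network (the shapes and tanh layer are assumptions for illustration): all layer activations update simultaneously from the previous sweep's values, each sweep is fully parallelizable, and after at most L sweeps the states match the sequential forward pass exactly.

```python
# Jacobi fixed-point evaluation of a 5-layer toy network; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 4)) for _ in range(5)]
x = rng.standard_normal(4)

def layer(i, h):
    return np.tanh(Ws[i] @ h)

# Sequential reference: evaluate layers one after another.
h = x
for i in range(5):
    h = layer(i, h)

# Jacobi iteration: every layer updates at once from last sweep's states,
# so the 5 updates in each sweep can run in parallel.
states = [np.zeros(4) for _ in range(6)]
states[0] = x
for _ in range(5):                      # at most L sweeps reach the fixed point
    states[1:] = [layer(i, states[i]) for i in range(5)]
assert np.allclose(states[-1], h)       # identical to the sequential result
```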
- Hyper-optimized tensor network contraction [0.0]
We implement new randomized protocols that find very high quality contraction paths for arbitrary and large tensor networks.
We test our methods on a variety of benchmarks, including the random quantum circuit instances recently implemented on Google quantum chips.
The increased quality of the contraction schemes found has significant practical implications for the simulation of quantum many-body systems.
arXiv Detail & Related papers (2020-02-05T19:00:00Z)
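As a hedged illustration of randomized path search in the spirit of the entry above (not necessarily the paper's own tooling), the third-party opt_einsum package exposes a "random-greedy" optimizer; the equation and tensor shapes below are arbitrary examples.

```python
# Finding a contraction path with randomized greedy search via the
# third-party opt_einsum package; the network below is an arbitrary example.
import numpy as np
import opt_einsum as oe

tensors = [np.random.rand(2, 2, 2) for _ in range(4)]
eq = "abc,cde,efg,gha->bdfh"             # small ring-shaped tensor network

path, info = oe.contract_path(eq, *tensors, optimize="random-greedy")
print(info.opt_cost)                      # estimated flop count of found path
result = oe.contract(eq, *tensors, optimize=path)
```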
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.