Quadratic speedup for simulating Gaussian boson sampling
- URL: http://arxiv.org/abs/2010.15595v4
- Date: Tue, 3 Aug 2021 16:50:50 GMT
- Title: Quadratic speedup for simulating Gaussian boson sampling
- Authors: Nicolás Quesada, Rachel S. Chadwick, Bryn A. Bell, Juan Miguel
Arrazola, Trevor Vincent, Haoyu Qi, Raúl García-Patrón
- Abstract summary: We introduce an algorithm for the classical simulation of Gaussian boson sampling that is quadratically faster than previously known methods.
The complexity of the algorithm is exponential in the number of photon pairs detected, not the number of photons.
We show that an improved loop hafnian algorithm can be used to compute pure-state probabilities without the need of a supercomputer.
- Score: 0.9236074230806577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an algorithm for the classical simulation of Gaussian boson
sampling that is quadratically faster than previously known methods. The
complexity of the algorithm is exponential in the number of photon pairs
detected, not the number of photons, and is directly proportional to the time
required to calculate a probability amplitude for a pure Gaussian state. The
main innovation is to use auxiliary conditioning variables to reduce the
problem of sampling to computing pure-state probability amplitudes, for which
the most computationally-expensive step is calculating a loop hafnian. We
implement and benchmark an improved loop hafnian algorithm and show that it can
be used to compute pure-state probabilities, the dominant step in the sampling
algorithm, of up to 50-photon events in a single workstation, i.e., without the
need of a supercomputer.
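The loop hafnian has a simple combinatorial definition: a sum over all perfect matchings of a complete graph in which a vertex may also be matched to itself (a "loop"). As a minimal sketch of the quantity itself, not of the paper's improved algorithm, here is a direct-from-definition implementation in pure Python. It scales exponentially and is only practical for small matrices; the function name and structure are illustrative assumptions.

```python
def loop_hafnian(A):
    """Loop hafnian of a symmetric matrix A, computed directly from the
    definition: sum over all matchings that cover every index, where an
    index may be paired with another index or matched to itself (a loop).
    Exponential time; illustrative only, not the paper's optimized method."""

    def lhaf(indices):
        # Base case: the empty matching contributes 1.
        if not indices:
            return 1
        i, rest = indices[0], indices[1:]
        # Option 1: match index i to itself (a self-loop).
        total = A[i][i] * lhaf(rest)
        # Option 2: pair index i with some later index j.
        for pos, j in enumerate(rest):
            total += A[i][j] * lhaf(rest[:pos] + rest[pos + 1:])
        return total

    return lhaf(tuple(range(len(A))))

# For [[a, b], [b, d]] the matchings are {(0,0),(1,1)} and {(0,1)},
# so the loop hafnian is a*d + b.
print(loop_hafnian([[1, 2], [2, 3]]))  # 1*3 + 2 = 5
```

The number of matchings grows roughly as (n-1)!!, which is why the paper's quadratic speedup (exponential in photon pairs rather than photons) matters so much in practice.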
Related papers
- Classical algorithm for simulating experimental Gaussian boson sampling [2.1684911254068906]
Gaussian boson sampling is a promising candidate for showing experimental quantum advantage.
Despite a high photon loss rate and the presence of noise, they are currently claimed to be hard to classically simulate with the best-known classical algorithm.
We present a classical tensor-network algorithm that simulates Gaussian boson sampling and whose complexity can be significantly reduced when the photon loss rate is high.
arXiv Detail & Related papers (2023-06-06T14:19:48Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Classical simulation of boson sampling based on graph structure [2.5496329090462626]
We present classical sampling algorithms for single-photon and Gaussian input states that take advantage of a graph structure of a linear-optical circuit.
We show that when the circuit depth is less than quadratic in the lattice spacing, efficient simulation is possible with an exponentially small error.
We implement a likelihood test against a recent Gaussian boson sampling experiment and show that the treewidth-based algorithm with a limited treewidth renders a larger likelihood than the experimental data.
arXiv Detail & Related papers (2021-10-04T17:02:35Z)
- Polynomial speedup in Torontonian calculation by a scalable recursive algorithm [0.0]
The Torontonian function is a central computational challenge in the simulation of Gaussian Boson Sampling (GBS) with threshold detection.
We propose a recursive algorithm providing a speedup in the exact calculation of the Torontonian compared to state-of-the-art algorithms.
We show that the algorithm can be scaled up to use cases where simulating threshold GBS of up to $35$-$40$ photons becomes feasible without the need for large-scale computational capacities.
arXiv Detail & Related papers (2021-09-09T19:32:03Z)
- Photonic co-processors in HPC: using LightOn OPUs for Randomized Numerical Linear Algebra [53.13961454500934]
We show that the randomization step for dimensionality reduction may itself become the computational bottleneck on traditional hardware.
We show that randomization can be significantly accelerated, at negligible precision loss, in a wide range of important RandNLA algorithms.
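The randomization step mentioned above is typically a random projection that sketches high-dimensional data down to a lower dimension while approximately preserving geometry. As a rough, generic illustration (a plain Gaussian random projection in pure Python, not LightOn's optical hardware or any specific RandNLA routine), where all names are hypothetical:

```python
import random

def random_projection(x, k, seed=0):
    """Project vector x (length d) to dimension k using a Gaussian sketch.
    Entries are scaled by 1/sqrt(k) so squared norms are preserved in
    expectation (Johnson-Lindenstrauss-style dimensionality reduction)."""
    rng = random.Random(seed)
    d = len(x)
    # k x d matrix with i.i.d. N(0, 1/k) entries.
    G = [[rng.gauss(0.0, 1.0) / k ** 0.5 for _ in range(d)] for _ in range(k)]
    # Matrix-vector product: the sketched vector of length k.
    return [sum(g * xi for g, xi in zip(row, x)) for row in G]

x = [1.0] * 256
y = random_projection(x, k=64, seed=42)
# The norm ratio ||y|| / ||x|| concentrates around 1 for moderate k.
ratio = (sum(v * v for v in y) ** 0.5) / (sum(v * v for v in x) ** 0.5)
print(round(ratio, 2))
```

Forming and applying this dense k-by-d matrix is itself O(kd) per vector, which is the traditional-hardware bottleneck the paper targets with optical co-processors.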
arXiv Detail & Related papers (2021-04-29T15:48:52Z)
- Quantum Algorithms for Prediction Based on Ridge Regression [0.7612218105739107]
We propose a quantum algorithm based on the ridge regression model, which obtains the optimal fitting parameters.
The proposed algorithm has a wide range of applications and can be used as a subroutine of other quantum algorithms.
arXiv Detail & Related papers (2021-04-27T11:03:52Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Learning with Optimized Random Features: Exponential Speedup by Quantum
Machine Learning without Sparsity and Low-Rank Assumptions [8.08640000394814]
We develop a quantum algorithm for sampling from an optimized distribution over random features.
Our algorithm achieves an exponential speedup in $D$ compared to any known classical algorithm for this sampling task.
arXiv Detail & Related papers (2020-04-22T18:00:00Z) - Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z) - On Thompson Sampling with Langevin Algorithms [106.78254564840844]
Thompson sampling for multi-armed bandit problems enjoys favorable performance in both theory and practice.
It suffers from a significant limitation computationally, arising from the need for samples from posterior distributions at every iteration.
We propose two Markov Chain Monte Carlo (MCMC) methods tailored to Thompson sampling to address this issue.
arXiv Detail & Related papers (2020-02-23T22:35:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.