Gaussian-boson-sampling-enhanced dense subgraph finding shows limited
advantage over efficient classical algorithms
- URL: http://arxiv.org/abs/2301.13217v1
- Date: Mon, 30 Jan 2023 19:00:03 GMT
- Title: Gaussian-boson-sampling-enhanced dense subgraph finding shows limited
advantage over efficient classical algorithms
- Authors: Naomi R. Solomons, Oliver F. Thomas, Dara P. S. McCutcheon
- Abstract summary: We investigate the effects of sources of error including loss and spectral impurity on GBS applied to dense subgraph finding algorithms.
We find that the effectiveness of these algorithms is remarkably robust to errors, to such an extent that there exist efficient classical algorithms that can simulate the underlying GBS.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent claims of achieving exponential quantum advantage have attracted
attention to Gaussian boson sampling (GBS), a potential application of which is
dense subgraph finding. We investigate the effects of sources of error
including loss and spectral impurity on GBS applied to dense subgraph finding
algorithms. We find that the effectiveness of these algorithms is remarkably
robust to errors, to such an extent that there exist efficient classical
algorithms that can simulate the underlying GBS. These results imply that the
speedup of GBS-based algorithms for the dense subgraph problem over classical
approaches is at most polynomial, though this could be achieved on a quantum
device with dramatically less stringent requirements on loss and photon purity
than general GBS.
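The abstract's conclusion rests on the existence of efficient classical algorithms for dense subgraph search. As an illustration only (the paper does not specify this particular method), one standard classical baseline is Charikar's greedy peeling heuristic, which 2-approximates the densest subgraph (maximum average degree) in near-linear time. The function name below is our own:

```python
def densest_subgraph_peeling(edges):
    """Greedy peeling: repeatedly remove a minimum-degree vertex and
    return the intermediate vertex set with the highest edge density
    (edges / vertices). `edges` is a list of (u, v) pairs of a simple graph."""
    # Build adjacency sets.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    vertices = set(adj)
    m = len(edges)
    best_density, best_set = 0.0, set(vertices)
    while vertices:
        density = m / len(vertices)
        if density > best_density:
            best_density, best_set = density, set(vertices)
        # Peel a minimum-degree vertex and update edge count.
        v_min = min(vertices, key=lambda v: len(adj[v]))
        for u in adj[v_min]:
            adj[u].discard(v_min)
        m -= len(adj[v_min])
        vertices.remove(v_min)
        del adj[v_min]
    return best_set, best_density
```

For example, on a 4-clique with one pendant vertex attached, peeling recovers the clique as the densest subgraph. A polynomial-time heuristic of this kind is the sort of classical competitor against which a GBS speedup would be at most polynomial.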
Related papers
- Classical modelling of a lossy Gaussian bosonic sampler [0.0]
We propose an algorithm for approximate classical simulation of a lossy GBS instance.
The complexity of the algorithm is polynomial in the squeezing and the number of modes, given that the number of terms is fixed.
In recent experiments that claim to have demonstrated quantum advantage, these conditions are satisfied.
arXiv Detail & Related papers (2024-04-01T09:19:27Z)
- Gaussian Boson Sampling to Accelerate NP-Complete Vertex-Minor Graph Classification [0.9935277311162707]
We propose a hybrid quantum-classical algorithm for the NP-complete problem of determining whether one graph is a vertex minor of another.
We find a graph embedding that allows trading between the one-shot classification accuracy and the amount of input squeezing.
We introduce a new classical algorithm based on graph spectra, which we show outperforms various well-known graph-similarity algorithms.
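The entry above mentions a classical baseline based on graph spectra but does not specify its construction. As an illustrative sketch only (not the paper's method), a simple spectral similarity score compares the sorted eigenvalues of the two adjacency matrices; the function name is our own:

```python
import numpy as np

def spectral_distance(A1, A2):
    """L2 distance between sorted adjacency spectra, zero-padded to equal
    length. Isomorphic graphs always score 0 (the converse need not hold)."""
    e1 = np.sort(np.linalg.eigvalsh(A1))
    e2 = np.sort(np.linalg.eigvalsh(A2))
    n = max(len(e1), len(e2))
    e1 = np.pad(e1, (n - len(e1), 0))
    e2 = np.pad(e2, (n - len(e2), 0))
    return float(np.linalg.norm(e1 - e2))
```

Because the spectrum is an isomorphism invariant, such a score is a cheap classical filter for graph-similarity tasks, though cospectral non-isomorphic graphs can fool it.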
arXiv Detail & Related papers (2024-02-05T21:24:11Z)
- Quantum Bayesian Optimization [64.58749619145908]
We introduce the quantum-Gaussian process-upper confidence bound (Q-GP-UCB) algorithm.
It is the first BO algorithm able to achieve a regret upper bound of $O(\mathrm{polylog}\, T)$, which is significantly smaller than the regret lower bound of $\Omega(\sqrt{T})$ in the classical setting.
Thanks to our novel analysis of the confidence ellipsoid, our Q-GP-UCB with the linear kernel achieves a smaller regret than the quantum linear UCB algorithm.
arXiv Detail & Related papers (2023-10-09T03:10:42Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Solving Graph Problems Using Gaussian Boson Sampling [22.516585968074146]
We use a noisy intermediate-scale quantum computer to solve graph problems.
We experimentally observe the presence of GBS enhancement with large photon-click numbers, and an enhancement in the presence of certain noise.
Our work is a step toward testing real-world problems using the existing intermediate-scale quantum computers.
arXiv Detail & Related papers (2023-02-02T08:25:47Z)
- Experimentally finding dense subgraphs using a time-bin encoded Gaussian boson sampling device [0.0]
We use a time-bin encoded interferometer to implement GBS experimentally and extract samples to enhance the search for dense subgraphs in a graph.
Our results indicate an improvement over classical methods for subgraphs of sizes three and four in a graph containing ten nodes.
We numerically explore the role of imperfections in the optical circuit and their effect on the performance of the algorithm.
arXiv Detail & Related papers (2022-04-11T16:56:30Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Gaussian boson sampling with partial distinguishability [0.0]
We investigate GBS with partial distinguishability using an approach based on virtual modes and indistinguishability efficiency.
We show how the boundary of quantum supremacy in GBS can be pushed further by partial distinguishability.
arXiv Detail & Related papers (2021-05-20T08:17:51Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
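The learned (unfolded) variant above builds on the classical iterative shrinkage-thresholding algorithm (ISTA) for sparse recovery. As a minimal sketch of that classical starting point (not the paper's block-sparse, learned version), ISTA solves the LASSO problem $\min_x \tfrac{1}{2}\|Ax - y\|^2 + \lambda\|x\|_1$ by alternating a gradient step with soft thresholding; the function names are our own:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iters=500):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)           # gradient of the smooth data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Learned variants replace the fixed step size and threshold (here `1/L` and `lam/L`) with parameters trained per layer of the unrolled network, which is what allows the regularization choice to be learned from data.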
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.