Quantum-inspired classical algorithm for graph problems by Gaussian
boson sampling
- URL: http://arxiv.org/abs/2302.00536v2
- Date: Sun, 23 Apr 2023 18:22:44 GMT
- Title: Quantum-inspired classical algorithm for graph problems by Gaussian
boson sampling
- Authors: Changhun Oh, Bill Fefferman, Liang Jiang, Nicolás Quesada
- Abstract summary: We present a quantum-inspired classical algorithm that can be used for graph-theoretical problems.
The adjacency matrix of a graph to be encoded in a Gaussian boson sampler is nonnegative, so sampling from it does not necessitate quantum interference.
- Score: 2.5496329090462626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a quantum-inspired classical algorithm that can be used for
graph-theoretical problems, such as finding the densest $k$-subgraph and
finding the maximum weight clique, which are proposed as applications of a
Gaussian boson sampler. The main observation is that the adjacency matrix of a
graph to be encoded in a Gaussian boson sampler is nonnegative, which does not
necessitate quantum interference. We first show how to program a given graph
problem into our efficient classical
algorithm. We then numerically compare the performance of ideal and lossy
Gaussian boson samplers, our quantum-inspired classical sampler, and the
uniform sampler for finding the densest $k$-subgraph and finding the maximum
weight clique and show that the advantage from Gaussian boson samplers is not
significant in general. We finally discuss the potential advantage of a
Gaussian boson sampler over the proposed sampler.
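The uniform sampler that the abstract uses as a baseline can be illustrated with a short sketch. This is only the naive baseline the paper compares against, not the authors' quantum-inspired sampler; the toy graph and all names are made up for illustration:

```python
import itertools
import random

def subgraph_edges(adj, nodes):
    """Count the edges inside the subgraph induced by the given nodes."""
    return sum(adj[i][j] for i, j in itertools.combinations(nodes, 2))

def densest_k_subgraph_uniform(adj, k, n_samples=5000, seed=0):
    """Draw k-subsets uniformly at random and keep the densest one found."""
    rng = random.Random(seed)
    n = len(adj)
    best_nodes, best_score = None, -1
    for _ in range(n_samples):
        nodes = rng.sample(range(n), k)
        score = subgraph_edges(adj, nodes)
        if score > best_score:
            best_nodes, best_score = sorted(nodes), score
    return best_nodes, best_score

# Toy graph: a planted 4-clique on nodes {0,1,2,3} plus one stray edge.
n = 10
adj = [[0] * n for _ in range(n)]
for i, j in itertools.combinations(range(4), 2):
    adj[i][j] = adj[j][i] = 1
adj[5][6] = adj[6][5] = 1

nodes, edges = densest_k_subgraph_uniform(adj, k=4)
print(nodes, edges)  # the planted clique is found: [0, 1, 2, 3] with 6 edges
```

The paper's classical sampler biases this search using the nonnegative adjacency matrix rather than drawing subsets uniformly, which is what the numerical comparison above measures.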
Related papers
- Gaussian Boson Sampling to Accelerate NP-Complete Vertex-Minor Graph
Classification [0.9935277311162707]
We propose a hybrid quantum-classical algorithm for the NP-complete problem of determining whether two graphs are vertex-minors of one another.
We find a graph embedding that allows trading between the one-shot classification accuracy and the amount of input squeezing.
We introduce a new classical algorithm based on graph spectra, which we show outperforms various well-known graph-similarity algorithms.
arXiv Detail & Related papers (2024-02-05T21:24:11Z) - Some Constructions of Private, Efficient, and Optimal $K$-Norm and Elliptic Gaussian Noise [54.34628844260993]
Differentially private computation often begins with a bound on some $d$-dimensional statistic's sensitivity.
For pure differential privacy, the $K$-norm mechanism can improve on this approach using a norm tailored to the statistic's sensitivity space.
This paper solves both problems for the simple statistics of sum, count, and vote.
arXiv Detail & Related papers (2023-09-27T17:09:36Z) - Classical algorithm for simulating experimental Gaussian boson sampling [2.1684911254068906]
Gaussian boson sampling is a promising candidate for showing experimental quantum advantage.
Despite a high photon loss rate and the presence of noise, current experiments are claimed to be hard to classically simulate with the best-known classical algorithm.
We present a classical tensor-network algorithm that simulates Gaussian boson sampling and whose complexity can be significantly reduced when the photon loss rate is high.
arXiv Detail & Related papers (2023-06-06T14:19:48Z) - Solving Graph Problems Using Gaussian Boson Sampling [22.516585968074146]
We use a noisy intermediate-scale quantum computer to solve graph problems.
We experimentally observe the presence of GBS enhancement with large photon-click numbers and an enhancement under certain noise conditions.
Our work is a step toward testing real-world problems using the existing intermediate-scale quantum computers.
arXiv Detail & Related papers (2023-02-02T08:25:47Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
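As background, the core update that LMC-TS builds on is the unadjusted Langevin step: drift along the log-posterior gradient plus Gaussian noise. A minimal sketch on a standard-normal target follows; this is not the paper's bandit setting, and the step size and burn-in are arbitrary illustrative choices:

```python
import math
import random

def langevin_step(theta, grad_log_post, step, rng):
    """One unadjusted Langevin update: gradient drift plus Gaussian noise."""
    noise = rng.gauss(0.0, 1.0)
    return theta + 0.5 * step * grad_log_post(theta) + math.sqrt(step) * noise

# Target: standard normal, so grad log p(x) = -x.
rng = random.Random(0)
theta, samples = 0.0, []
for i in range(20000):
    theta = langevin_step(theta, lambda x: -x, step=0.1, rng=rng)
    if i > 2000:  # discard burn-in before collecting samples
        samples.append(theta)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # approximately 0 and 1, up to discretization bias
```

Thompson sampling then uses such posterior samples to pick actions; the paper's contribution is the regret analysis of that combination.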
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - Experimentally finding dense subgraphs using a time-bin encoded Gaussian
boson sampling device [0.0]
We use a time-bin encoded interferometer to implement GBS experimentally and extract samples to enhance the search for dense subgraphs in a graph.
Our results indicate an improvement over classical methods for subgraphs of sizes three and four in a graph containing ten nodes.
We numerically explore the role of imperfections in the optical circuit and their effect on the performance of the algorithm.
arXiv Detail & Related papers (2022-04-11T16:56:30Z) - Certification of Gaussian Boson Sampling via graph theory [4.063872661554895]
We exploit a connection between photon counting of a genuine Gaussian Boson Sampling device and the number of perfect matchings in a graph.
Within this framework, two approaches that exploit the distributions of graph feature vectors and graph kernels are presented.
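The matching count underlying this connection is the hafnian of the graph's adjacency matrix. A brute-force sketch (exponential time, fine only for toy graphs; any practical use would need a dedicated hafnian library) is:

```python
import itertools

def perfect_matchings(adj):
    """Count perfect matchings by brute force: the hafnian of a 0/1 adjacency matrix."""
    n = len(adj)
    if n % 2:
        return 0  # odd number of nodes: no perfect matching
    def rec(nodes):
        if not nodes:
            return 1
        first, rest = nodes[0], nodes[1:]
        total = 0
        # Match the first node with every adjacent partner, then recurse.
        for idx, partner in enumerate(rest):
            if adj[first][partner]:
                total += rec(rest[:idx] + rest[idx + 1:])
        return total
    return rec(tuple(range(n)))

# K4 has 3 perfect matchings; a 4-cycle has 2.
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
c4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(perfect_matchings(k4), perfect_matchings(c4))  # 3 2
```

A genuine GBS device samples outcomes with probabilities proportional to (squared) hafnians of submatrices, which is what the certification tests probe.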
arXiv Detail & Related papers (2022-02-15T20:22:28Z) - Spectral clustering under degree heterogeneity: a case for the random
walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
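The degree-corrected embedding idea can be sketched directly from the random-walk matrix P = D^{-1}A (the random-walk Laplacian is L_rw = I - P, so the largest eigenvalues of P correspond to the smallest of L_rw). This is a simplified illustration on a graph with no isolated nodes, not the paper's exact procedure:

```python
import numpy as np

def random_walk_embedding(adj, k):
    """Embed nodes with the k leading eigenvectors of P = D^{-1} A."""
    deg = adj.sum(axis=1)            # assumes every node has degree >= 1
    P = adj / deg[:, None]           # random-walk transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)   # largest eigenvalues of P first
    return vecs[:, order[:k]].real

# Toy graph: two disjoint triangles.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1

X = random_walk_embedding(A, 2)
# The embedding is constant within each triangle, so the two
# communities collapse to two distinct points, as the summary describes.
```

In the degree-corrected block model of the paper, the same concentration around K distinct points holds despite heterogeneous degrees.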
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Nearly Linear Row Sampling Algorithm for Quantile Regression [54.75919082407094]
We give a row sampling algorithm for the quantile loss function with sample complexity nearly linear in the dimensionality of the data.
Based upon our row sampling algorithm, we give the fastest known algorithm for quantile regression and a graph sparsification algorithm for balanced directed graphs.
arXiv Detail & Related papers (2020-06-15T13:40:07Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT).
arXiv Detail & Related papers (2020-06-10T12:48:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.