Faster classical Boson Sampling
- URL: http://arxiv.org/abs/2005.04214v2
- Date: Tue, 26 May 2020 13:03:38 GMT
- Title: Faster classical Boson Sampling
- Authors: Peter Clifford and Rapha\"el Clifford
- Abstract summary: We present an algorithm for Boson Sampling whose average-case time complexity is much faster when $m$ is proportional to $n$.
This result further increases the problem size needed to establish quantum computational supremacy via Boson Sampling.
- Score: 0.15229257192293197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since its introduction Boson Sampling has been the subject of intense study
in the world of quantum computing. The task is to sample independently from the
set of all $n \times n$ submatrices built from possibly repeated rows of a
larger $m \times n$ complex matrix according to a probability distribution
related to the permanents of the submatrices. Experimental systems exploiting
quantum photonic effects can in principle perform the task at great speed. In
the framework of classical computing, Aaronson and Arkhipov (2011) showed that
the exact Boson Sampling problem cannot be solved in polynomial time unless the
polynomial hierarchy collapses to the third level. Indeed for a number of years
the fastest known exact classical algorithm ran in $O({m+n-1 \choose n} n 2^n
)$ time per sample, emphasising the potential speed advantage of quantum
computation. The advantage was reduced by Clifford and Clifford (2018) who gave
a significantly faster classical solution taking $O(n 2^n +
\operatorname{poly}(m,n))$ time and linear space, matching the complexity of
computing the permanent of a single matrix when $m$ is polynomial in $n$.
We continue by presenting an algorithm for Boson Sampling whose average-case
time complexity is much faster when $m$ is proportional to $n$. In particular,
when $m = n$ our algorithm runs in approximately $O(n\cdot1.69^n)$ time on
average. This result further increases the problem size needed to establish
quantum computational supremacy via Boson Sampling.
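The $O(n 2^n)$ baseline quoted above is the cost of evaluating a single $n \times n$ permanent via Ryser's formula. As a point of reference (this is a minimal sketch of the permanent computation only, not the paper's sampling algorithm):

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of an n x n matrix via Ryser's formula.

    perm(A) = (-1)^n * sum over non-empty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} a[i][j],
    giving O(n * 2^n) arithmetic operations.
    """
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            # Product over rows of the row-sum restricted to columns in S.
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For example, `permanent_ryser([[1, 2], [3, 4]])` returns `1*4 + 2*3 = 10`. The paper's contribution is a sampling procedure whose average-case cost beats repeating such permanent evaluations naively.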
Related papers
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z) - Local Quantum Search Algorithm for Random $k$-SAT with $\Omega(n^{1+\epsilon})$ Clauses [0.0]
We prove that max-k-SSAT is computationally hard on average when $m=\Omega(n^{2+\delta+\epsilon})$, based on average-case complexity theory.
arXiv Detail & Related papers (2024-03-05T15:03:47Z) - Towards large-scale quantum optimization solvers with few qubits [59.63282173947468]
We introduce a variational quantum solver for optimizations over $m=\mathcal{O}(n^k)$ binary variables using only $n$ qubits, with tunable $k>1$.
We analytically prove that the specific qubit-efficient encoding brings in a super-polynomial mitigation of barren plateaus as a built-in feature.
arXiv Detail & Related papers (2024-01-17T18:59:38Z) - Quantum Algorithms for the Pathwise Lasso [1.8058773918890538]
We present a novel quantum high-dimensional linear regression algorithm based on the classical LARS (Least Angle Regression) pathwise algorithm.
Our quantum algorithm provides the full regularisation path as the penalty term varies, and is quadratically faster per iteration under specific conditions.
We prove, via an approximate version of the KKT conditions and a duality gap, that the LARS algorithm is robust to errors.
arXiv Detail & Related papers (2023-12-21T18:57:54Z) - Information-Computation Tradeoffs for Learning Margin Halfspaces with
Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z) - Quantum Speedups for Bayesian Network Structure Learning [0.0]
For networks with $n$ nodes, the fastest known algorithms run in time $O(c^n)$ in the worst case, with no improvement in two decades.
Inspired by recent advances in quantum computing, we ask whether BNSL admits a quantum speedup.
We give two algorithms achieving $c \leq 1.817$ and $c \leq 1.982$.
arXiv Detail & Related papers (2023-05-31T09:15:28Z) - On the average-case complexity of learning output distributions of
quantum circuits [55.37943886895049]
We show that learning the output distributions of brickwork random quantum circuits is average-case hard in the statistical query model.
This learning model is widely used as an abstract computational model for most generic learning algorithms.
arXiv Detail & Related papers (2023-05-09T20:53:27Z) - Quantum Clustering with k-Means: a Hybrid Approach [117.4705494502186]
We design, implement, and evaluate three hybrid quantum k-Means algorithms.
We exploit quantum phenomena to speed up the computation of distances.
We show that our hybrid quantum k-Means algorithms can be more efficient than the classical version.
arXiv Detail & Related papers (2022-12-13T16:04:16Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Quantum Algorithms for Sampling Log-Concave Distributions and Estimating
Normalizing Constants [8.453228628258778]
We develop quantum algorithms for sampling logconcave distributions and for estimating their normalizing constants.
We exploit quantum analogs of the Monte Carlo method and quantum walks.
We also prove a $1/\epsilon^{1-o(1)}$ quantum lower bound for estimating normalizing constants.
arXiv Detail & Related papers (2022-10-12T19:10:43Z) - Enhancing the Quantum Linear Systems Algorithm using Richardson
Extrapolation [0.8057006406834467]
We present a quantum algorithm to solve systems of linear equations of the form $A\mathbf{x}=\mathbf{b}$.
The algorithm achieves an exponential improvement with respect to $N$ over classical methods.
arXiv Detail & Related papers (2020-09-09T18:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.