Polynomial speedup in Torontonian calculation by a scalable recursive
algorithm
- URL: http://arxiv.org/abs/2109.04528v3
- Date: Tue, 15 Nov 2022 11:47:56 GMT
- Title: Polynomial speedup in Torontonian calculation by a scalable recursive
algorithm
- Authors: Ágoston Kaposi, Zoltán Kolarovszki, Tamás Kozsik, Zoltán Zimborás, Péter Rakyta
- Abstract summary: The Torontonian function is a central computational challenge in the simulation of Gaussian Boson Sampling (GBS) with threshold detection.
We propose a recursive algorithm providing a speedup in the exact calculation of the Torontonian compared to state-of-the-art algorithms.
We show that the algorithm can be scaled up to use cases where the simulation of threshold GBS with up to $35-40$ photon clicks becomes feasible without the need for large-scale computational capacities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Evaluating the Torontonian function is a central computational challenge in
the simulation of Gaussian Boson Sampling (GBS) with threshold detection. In
this work, we propose a recursive algorithm providing a polynomial speedup in
the exact calculation of the Torontonian compared to state-of-the-art
algorithms. According to our numerical analysis, the complexity of the algorithm
is proportional to $N^{1.0691}2^{N/2}$, with $N$ being the size of the problem.
We also show that the recursive algorithm can be scaled up to HPC use cases,
making the simulation of threshold GBS with up to $35-40$ photon clicks feasible
without the need for large-scale computational capacities.
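The abstract references the Torontonian without defining it, so the sketch below is a minimal brute-force NumPy reference implementation of the commonly used definition (a signed sum of inverse square-root determinants over all mode subsets). The block ordering, the sign convention, and the thermal-state consistency check in `__main__` are assumptions made for illustration and are not taken from the paper; the paper's recursive algorithm keeps the same $2^{N/2}$ subset count but reduces the polynomial prefactor, which this naive version pays in full for every subset.

```python
import itertools

import numpy as np


def torontonian_bruteforce(O: np.ndarray) -> float:
    """Brute-force Torontonian of a 2n x 2n matrix O.

    Assumes the (x_1..x_n, p_1..p_n) block ordering, so mode i corresponds to
    rows/columns i and i + n.  Sums (-1)^(n - |S|) / sqrt(det(I - O_S)) over
    all 2^n mode subsets S, which matches the 2^{N/2} factor quoted in the
    abstract (N being the matrix size); the per-subset determinant cost is the
    polynomial prefactor that faster algorithms target.
    """
    n = O.shape[0] // 2
    total = (-1.0) ** n  # empty subset: the determinant of an empty matrix is 1
    for r in range(1, n + 1):
        for subset in itertools.combinations(range(n), r):
            idx = list(subset) + [i + n for i in subset]  # both quadrature blocks
            sub = O[np.ix_(idx, idx)]
            det = np.linalg.det(np.eye(2 * r) - sub)
            total += (-1.0) ** (n - r) / np.sqrt(det)
    return total


if __name__ == "__main__":
    # Consistency check under the assumed conventions: for a product of
    # single-mode thermal states with mean photon numbers nbar_i, taking
    # Sigma = diag(nbar + 1) on both quadrature blocks and O = I - Sigma^{-1},
    # the sum above factorizes to prod(nbar_i).
    nbar = np.array([0.5, 1.2, 2.0])
    sigma = np.diag(np.concatenate([nbar + 1.0, nbar + 1.0]))
    O = np.eye(2 * len(nbar)) - np.linalg.inv(sigma)
    print(torontonian_bruteforce(O), np.prod(nbar))  # both ~1.2
```

Even this toy version makes the scaling visible: the number of determinants doubles with every added mode, so the exponential factor is untouched by any exact method, and the practical gains reported in the paper come from shrinking the polynomial work done per subset.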
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed oracle avoids the need for computationally expensive spectral methods, using a simple gradient descent step and outlier handling.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
A recently proposed quantum algorithm (arXiv:2206.14999) is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z) - Making RL with Preference-based Feedback Efficient via Randomization [11.019088464664696]
Reinforcement Learning algorithms that learn from human feedback need to be efficient in terms of statistical complexity, computational complexity, and query complexity.
We present an algorithm that is sample efficient (i.e. has near-optimal worst-case regret bounds) and has polynomial running time (i.e. its computational complexity is polynomial with respect to relevant parameters).
To extend the results to more general nonlinear function approximation, we design a model-based randomized algorithm inspired by the idea of Thompson sampling.
arXiv Detail & Related papers (2023-10-23T04:19:35Z) - Quantum-Based Feature Selection for Multi-classification Problem in
Complex Systems with Edge Computing [15.894122816099133]
A quantum-based feature selection algorithm for the multi-classification problem, namely, QReliefF, is proposed.
Our algorithm is superior in finding the nearest neighbor, reducing the complexity from $O(M)$ to $O(\sqrt{M})$.
arXiv Detail & Related papers (2023-10-01T03:57:13Z) - Information-Computation Tradeoffs for Learning Margin Halfspaces with
Random Classification Noise [50.64137465792738]
We study the problem of PAC learning $\gamma$-margin halfspaces with Random Classification Noise.
We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms.
arXiv Detail & Related papers (2023-06-28T16:33:39Z) - Polynomial T-depth Quantum Solvability of Noisy Binary Linear Problem:
From Quantum-Sample Preparation to Main Computation [0.0]
We present a complete analysis of the quantum solvability of the noisy binary linear problem (NBLP)
We show that the cost of solving the NBLP can be polynomial in the problem size, at the expense of an exponentially increasing number of logical qubits.
arXiv Detail & Related papers (2021-09-23T07:46:20Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of states.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Quadratic speedup for simulating Gaussian boson sampling [0.9236074230806577]
We introduce an algorithm for the classical simulation of Gaussian boson sampling that is quadratically faster than previously known methods.
The complexity of the algorithm is exponential in the number of photon pairs detected, not the number of photons.
We show that an improved loop hafnian algorithm can be used to compute pure-state probabilities without the need for a supercomputer.
arXiv Detail & Related papers (2020-10-29T13:53:30Z) - On Thompson Sampling with Langevin Algorithms [106.78254564840844]
Thompson sampling for multi-armed bandit problems enjoys favorable performance in both theory and practice.
It suffers from a significant computational limitation, arising from the need to draw samples from posterior distributions at every iteration.
We propose two Markov Chain Monte Carlo (MCMC) methods tailored to Thompson sampling to address this issue.
arXiv Detail & Related papers (2020-02-23T22:35:29Z) - Quantum Relief Algorithm [12.599184944451833]
The Relief algorithm is a feature selection algorithm used in binary classification, proposed by Kira and Rendell.
A quantum feature selection algorithm based on the Relief algorithm, called the quantum Relief algorithm, is proposed.
arXiv Detail & Related papers (2020-02-01T10:18:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.