Polynomial scaling of QAOA for ground-state preparation of the
fully-connected p-spin ferromagnet
- URL: http://arxiv.org/abs/2003.07419v2
- Date: Wed, 5 Aug 2020 13:31:56 GMT
- Title: Polynomial scaling of QAOA for ground-state preparation of the
fully-connected p-spin ferromagnet
- Authors: Matteo M. Wauters, Glen Bigan Mbeng, Giuseppe E. Santoro
- Abstract summary: We show that the quantum approximate optimization algorithm (QAOA) can construct, with polynomially scaling resources, the ground state of the fully-connected p-spin Ising ferromagnet.
We find that an appropriate QAOA parameter initialization is necessary to achieve a good performance of the algorithm when the number of variational parameters $2{\rm P}$ is much smaller than the system size ${\rm N}$.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that the quantum approximate optimization algorithm (QAOA) can
construct with polynomially scaling resources the ground state of the
fully-connected p-spin Ising ferromagnet, a problem that notoriously poses
severe difficulties to a Quantum Annealing (QA) approach, due to the
exponentially small gaps encountered at the first-order phase transition for ${\rm
p} \ge 3$. For a target ground state at arbitrary transverse field, we find
that an appropriate QAOA parameter initialization is necessary to achieve a
good performance of the algorithm when the number of variational parameters
$2{\rm P}$ is much smaller than the system size ${\rm N}$, because of the large
number of sub-optimal local minima. Instead, when ${\rm P}$ exceeds a critical
value ${\rm P}^*_{\rm N} \propto {\rm N}$, the structure of the parameter space
simplifies, as all minima become degenerate. This allows one to achieve the ground
state with perfect fidelity with a number of parameters scaling extensively
with ${\rm N}$, and with resources scaling polynomially with ${\rm N}$.
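For concreteness, the optimization setting can be sketched as follows, using the standard conventions for the fully-connected p-spin ferromagnet and the QAOA ansatz (the paper's exact normalization, sign conventions, and target-state definition may differ):
$$H_z = -{\rm N}\left(\frac{1}{{\rm N}}\sum_{i=1}^{{\rm N}}\hat\sigma^z_i\right)^{{\rm p}}, \qquad H_x = -\sum_{i=1}^{{\rm N}}\hat\sigma^x_i,$$
$$|\psi_{\rm P}(\boldsymbol{\gamma},\boldsymbol{\beta})\rangle = e^{-i\beta_{\rm P} H_x}\,e^{-i\gamma_{\rm P} H_z}\cdots e^{-i\beta_1 H_x}\,e^{-i\gamma_1 H_z}\,|+\rangle^{\otimes {\rm N}},$$
where the $2{\rm P}$ angles $(\gamma_1,\dots,\gamma_{\rm P},\beta_1,\dots,\beta_{\rm P})$ are optimized variationally so that $|\psi_{\rm P}\rangle$ approaches the target state, here the ground state of $H_z + \Gamma H_x$ at transverse field $\Gamma$.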
Related papers
- Trainability Barriers in Low-Depth QAOA Landscapes [0.0]
The Quantum Alternating Operator Ansatz (QAOA) is a prominent variational quantum algorithm for solving optimization problems.
Previous results have given analytical performance guarantees for a small, fixed number of parameters.
We study the difficulty of training in the intermediate regime, which is the focus of most current numerical studies.
arXiv Detail & Related papers (2024-02-15T18:45:30Z) - Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Transformers as Support Vector Machines [54.642793677472724]
We establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem.
We characterize the implicit bias of 1-layer transformers optimized with gradient descent.
We believe these findings inspire the interpretation of transformers as a hierarchy of SVMs that separates and selects optimal tokens.
arXiv Detail & Related papers (2023-08-31T17:57:50Z) - Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum
Minimization [52.25843977506935]
We propose an adaptive variance-reduction method, called AdaSpider, for $L$-smooth, non-convex functions with a finite-sum structure.
In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\left(n + \sqrt{n}/\epsilon^{2}\right)$ calls.
arXiv Detail & Related papers (2022-11-03T14:41:46Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - On Acceleration of Gradient-Based Empirical Risk Minimization using
Local Polynomial Regression [0.491574468325115]
We study the acceleration of the Local Polynomial Interpolation-based Gradient Descent method (LPI-GD), recently proposed for the approximate solution of empirical risk minimization (ERM) problems.
We focus on loss functions that are strongly convex and smooth with condition number $\sigma$.
We propose two accelerated methods for the problem based on LPI-GD and show an oracle complexity of $\tilde{O}\left(\sqrt{\sigma}\, m d \log(1/\varepsilon)\right)$.
arXiv Detail & Related papers (2022-04-16T02:39:45Z) - On the energy landscape of symmetric quantum signal processing [1.5301252700705212]
We prove that one particular global solution (the global minimum) belongs to a neighborhood on which the cost function is strongly convex.
This provides a partial explanation of the success of optimization algorithms.
arXiv Detail & Related papers (2021-10-11T04:16:48Z) - A New Framework for Variance-Reduced Hamiltonian Monte Carlo [88.84622104944503]
We propose a new framework of variance-reduced Hamiltonian Monte Carlo (HMC) methods for sampling from an $L$-smooth and $m$-strongly log-concave distribution.
We show that HMC methods based on unbiased gradient estimators, including SAGA and SVRG, achieve the highest gradient efficiency with small batch sizes.
Experimental results on both synthetic and real-world benchmark data show that our new framework significantly outperforms the full-gradient and stochastic-gradient HMC approaches.
arXiv Detail & Related papers (2021-02-09T02:44:24Z) - Estimation in Tensor Ising Models [5.161531917413708]
We consider the problem of estimating the natural parameter of the $p$-tensor Ising model given a single sample from the distribution on $N$ nodes.
In particular, we show the $\sqrt{N}$-consistency of the MPL estimate in the $p$-spin Sherrington-Kirkpatrick (SK) model.
We derive the precise fluctuations of the MPL estimate in the special case of the $p$-tensor Curie-Weiss model.
arXiv Detail & Related papers (2020-08-29T00:06:58Z) - Fast decentralized non-convex finite-sum optimization with recursive
variance reduction [19.540926205375857]
We describe a first-order gradient method, called GT-SARAH, that employs a SARAH-type variance reduction technique.
In particular, in a big-data regime such that $n = O(N^{\frac{1}{2}}\lambda^{3})$, this complexity reduces to $O(N^{\frac{1}{2}}L\epsilon^{-2})$, independent of the network topology.
In addition, we show that appropriate choices of the local minibatch size balance the trade-offs between gradient complexity and communication complexity.
arXiv Detail & Related papers (2020-08-17T15:51:32Z) - A Simple Convergence Proof of Adam and Adagrad [74.24716715922759]
We show a proof of convergence for both the Adam and Adagrad algorithms, with a rate of $O(d\ln(N)/\sqrt{N})$.
Adam converges at the same rate $O(d\ln(N)/\sqrt{N})$ when used with the default parameters.
arXiv Detail & Related papers (2020-03-05T01:56:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.