Operator Sampling for Shot-frugal Optimization in Variational Algorithms
- URL: http://arxiv.org/abs/2004.06252v1
- Date: Tue, 14 Apr 2020 01:00:07 GMT
- Title: Operator Sampling for Shot-frugal Optimization in Variational Algorithms
- Authors: Andrew Arrasmith, Lukasz Cincio, Rolando D. Somma, Patrick J. Coles
- Abstract summary: We introduce a strategy for reducing the number of measurements (i.e., shots) by randomly sampling operators $h_i$ from the overall Hamiltonian $H = \sum_i c_i h_i$.
We implement this and other optimizers to find the ground states of the molecules H$_2$, LiH, and BeH$_2$, without and with quantum hardware noise, and Rosalin outperforms the other optimizers in most cases.
- Score: 0.860968116787044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum chemistry is a near-term application for quantum computers. This
application may be facilitated by variational quantum-classical algorithms
(VQCAs), although a concern for VQCAs is the large number of measurements
needed for convergence, especially for chemical accuracy. Here we introduce a
strategy for reducing the number of measurements (i.e., shots) by randomly
sampling operators $h_i$ from the overall Hamiltonian $H = \sum_i c_i h_i$. In
particular, we employ weighted sampling, which is important when the $c_i$'s
are highly non-uniform, as is typical in chemistry. We integrate this strategy
with an adaptive optimizer developed recently by our group to construct an
improved optimizer called Rosalin (Random Operator Sampling for Adaptive
Learning with Individual Number of shots). Rosalin implements stochastic
gradient descent while adapting the shot noise for each partial derivative and
randomly assigning the shots amongst the $h_i$ according to a weighted
distribution. We implement this and other optimizers to find the ground states
of molecules H$_2$, LiH, and BeH$_2$, without and with quantum hardware noise,
and Rosalin outperforms other optimizers in most cases.
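The weighted-sampling estimator at the core of this strategy can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `measure_term` is a hypothetical stand-in for single-shot device measurements of a term $h_i$, shots are distributed with probabilities proportional to $|c_i|$, and importance weights $c_i / p_i$ keep the energy estimate unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_energy(coeffs, measure_term, total_shots):
    """Unbiased estimate of <H> = sum_i c_i <h_i> via weighted operator
    sampling: each shot is spent on term i with probability
    p_i = |c_i| / sum_j |c_j|.

    measure_term(i, shots) -> array of `shots` single-shot eigenvalue
    outcomes of h_i (a stand-in for the quantum device).
    """
    coeffs = np.asarray(coeffs, dtype=float)
    probs = np.abs(coeffs) / np.abs(coeffs).sum()

    # Distribute the whole shot budget among the terms in one draw.
    shots_per_term = rng.multinomial(total_shots, probs)

    estimate = 0.0
    for i, s_i in enumerate(shots_per_term):
        if s_i == 0:
            continue  # terms left unsampled this round cost no shots
        outcomes = measure_term(i, s_i)
        # Importance weight c_i / p_i keeps the estimator unbiased.
        estimate += (coeffs[i] / probs[i]) * np.sum(outcomes)
    return estimate / total_shots

# Toy check with simulated +/-1 outcomes whose means are fixed by hand:
term_means = [0.9, -0.4, 0.1]          # hypothetical <h_i> values
def fake_measure(i, shots):
    p_plus = (1.0 + term_means[i]) / 2.0
    return 2.0 * rng.binomial(1, p_plus, size=shots) - 1.0

coeffs = [5.0, -1.2, 0.3]
print(sampled_energy(coeffs, fake_measure, 4000))  # approx sum_i c_i <h_i>
```

Note how highly non-uniform coefficients (as in chemistry Hamiltonians) automatically concentrate shots on the dominant terms.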
Related papers
- A Multilevel Approach For Solving Large-Scale QUBO Problems With Noisy Hybrid Quantum Approximate Optimization [3.3493770627144004]
We experimentally test how existing quantum processing units (QPUs) perform as subsolvers within a multilevel strategy.
We find approximate solutions to $10$ instances of fully connected $82$-qubit Sherrington-Kirkpatrick graphs.
We observe that the quantum optimization results are competitive with classical solvers in terms of solution quality.
arXiv Detail & Related papers (2024-08-14T20:06:32Z)
- Semi-Discrete Optimal Transport: Nearly Minimax Estimation With Stochastic Gradient Descent and Adaptive Entropic Regularization [38.67914746910537]
We prove an $\mathcal{O}(t^{-1})$ lower bound rate for the OT map, using the similarity between Laguerre cell estimation and density support estimation.
To nearly achieve the desired fast rate, we design an entropic regularization scheme decreasing with the number of samples.
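As a hedged illustration of what a sample-size-dependent regularization can look like (the schedule below is a generic placeholder, not the paper's tuned choice):

```latex
% Generic decreasing schedule for the entropic regularization strength,
% with a placeholder decay exponent \beta:
\varepsilon_t = \varepsilon_0 \, t^{-\beta}, \qquad \beta \in (0, 1),
% so the regularized semi-dual objective approaches the unregularized
% OT problem as the number of samples t grows.
```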
arXiv Detail & Related papers (2024-05-23T11:46:03Z)
- Multi-block-Single-probe Variance Reduced Estimator for Coupled Compositional Optimization [49.58290066287418]
We propose a novel method named Multi-block-Single-probe Variance Reduced (MSVR) estimator to reduce the complexity of compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z)
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving nonconvex optimization problems.
In this paper, we study two variance-reduced variants, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
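To fix ideas, an SVRG-style variance-reduced Langevin update looks roughly as follows; this is a generic sketch for a finite-sum target, not the exact algorithms or step rules analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def svrg_langevin(grad_i, n, x0, eta, inv_temp, epochs, inner_steps, batch=1):
    """Variance-reduced stochastic gradient Langevin dynamics (sketch).

    grad_i(i, x) -> gradient of the i-th component at x, for the target
    F(x) = (1/n) * sum_i f_i(x); `inv_temp` is the inverse temperature.
    """
    x = np.array(x0, dtype=float)
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot, reused to cancel minibatch noise.
        full_grad = np.mean([grad_i(i, snapshot) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            idx = rng.integers(n, size=batch)
            correction = np.mean(
                [grad_i(i, x) - grad_i(i, snapshot) for i in idx], axis=0
            )
            v = full_grad + correction  # variance-reduced gradient estimate
            noise = rng.normal(size=x.shape)
            x = x - eta * v + np.sqrt(2.0 * eta / inv_temp) * noise
    return x
```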
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
- Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
The proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p = 2, \ldots, 6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
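For context, the baseline level-$p$ variational state such algorithms optimize over (which the paper's twisted construction then modifies) can be written as:

```latex
% Level-p ansatz: C is the cost Hamiltonian, B a mixer such as \sum_j X_j.
|\psi_p(\gamma, \beta)\rangle
  = \prod_{k=1}^{p} e^{-i \beta_k B} \, e^{-i \gamma_k C} \, |+\rangle^{\otimes n},
\qquad
E_p = \min_{\gamma, \beta} \,
      \langle \psi_p(\gamma, \beta) \,|\, C \,|\, \psi_p(\gamma, \beta) \rangle .
```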
arXiv Detail & Related papers (2022-03-01T19:47:16Z)
- Quantum Algorithms for Reinforcement Learning with a Generative Model [16.168901236223117]
Reinforcement learning studies how an agent should interact with an environment to maximize its cumulative reward.
We design quantum algorithms that approximate an optimal policy ($\pi^*$), the optimal value function ($v^*$), and the optimal $Q$-function ($q^*$).
We show that our quantum algorithm for computing $q^*$ is optimal by proving a matching quantum lower bound.
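These are the standard objects of a discounted Markov decision process; for reference, with discount factor $\gamma \in [0, 1)$:

```latex
v^*(s) = \max_{a} q^*(s, a), \qquad
q^*(s, a) = r(s, a) + \gamma \sum_{s'} p(s' \mid s, a)\, v^*(s'), \qquad
\pi^*(s) \in \operatorname*{arg\,max}_{a} q^*(s, a).
```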
arXiv Detail & Related papers (2021-12-15T19:51:49Z)
- Adaptive shot allocation for fast convergence in variational quantum algorithms [0.0]
We present a new gradient descent method using an adaptive number of shots at each step, called the global Coupled Adaptive Number of Shots (gCANS) method.
These improvements reduce both the time and money required to run VQAs on current cloud platforms.
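The flavor of such a rule can be sketched as follows; this is a schematic in the spirit of gCANS (noisier gradient components get more shots), with the function name and clipping bounds chosen here for illustration rather than taken from the paper.

```python
import numpy as np

def allocate_shots(sigmas, grad_norm_sq, lipschitz, lr, s_min=2, s_max=10_000):
    """Schematic adaptive shot allocation for one gradient step.

    sigmas[i] is an estimate of the shot-noise standard deviation of the
    i-th partial derivative; components with larger noise receive more
    shots, and a large gradient norm lets every component use fewer.
    """
    sigmas = np.asarray(sigmas, dtype=float)
    prefactor = 2.0 * lipschitz * lr / (2.0 - lipschitz * lr)
    raw = prefactor * sigmas * sigmas.sum() / max(grad_norm_sq, 1e-12)
    return np.clip(np.ceil(raw).astype(int), s_min, s_max)

# Example: three partial derivatives with unequal noise levels.
print(allocate_shots(sigmas=[0.5, 2.0, 0.1], grad_norm_sq=0.04,
                     lipschitz=1.0, lr=0.1))
```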
arXiv Detail & Related papers (2021-08-23T22:29:44Z)
- Reducing the Variance of Gaussian Process Hyperparameter Optimization with Preconditioning [54.01682318834995]
Preconditioning is a highly effective step for any iterative method involving matrix-vector multiplication.
We prove that preconditioning has an additional benefit that has been previously unexplored.
It can simultaneously reduce variance at essentially negligible cost.
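For reference, the kind of iteration being preconditioned is standard conjugate gradients for a symmetric positive definite system; below is a generic textbook sketch, where `matvec` and `precond` are placeholders (e.g., a GP kernel-matrix product and a cheap approximate inverse).

```python
import numpy as np

def pcg(matvec, b, precond, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradients for A x = b, with A symmetric
    positive definite and supplied implicitly through `matvec`.
    `precond(r)` applies an approximate inverse of A to the residual."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_next = r @ z
        p = z + (rz_next / rz) * p
        rz = rz_next
    return x

# Example: Jacobi (diagonal) preconditioner on a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(lambda v: A @ v, b, lambda r: r / np.diag(A))
print(x, A @ x)  # A @ x should be close to b
```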
arXiv Detail & Related papers (2021-07-01T06:43:11Z)
- Adaptive Random Quantum Eigensolver [0.0]
We introduce a general method to parametrize and optimize the probability density function of a random number generator.
Our optimization is based on two figures of merit: learning speed and learning accuracy.
arXiv Detail & Related papers (2021-06-28T12:01:05Z)
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy [49.3037538647714]
We present APQ for efficient deep learning inference on resource-constrained hardware.
Unlike previous methods that separately search the neural architecture, pruning policy, and quantization policy, we optimize them in a joint manner.
With the same accuracy, APQ reduces the latency/energy by 2x/1.3x over MobileNetV2+HAQ.
arXiv Detail & Related papers (2020-06-15T16:09:17Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
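For readers unfamiliar with this stationarity measure, the Moreau envelope of $f$ with parameter $\lambda > 0$ is:

```latex
f_\lambda(x) = \min_{y} \left\{ f(y) + \tfrac{1}{2\lambda} \lVert y - x \rVert^2 \right\},
```

and a small $\lVert \nabla f_\lambda(x) \rVert$ certifies that $x$ is close to a near-stationary point of the weakly convex $f$.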
arXiv Detail & Related papers (2020-06-11T17:43:19Z)