Calculating the expected value function of a two-stage stochastic
optimization program with a quantum algorithm
- URL: http://arxiv.org/abs/2402.15029v1
- Date: Fri, 23 Feb 2024 00:07:34 GMT
- Title: Calculating the expected value function of a two-stage stochastic
optimization program with a quantum algorithm
- Authors: Caleb Rotello, Peter Graf, Matthew Reynolds, Cody James Winkleblack,
Wesley Jones
- Abstract summary: Two-stage optimization involves calculating the expected cost of future decisions to inform the best decision in the present.
We provide a quantum algorithm to estimate the expected value function in a two-stage optimization problem in time largely independent of the complexity of the random variable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic optimization problems are powerful models for the operation of
systems under uncertainty and are in general computationally intensive to
solve. Two-stage stochastic optimization is one such problem, where the
objective function involves calculating the expected cost of future decisions
to inform the best decision in the present. In general, even approximating this
expectation value is a #P-hard problem. We provide a quantum algorithm to
estimate the expected value function in a two-stage stochastic optimization
problem with time complexity largely independent of the complexity of the
random variable. Our algorithm works in two steps: (1) By representing the
random variable in a register of qubits and using this register as control
logic for a cost Hamiltonian acting on the primary system of qubits, we use the
quantum alternating operator ansatz (QAOA) with operator angles following an
annealing schedule to converge to the minimal decision for each scenario in
parallel. (2) We then use Quantum Amplitude Estimation (QAE) to approximate the
expected value function of the per-scenario optimized wavefunction. We show
that the annealing time of the procedure in (1) is independent of the number of
scenarios in the probability distribution. Additionally, the estimation error in
QAE converges inverse-linearly in the number of "repetitions" of the algorithm,
as opposed to the inverse-square-root convergence of traditional Monte Carlo
sampling. Because both QAOA and QAE are expected to have polynomial
advantage over their classical computing counterparts, we expect our algorithm
to have polynomial advantage over classical methods to compute the expected
value function. We implement our algorithms for a simple optimization problem
inspired by operating the power grid with renewable generation and uncertainty
in the weather, and give numerical evidence to support our arguments.
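As a rough illustration of the two-step structure, here is a minimal classical statevector sketch (a hypothetical toy, not the authors' implementation): a scenario register carries the probability distribution, a scenario-controlled cost phase alternates with a mixer on the decision register along a linear annealing schedule, and the amplitude-encoded expectation is read out directly in place of QAE. All names, costs, and schedule constants are illustrative assumptions.
```python
import numpy as np

n_dec = 3                                   # decision register: 8 candidate decisions
p_s = np.array([0.5, 0.3, 0.2])             # scenario probabilities (toy values)
rng = np.random.default_rng(7)
cost = rng.uniform(0.0, 1.0, size=(p_s.size, 2**n_dec))  # toy cost table c(s, x)

# Joint state: sum_s sqrt(p_s)|s> tensored with a uniform superposition over x.
amp = np.sqrt(p_s)[:, None] * np.full(2**n_dec, 2**(-n_dec / 2), dtype=complex)

def mixer(beta, n):
    """exp(+i * beta * sum_j X_j) on n qubits (mixer for H_B = -sum_j X_j)."""
    rx = np.array([[np.cos(beta), 1j * np.sin(beta)],
                   [1j * np.sin(beta), np.cos(beta)]])
    m = np.array([[1.0 + 0j]])
    for _ in range(n):
        m = np.kron(m, rx)
    return m

layers = 80
for k in range(1, layers + 1):
    s = k / layers                          # linear annealing schedule, s: 0 -> 1
    amp = amp * np.exp(-1j * 3.0 * s * cost)   # scenario-controlled cost phase
    amp = amp @ mixer(1.0 - s, n_dec)          # mixer acts on decisions only, so
                                               # scenario branches anneal in parallel

prob = np.abs(amp) ** 2                        # joint distribution over (s, x)
evf_anneal = float(np.sum(prob * cost))        # <psi|H_cost|psi>, read out classically
                                               # here; the paper instead applies QAE,
                                               # whose error falls as ~1/N versus
                                               # ~1/sqrt(N) for Monte Carlo sampling
evf_exact = float(np.sum(p_s * cost.min(axis=1)))  # sum_s p_s * min_x c(s, x)
print(f"annealed estimate: {evf_anneal:.3f}   exact EVF: {evf_exact:.3f}")
```
In the full algorithm the final readout would be performed by QAE on the cost observable, recovering the inverse-linear error scaling; the classical readout above only exhibits the scenario-parallel structure of step (1).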
Related papers
- Analysis of the Non-variational Quantum Walk-based Optimisation Algorithm [0.0]
This paper introduces in detail a non-variational quantum algorithm designed to solve a wide range of optimisation problems.
The algorithm returns optimal and near-optimal solutions from repeated preparation and measurement of an amplified state.
arXiv Detail & Related papers (2024-07-29T13:54:28Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Algorithmic Cluster Expansions for Quantum Problems [0.0]
We establish a general framework for developing approximation algorithms for a class of counting problems.
We apply our framework to approximating probability amplitudes of a class of quantum circuits close to the identity.
We show that our algorithmic condition is almost optimal for expectation values and optimal for thermal expectation values in the sense of zero freeness.
arXiv Detail & Related papers (2023-06-15T09:11:48Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
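For background on the entry above (standard material, not the paper's own notation): entropic regularization is the usual device that makes optimal transport amenable to fast first-order schemes such as extragradient methods.
```latex
\min_{P \in \Pi(a, b)} \langle C, P \rangle - \eta H(P),
\qquad
\Pi(a, b) = \{ P \ge 0 : P \mathbf{1} = a,\; P^{\top} \mathbf{1} = b \},
\qquad
H(P) = -\sum_{i,j} P_{ij} \log P_{ij},
```
where $C$ is the ground-cost matrix, $a, b$ the two marginals, and $\eta > 0$ the regularization strength.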
- Iteration Complexity of Variational Quantum Algorithms [5.203200173190989]
We argue that noise makes evaluations of the objective function via quantum circuits biased.
We derive the missing guarantees and find that the rate of convergence is unaffected.
arXiv Detail & Related papers (2022-09-21T19:18:41Z)
- Quantum Goemans-Williamson Algorithm with the Hadamard Test and Approximate Amplitude Constraints [62.72309460291971]
We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
arXiv Detail & Related papers (2022-06-30T03:15:23Z)
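For reference, the Hadamard test mentioned above rests on a standard textbook identity (generic notation, not the paper's): with an ancilla prepared in $|+\rangle$ controlling a unitary $U$, an X-basis measurement of the ancilla satisfies
```latex
\Pr(0) - \Pr(1) = \operatorname{Re} \langle \psi | U | \psi \rangle,
```
and preparing the ancilla in $\tfrac{1}{\sqrt{2}}(|0\rangle - i|1\rangle)$ instead yields the imaginary part.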
- Quantum algorithms for numerical differentiation of expected values with respect to parameters [0.0]
We consider the expected value of a function of a stochastic variable and a real-valued parameter, and how to calculate derivatives of the expectation with respect to the parameter.
Based on QMCI and the general-order central difference formula for numerical differentiation, we propose two quantum methods for this problem.
We see that, depending on the smoothness of the function and the number of qubits available, either of two methods is better than the other.
arXiv Detail & Related papers (2021-11-22T06:50:25Z)
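As a concrete instance of the central difference formulas invoked above (the lowest-order case, written in generic notation rather than the paper's):
```latex
\frac{\partial}{\partial \theta} \, \mathbb{E}\!\left[ f(X, \theta) \right]
= \frac{\mathbb{E}\!\left[ f(X, \theta + h) \right] - \mathbb{E}\!\left[ f(X, \theta - h) \right]}{2h}
+ O(h^{2}),
```
with higher-order versions combining more evaluation points to cancel additional Taylor terms.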
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
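For context, the population Wasserstein barycenter referenced above solves, in standard (generic) notation,
```latex
\min_{\nu} \; \mathbb{E}_{\mu \sim \mathbb{P}} \left[ W_2^{2}(\mu, \nu) \right],
```
with the expectation over the random measure $\mu$ approximated in practice from the online stream of samples.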
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
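The Moreau envelope appearing in the rate above is the standard smoothing of a weakly convex function $\varphi$ (definition reproduced for convenience, in generic notation):
```latex
\varphi_{\lambda}(x) = \min_{y} \left\{ \varphi(y) + \frac{1}{2\lambda} \lVert x - y \rVert^{2} \right\},
```
whose gradient norm is the usual near-stationarity measure for weakly convex problems.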
- Stochastic Gauss-Newton Algorithms for Nonconvex Compositional Optimization [26.313415590777858]
We develop two new stochastic Gauss-Newton algorithms for solving a class of non-convex compositional optimization problems.
We consider both the expectation and finite-sum settings under standard assumptions.
arXiv Detail & Related papers (2020-02-17T22:56:45Z)
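The compositional problem class in the last entry follows a common template (generic notation, reproduced here only for orientation):
```latex
\min_{x} \; f\big(g(x)\big),
\qquad
g(x) = \mathbb{E}_{\xi}\left[ g_{\xi}(x) \right],
```
where the inner map $g$ is accessible only through stochastic samples $g_{\xi}$, which makes plain stochastic gradient estimators biased and motivates Gauss-Newton-type schemes.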
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.