Quantifying the advantages of applying quantum approximate algorithms to portfolio optimisation
- URL: http://arxiv.org/abs/2410.16265v1
- Date: Mon, 21 Oct 2024 17:59:05 GMT
- Title: Quantifying the advantages of applying quantum approximate algorithms to portfolio optimisation
- Authors: Haomu Yuan, Christopher K. Long, Hugo V. Lepage, Crispin H. W. Barnes
- Abstract summary: We present an end-to-end quantum approximate optimisation algorithm (QAOA) to solve the discrete global minimum variance portfolio model.
This model finds a portfolio of risky assets with the lowest possible risk contingent on the number of traded assets being discrete.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a quantum algorithm for portfolio optimisation. Specifically, we present an end-to-end quantum approximate optimisation algorithm (QAOA) to solve the discrete global minimum variance portfolio (DGMVP) model. This model finds a portfolio of risky assets with the lowest possible risk, contingent on the number of traded assets being discrete. We provide a complete pipeline for this model and analyse its viability for noisy intermediate-scale quantum computers. We design initial states, a cost operator, and ans\"atze with hard mixing operators within a binary encoding. Further, we perform numerical simulations to analyse several optimisation routines, including layerwise optimisation, utilising COBYLA and dual annealing. Finally, we consider the impacts of thermal relaxation and stochastic measurement noise. We find that dual annealing with a layerwise optimisation routine provides the most robust performance. We observe that realistic thermal relaxation noise levels preclude quantum advantage. However, stochastic measurement noise will dominate when hardware sufficiently improves. Within this regime, we numerically demonstrate a favourable scaling in the number of shots required to obtain the global minimum -- an indication of quantum advantage in portfolio optimisation.
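The DGMVP objective the QAOA targets can be illustrated classically. Below is a minimal sketch, not the paper's code: the covariance matrix, the number of encoding bits per asset, and the fixed budget of traded units are all made-up assumptions. It brute-forces the discrete global minimum variance portfolio that the QAOA is designed to approximate, with integer holdings binary-encoded exactly as a cost operator would encode them.

```python
import itertools
import numpy as np

def dgmvp_bruteforce(cov, bits=2, budget=3):
    """Minimise portfolio variance x' C x over integer holdings w,
    where each holding is in [0, 2**bits) and sum(w) == budget."""
    n = cov.shape[0]
    best_risk, best_w = np.inf, None
    for w in itertools.product(range(2**bits), repeat=n):
        if sum(w) != budget:              # discrete budget constraint
            continue
        x = np.array(w) / budget          # normalised portfolio weights
        risk = x @ cov @ x                # portfolio variance
        if risk < best_risk:
            best_risk, best_w = risk, w
    return best_w, best_risk

# toy covariance matrix (illustrative, not from the paper)
cov = np.array([[0.10, 0.02, 0.04],
                [0.02, 0.08, 0.01],
                [0.04, 0.01, 0.12]])
w, risk = dgmvp_bruteforce(cov)
print(w, round(risk, 4))  # -> (1, 1, 1) 0.0489
```

This exhaustive search scales exponentially in the number of assets, which is exactly the regime where the paper's QAOA pipeline is intended to help.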
Related papers
- Enhancing Quantum Expectation Values via Exponential Error Suppression and CVaR Optimization [2.526146573337397]
This paper presents a framework that combines the Virtual Channel Purification (VCP) technique with Conditional Value-at-Risk (CVaR) optimization to improve expectation-value estimation.
Our contributions are twofold: first, we derive conditions for comparing CVaR values from different probability distributions, offering insight into the reliability of quantum estimations under noise.
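The CVaR aggregation at the heart of such schemes is short enough to sketch. This is an illustration, not the paper's code: instead of averaging all sampled energies, keep only the best (lowest) alpha-fraction of shots; the sample energies and the level `alpha` are made up.

```python
import numpy as np

def cvar(samples, alpha=0.25):
    """Mean of the lowest alpha-fraction of samples, alpha in (0, 1]."""
    s = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(s))))  # number of shots kept
    return s[:k].mean()

# illustrative measured energies from a variational circuit
energies = [0.9, -1.0, 0.1, -0.8, 0.5, -0.2, 0.3, -0.6]
print(cvar(energies, alpha=0.25))  # mean of the two lowest: -0.9
```

With `alpha=1.0` this reduces to the ordinary sample mean; smaller `alpha` biases the estimator toward the low-energy tail, which is what makes CVaR useful as a variational objective.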
arXiv Detail & Related papers (2025-01-30T17:24:01Z)
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO).
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization, where the inner loss function becomes a smooth probability distribution, and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z)
- Optimal and robust error filtration for quantum information processing [0.0]
Error filtration is a hardware scheme that mitigates noise by exploiting auxiliary qubits and entangling gates.
We benchmark our approach against figures of merit that correspond to different applications.
arXiv Detail & Related papers (2024-09-02T17:58:44Z)
- Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z)
- Trainability Analysis of Quantum Optimization Algorithms from a Bayesian Lens [2.9356265132808024]
We show that a noiseless QAOA circuit with a depth of $\tilde{\mathcal{O}}\left(\log n\right)$ can be trained efficiently.
Our results offer theoretical insight into the performance of quantum algorithms in the noisy intermediate-scale quantum era.
arXiv Detail & Related papers (2023-10-10T02:56:28Z)
- QAOA Performance in Noisy Devices: The Effect of Classical Optimizers and Ansatz Depth [0.32985979395737786]
The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm for Noisy Intermediate-Scale Quantum (NISQ) computers.
This paper presents an investigation into the impact of realistic noise on the classical optimizers.
We find that, while there is no significant difference in the performance of classical optimizers in a noiseless state-vector simulation, the Adam and AMSGrad optimizers perform best in the presence of shot noise.
arXiv Detail & Related papers (2023-07-19T17:22:44Z)
- Portfolio optimization with discrete simulated annealing [0.0]
We present an integer optimization method to find optimal portfolios in the presence of discretized convex and non-convex cost functions.
This allows us to achieve a solution with a given quality.
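The discrete annealing idea can be sketched in a few lines. This is an illustrative toy in the spirit of the cited approach, not the paper's implementation: the covariance matrix, the linear cooling schedule, and the move set (shifting one unit of holding between assets, which preserves the budget) are all assumptions.

```python
import random
import numpy as np

def anneal(cov, budget=4, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over integer holdings summing to `budget`."""
    rng = random.Random(seed)
    n = cov.shape[0]

    def risk(w):
        x = np.array(w) / budget          # normalised weights
        return float(x @ cov @ x)         # portfolio variance

    w = [budget] + [0] * (n - 1)          # start: all units in asset 0
    cur = risk(w)
    best_w, best = list(w), cur
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9   # linear cooling schedule
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j or w[i] == 0:
            continue
        cand = list(w)                    # move one unit from i to j
        cand[i] -= 1
        cand[j] += 1
        c = risk(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cur or rng.random() < np.exp((cur - c) / t):
            w, cur = cand, c
            if cur < best:
                best_w, best = list(w), cur
    return best_w, best

cov = np.array([[0.10, 0.02, 0.04],
                [0.02, 0.08, 0.01],
                [0.04, 0.01, 0.12]])
w, r = anneal(cov)
print(w, round(r, 4))
```

Tracking the best-so-far portfolio guarantees the returned solution is never worse than the starting one, even though the annealer is free to accept uphill moves early on.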
arXiv Detail & Related papers (2022-10-03T10:39:05Z)
- Dynamic Asset Allocation with Expected Shortfall via Quantum Annealing [0.0]
We propose a hybrid quantum-classical algorithm to solve a dynamic asset allocation problem.
We compare the results from D-Wave's 2000Q and Advantage quantum annealers using real-world financial data.
Experiments show that assets with higher correlations tend to perform better, which may help in designing practical quantum applications in the near term.
arXiv Detail & Related papers (2021-12-06T17:39:43Z)
- Variational Quantum Optimization with Multi-Basis Encodings [62.72309460291971]
We introduce a new variational quantum algorithm that benefits from two innovations: multi-basis graph encodings and nonlinear activation functions.
These innovations result in increased optimization performance, a two-fold increase in the number of variables encodable per qubit, and a reduction in measurement costs.
arXiv Detail & Related papers (2021-06-24T20:16:02Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds that depend on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
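Gradient clipping of this kind is simple to sketch. The following toy is an illustration under assumed parameters (threshold, step size, and a quadratic objective are all made up): the stochastic gradient is rescaled to a norm bound before each SGD step, which is what tames heavy-tailed noise in the high-probability analyses.

```python
import numpy as np

def clip(g, lam):
    """Rescale gradient g so its Euclidean norm is at most lam."""
    n = np.linalg.norm(g)
    return g if n <= lam else g * (lam / n)

def sgd_clipped(grad, x0, steps=200, lr=0.1, lam=1.0):
    """Plain SGD where every gradient is clipped before the update."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * clip(grad(x), lam)
    return x

# minimise f(x) = ||x||^2 / 2, whose gradient is simply x
x = sgd_clipped(lambda x: x, [5.0, -3.0])
print(np.round(x, 3))
```

Far from the optimum the clipped updates have constant length `lr * lam`, so a single heavy-tailed gradient sample cannot throw the iterate arbitrarily far; near the optimum the clipping becomes inactive and the method behaves like ordinary SGD.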
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for optimizing the ansatze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Bayesian Quantile and Expectile Optimisation [3.3878745408530833]
We propose new variational models for Bayesian quantile and expectile regression that are well-suited for heteroscedastic noise settings.
Our strategies can directly optimise for the quantile and expectile, without requiring replicating observations or assuming a parametric form for the noise.
As illustrated in the experimental section, the proposed approach clearly outperforms the state of the art in the heteroscedastic, non-Gaussian case.
arXiv Detail & Related papers (2020-01-12T20:51:21Z)
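The quantile objective such methods optimise is the pinball loss, which can be sketched directly. This is an illustration with made-up data and quantile level, not the paper's method: minimising the pinball loss over a constant prediction recovers the empirical quantile, here the median.

```python
import numpy as np

def pinball(y, pred, tau):
    """Pinball (quantile) loss: tau-weighted asymmetric absolute error."""
    r = y - pred
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
tau = 0.5  # tau = 0.5 targets the median

# scan constant predictions; the minimiser is the empirical tau-quantile
cands = np.linspace(0, 6, 601)
best = cands[np.argmin([pinball(y, c, tau) for c in cands])]
print(best)  # close to the median, 3.0
```

For `tau` other than 0.5 the loss penalises over- and under-prediction asymmetrically, which is how quantile methods target tails of a noise distribution without assuming a parametric form for it.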
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.