Quantum Subgradient Estimation for Conditional Value-at-Risk Optimization
- URL: http://arxiv.org/abs/2510.04736v1
- Date: Mon, 06 Oct 2025 12:09:43 GMT
- Title: Quantum Subgradient Estimation for Conditional Value-at-Risk Optimization
- Authors: Vasilis Skarlatos, Nikos Konofaos
- Abstract summary: Conditional Value-at-Risk (CVaR) is a leading tail-risk measure in finance. We analyze a quantum subgradient oracle for CVaR minimization based on amplitude estimation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional Value-at-Risk (CVaR) is a leading tail-risk measure in finance, central to both regulatory and portfolio optimization frameworks. Classical estimation of CVaR and its gradients relies on Monte Carlo simulation, incurring $O(1/\epsilon^2)$ sample complexity to achieve $\epsilon$-accuracy. In this work, we design and analyze a quantum subgradient oracle for CVaR minimization based on amplitude estimation. Via a tripartite proposition, we show that CVaR subgradients can be estimated with $O(1/\epsilon)$ quantum queries, even when the Value-at-Risk (VaR) threshold itself must be estimated. We further quantify the propagation of estimation error from the VaR stage to CVaR gradients and derive convergence rates of stochastic projected subgradient descent using this oracle. Our analysis establishes a near-quadratic improvement in query complexity over classical Monte Carlo. Numerical experiments with simulated quantum circuits confirm the theoretical rates and illustrate robustness to threshold estimation noise. This constitutes the first rigorous complexity analysis of quantum subgradient methods for tail-risk minimization.
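For concreteness, here is a minimal classical sketch of the Monte Carlo baseline the paper improves on (not the quantum oracle itself): a Rockafellar-Uryasev-style CVaR subgradient estimator and the projected subgradient loop it plugs into. Function names and parameters are illustrative.

```python
import numpy as np

def cvar_subgradient_mc(weights, returns, alpha=0.95):
    """Classical Monte Carlo CVaR subgradient (Rockafellar-Uryasev form).

    weights : (d,) portfolio weights
    returns : (n, d) sampled asset returns; loss per scenario is -returns @ weights
    alpha   : tail level; CVaR averages the worst (1 - alpha) fraction of losses

    Monte Carlo needs n = O(1/eps^2) samples for eps accuracy -- the baseline
    the quantum oracle improves to O(1/eps) queries.
    """
    losses = -returns @ weights                 # per-scenario loss
    var = np.quantile(losses, alpha)            # plug-in VaR estimate (threshold)
    tail = losses >= var                        # indicator of the alpha-tail
    cvar = losses[tail].mean()                  # CVaR = mean of tail losses
    # d/dw [-returns @ w] = -returns, averaged over the tail scenarios
    subgrad = -returns[tail].mean(axis=0)
    return cvar, subgrad

def projected_subgradient_descent(returns, alpha=0.95, steps=200, lr=0.05):
    """Stochastic projected subgradient descent onto the simplex (long-only weights)."""
    d = returns.shape[1]
    w = np.full(d, 1.0 / d)
    for k in range(steps):
        _, g = cvar_subgradient_mc(w, returns, alpha)
        w = w - lr / np.sqrt(k + 1) * g
        # Euclidean projection onto the probability simplex (Duchi et al.)
        u = np.sort(w)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / np.arange(1, d + 1) > 0)[0][-1]
        w = np.maximum(w - css[rho] / (rho + 1), 0.0)
    return w
```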
Related papers
- High-accuracy log-concave sampling with stochastic queries [70.90863485771405]
We show that high-accuracy guarantees for log-concave sampling are achievable using stochastic gradient queries with subexponential tails. Our framework also provides similar high-accuracy guarantees under zeroth-order (value) queries.
arXiv Detail & Related papers (2026-02-15T23:19:07Z) - Stabilized Maximum-Likelihood Iterative Quantum Amplitude Estimation for Structural CVaR under Correlated Random Fields [0.0]
Conditional Value-at-Risk (CVaR) is a central tail-risk measure in structural mechanics. We develop a quantum-enhanced inference framework that casts CVaR evaluation as a confidence-constrained maximum-likelihood amplitude estimation problem.
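The entry above casts CVaR evaluation as maximum-likelihood amplitude estimation. Below is a classically simulated sketch of the standard maximum-likelihood amplitude estimation recipe (Suzuki et al. style), not the stabilized variant this paper proposes; the Grover-power schedule and shot counts are illustrative.

```python
import numpy as np

def mlae_estimate(a_true, schedule=(0, 1, 2, 4, 8, 16), shots=100, grid=4000, seed=0):
    """Maximum-likelihood amplitude estimation, simulated classically.

    After m Grover iterations, measuring the good state succeeds with probability
    sin^2((2m+1)*theta), where a = sin^2(theta) is the amplitude to estimate.
    We simulate shot counts for each power m and maximize the log-likelihood.
    """
    rng = np.random.default_rng(seed)
    theta_true = np.arcsin(np.sqrt(a_true))
    hits = [rng.binomial(shots, np.sin((2 * m + 1) * theta_true) ** 2)
            for m in schedule]

    thetas = np.linspace(1e-4, np.pi / 2 - 1e-4, grid)  # candidate angles
    loglik = np.zeros(grid)
    for m, h in zip(schedule, hits):
        p = np.clip(np.sin((2 * m + 1) * thetas) ** 2, 1e-12, 1 - 1e-12)
        loglik += h * np.log(p) + (shots - h) * np.log(1 - p)
    return np.sin(thetas[np.argmax(loglik)]) ** 2

# Example: a tail probability P(L >= VaR) encoded as an amplitude
print(mlae_estimate(a_true=0.05))   # close to 0.05
```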
arXiv Detail & Related papers (2026-02-10T14:53:26Z) - Almost Asymptotically Optimal Active Clustering Through Pairwise Observations [59.20614082241528]
We propose a new analysis framework for clustering $M$ items into an unknown number $K$ of distinct groups using noisy and actively collected responses. We establish a fundamental lower bound on the expected number of queries needed to achieve a desired confidence in the accuracy of the clustering. We develop a computationally feasible variant of the Generalized Likelihood Ratio statistic and show that its performance gap to the lower bound can be accurately empirically estimated.
arXiv Detail & Related papers (2026-02-05T14:16:47Z) - Importance Weighted Variational Inference without the Reparameterization Trick [7.6837301319181535]
We show that state-of-the-art VIMCO gradient estimators exhibit a vanishing signal-to-noise ratio (SNR) as $N$ increases. We propose the novel VIMCO-$\star$ gradient estimator and show that it averts the SNR collapse of existing VIMCO gradient estimators.
arXiv Detail & Related papers (2026-02-01T19:39:30Z) - Error Propagation in Dynamic Programming: From Stochastic Control to Option Pricing [0.12891210250935145]
This paper investigates theoretical and methodological foundations for stochastic optimal control (SOC) in discrete time. We start by formulating the control problem in a general dynamic programming framework, introducing the mathematical structure needed for a detailed convergence analysis. We illustrate how our analysis naturally applies to a key financial application: the pricing of American options.
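As a worked instance of discrete-time dynamic programming applied to American option pricing (the application this entry highlights), the following sketch prices an American put by backward induction on a CRR binomial tree. It is a textbook baseline, not the paper's method; all parameters are illustrative.

```python
import numpy as np

def american_put_binomial(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n=500):
    """Price an American put by backward dynamic programming on a CRR binomial tree."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up-move probability
    disc = np.exp(-r * dt)

    # Stock prices and payoffs at maturity (index j = number of down moves)
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(K - S, 0.0)

    # Backward induction: value = max(discounted continuation, immediate exercise)
    for _ in range(n):
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
        S = S[:-1] / u                         # prices one time step earlier
        V = np.maximum(V, K - S)
    return V[0]

print(american_put_binomial())   # about 6.09
```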
arXiv Detail & Related papers (2025-09-24T15:30:19Z) - Finite-Sample Analysis of Policy Evaluation for Robust Average Reward Reinforcement Learning [50.81240969750462]
We present the first finite-sample analysis of policy evaluation in robust average-reward Markov Decision Processes (MDPs). We show that the robust Bellman operator is a contraction under a carefully constructed semi-norm, and we develop a framework with controlled bias. Our method achieves the order-optimal sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ for robust policy evaluation and robust average-reward estimation.
arXiv Detail & Related papers (2025-02-24T03:55:09Z) - Enhancing Quantum Expectation Values via Exponential Error Suppression and CVaR Optimization [2.526146573337397]
This paper presents a framework that combines the Virtual Channel Purification (VCP) technique with Conditional Value-at-Risk (CVaR) optimization to improve expectation-value estimation. Our contributions are twofold: first, we derive conditions to compare CVaR values computed from different probability distributions, offering insights into the reliability of quantum estimations under noise.
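The CVaR aggregation used in this line of work replaces the sample mean of measured energies with the mean of the best alpha-fraction of shots. A minimal sketch (parameter values illustrative):

```python
import numpy as np

def cvar_expectation(energies, alpha=0.2):
    """CVaR aggregation of measured energies: average only the best
    alpha-fraction of shots instead of all of them, which damps the
    influence of noise-corrupted high-energy outcomes."""
    e = np.sort(np.asarray(energies))
    k = max(1, int(np.ceil(alpha * len(e))))
    return e[:k].mean()

# Example: noisy shots -- CVaR focuses on the low-energy tail
shots = np.random.default_rng(1).normal(loc=-1.0, scale=0.3, size=1000)
print(cvar_expectation(shots, alpha=0.1))
```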
arXiv Detail & Related papers (2025-01-30T17:24:01Z) - Convergence Rate Analysis of LION [54.28350823319057]
LION converges to Karush-Kuhn-Tucker (KKT) points at a rate of $\mathcal{O}(\sqrt{d}\,K^{-1/4})$, measured by the $\ell_1$ norm of the gradient.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
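For reference, the LION update being analyzed is simple to state: the sign of an interpolated momentum, plus decoupled weight decay. A minimal NumPy sketch (hyperparameter values illustrative):

```python
import numpy as np

def lion_step(theta, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
    """One LION update: sign of an interpolated momentum, plus decoupled weight
    decay. The sign makes every coordinate move by exactly +/- lr, which is why
    convergence is naturally measured in the gradient's l1 norm."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)   # interpolate, then take sign
    theta = theta - lr * (update + wd * theta)         # decoupled weight decay
    m = beta2 * m + (1 - beta2) * grad                 # momentum tracks raw gradients
    return theta, m
```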
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - Langevin dynamics based algorithm e-TH$\varepsilon$O POULA for stochastic optimization problems with discontinuous stochastic gradient [6.563379950720334]
We introduce a new Langevin dynamics based algorithm, called e-TH$\varepsilon$O POULA, to solve optimization problems with discontinuous stochastic gradients.
Three key applications in finance and insurance are provided, namely, multi-period portfolio optimization, transfer learning in multi-period portfolio optimization, and insurance claim prediction.
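A sketch of the general mechanism: a tamed unadjusted Langevin step, in which the gradient is damped before use so that discontinuous or exploding stochastic gradients cannot blow up the iterate. This follows the generic TULA-style recipe, not the exact e-TH$\varepsilon$O POULA taming; parameter values are illustrative.

```python
import numpy as np

def tamed_langevin_step(theta, grad, lam=1e-2, beta=1e8, rng=np.random.default_rng()):
    """One step of a tamed unadjusted Langevin algorithm (a simplified stand-in
    for the taming used in e-TH-epsilon-O POULA).

    Taming divides the gradient by 1 + lam*|grad| coordinatewise, so a single
    large stochastic gradient stays controlled; the Gaussian noise (inverse
    temperature beta) lets the chain escape shallow local minima.
    """
    tamed = grad / (1.0 + lam * np.abs(grad))                  # coordinatewise taming
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal(theta.shape)
    return theta - lam * tamed + noise
```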
arXiv Detail & Related papers (2022-10-24T13:10:06Z) - Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning [78.30395044401321]
We develop a novel model-based approach to reinforcement learning (MBRL).
It relaxes the assumptions on the target transition model to belong to a generic family of mixture models.
It can achieve up to a 50 percent reduction in wall-clock time in some continuous control environments.
arXiv Detail & Related papers (2022-06-02T17:27:49Z) - Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent algorithm and provide an improved analysis under a more nuanced condition on the gradient noise.
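Gradient clipping is the core mechanism here: cap the norm of each stochastic gradient so heavy-tailed noise cannot dominate a step. A minimal sketch (the paper's exact clipping schedule and analysis conditions differ):

```python
import numpy as np

def clipped_sgd_step(theta, grad, lr=0.01, clip=1.0):
    """Clipped gradient step: rescale the stochastic gradient to norm <= clip.
    Clipping trades a small bias for bounded variance, which is what makes
    estimation robust when the gradient noise is heavy-tailed."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return theta - lr * grad
```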
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Sparse Representations of Positive Functions via First and Second-Order Pseudo-Mirror Descent [15.340540198612823]
We consider expected risk problems when the range of the estimator is required to be nonnegative.
We develop first and second-order variants of stochastic approximation mirror descent employing pseudo-gradients.
Experiments demonstrate favorable performance on inhomogeneous Poisson process intensity estimation in practice.
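The nonnegativity requirement is what motivates mirror descent here: with the entropic mirror map the update is multiplicative, so a positive iterate stays positive with no projection. A one-dimensional illustrative sketch (not the paper's pseudo-gradient construction):

```python
import numpy as np

def entropic_mirror_step(x, grad, lr=0.1):
    """Mirror descent with the entropic mirror map: the update is multiplicative,
    so a positive iterate stays positive -- the simplest way to enforce the
    nonnegativity required of an intensity estimate."""
    return x * np.exp(-lr * grad)

# Example: minimize f(x) = (x - 2)^2 over x > 0
x = np.array([5.0])
for _ in range(100):
    x = entropic_mirror_step(x, 2 * (x - 2), lr=0.05)
print(x)   # -> approx 2.0
```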
arXiv Detail & Related papers (2020-11-13T21:54:28Z) - Large-Scale Methods for Distributionally Robust Optimization [53.98643772533416]
We prove that our algorithms require a number of gradient evaluations independent of training set size and number of parameters.
Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9--36 times more efficient than full-batch methods.
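With a CVaR-style uncertainty set, the DRO subgradient on a mini-batch is just the average gradient over the worst alpha-fraction of example losses, so the per-step cost depends on the batch size, not the training set. A sketch under that assumption (names illustrative):

```python
import numpy as np

def cvar_dro_grad(per_example_losses, per_example_grads, alpha=0.1):
    """Mini-batch subgradient for CVaR-constrained DRO: the adversary puts all
    its weight on the worst alpha-fraction of examples, so the DRO subgradient
    is the average gradient over those examples.

    per_example_losses : (n,) losses for the batch
    per_example_grads  : (n, d) per-example gradients
    """
    k = max(1, int(np.ceil(alpha * len(per_example_losses))))
    worst = np.argsort(per_example_losses)[-k:]    # indices of the worst k losses
    return per_example_grads[worst].mean(axis=0)
```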
arXiv Detail & Related papers (2020-10-12T17:41:44Z)