SPARTA: $χ^2$-calibrated, risk-controlled exploration-exploitation for variational quantum algorithms
- URL: http://arxiv.org/abs/2511.19551v1
- Date: Mon, 24 Nov 2025 13:54:01 GMT
- Title: SPARTA: $χ^2$-calibrated, risk-controlled exploration-exploitation for variational quantum algorithms
- Authors: Mikhail Zubarev
- Abstract summary: Variational quantum algorithms face a fundamental trainability crisis: barren plateaus render optimization exponentially difficult as system size grows. We present the sequential plateau-adaptive regime-testing algorithm (SPARTA), which provides explicit, anytime-valid risk control for quantum optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational quantum algorithms face a fundamental trainability crisis: barren plateaus render optimization exponentially difficult as system size grows. While recent Lie algebraic theory precisely characterizes when and why these plateaus occur, no practical optimization method exists with finite-sample guarantees for navigating them. We present the sequential plateau-adaptive regime-testing algorithm (SPARTA), the first measurement-frugal scheduler that provides explicit, anytime-valid risk control for quantum optimization. Our approach integrates three components with rigorous statistical foundations: (i) a $χ^2$-calibrated sequential test that distinguishes barren plateaus from informative regions using likelihood-ratio supermartingales; (ii) a probabilistic trust-region exploration strategy with one-sided acceptance to prevent false improvements under shot noise; and (iii) a theoretically-optimal exploitation phase that achieves the best attainable convergence rate. We prove geometric bounds on plateau exit times, linear convergence in informative basins, and show how Lie-algebraic variance proxies enhance test power without compromising statistical calibration.
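As a concrete illustration of component (i), here is a minimal sketch of an anytime-valid sequential likelihood-ratio test that separates a low-variance (plateau) gradient regime from an informative one. The Gaussian hypotheses, variance levels, and stopping threshold are illustrative assumptions, not the paper's exact $χ^2$-calibrated construction.

```python
import numpy as np

def sequential_regime_test(grad_samples, sigma0_sq, sigma1_sq, alpha=0.05):
    """Anytime-valid sequential likelihood-ratio test (illustrative).

    H0: gradient components ~ N(0, sigma0_sq)   (barren plateau)
    H1: gradient components ~ N(0, sigma1_sq)   (informative region)
    Under H0 the likelihood-ratio process is a nonnegative supermartingale,
    so stopping when it exceeds 1/alpha bounds the probability of a false
    "informative" verdict by alpha, uniformly over time.
    """
    log_lr = 0.0
    threshold = np.log(1.0 / alpha)
    for t, g in enumerate(grad_samples, start=1):
        # Per-sample Gaussian log-likelihood ratio log p1(g) - log p0(g);
        # the accumulated g**2 terms give the statistic its chi^2 flavor.
        log_lr += 0.5 * (np.log(sigma0_sq / sigma1_sq)
                         + g**2 * (1.0 / sigma0_sq - 1.0 / sigma1_sq))
        if log_lr >= threshold:
            return "informative", t  # plateau hypothesis rejected at time t
    return "undecided", len(grad_samples)

# Example: noisy gradient components drawn from the informative regime.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 0.1, size=500)  # std 0.1 -> variance 1e-2
print(sequential_regime_test(samples, sigma0_sq=1e-4, sigma1_sq=1e-2))
```

Because the likelihood-ratio process is a supermartingale under the plateau hypothesis, Ville's inequality bounds the false-exit probability by $α$ at every stopping time, which is the sense in which such guarantees are anytime-valid.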
Related papers
- Safeguarded Stochastic Polyak Step Sizes for Non-smooth Optimization: Robust Performance Without Small (Sub)Gradients [16.39606116102731]
The stochastic Polyak step size has proven to be a promising adaptive choice for stochastic gradient descent (SGD), but it can blow up when stochastic (sub)gradients become small. In this work, we provide rigorous convergence guarantees for non-smooth optimization with no need for strong assumptions. Comprehensive experiments on deep networks corroborate the theory.
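A minimal sketch of the safeguarded rule the title refers to, assuming the standard stochastic Polyak form with a cap `gamma_max` as the safeguard; the names and defaults are illustrative:

```python
import numpy as np

def sps_step(x, grad, loss, loss_star=0.0, gamma_max=1.0, eps=1e-12):
    """One safeguarded stochastic Polyak step (illustrative sketch).

    gamma_k = min(gamma_max, (f_i(x_k) - f_i^*) / ||g_k||^2); the cap
    gamma_max is the safeguard: without it, the step blows up exactly
    when the stochastic (sub)gradient g_k becomes small.
    """
    gamma = min(gamma_max, (loss - loss_star) / (np.dot(grad, grad) + eps))
    return x - gamma * grad

# Example: one step on a least-squares sample f_i(x) = 0.5*(a@x - b)**2.
a, b = np.array([1.0, 2.0]), 3.0
x = np.zeros(2)
residual = a @ x - b
x = sps_step(x, grad=residual * a, loss=0.5 * residual**2)
```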
arXiv Detail & Related papers (2025-12-02T02:24:32Z)
- Online Inference of Constrained Optimization: Primal-Dual Optimality and Sequential Quadratic Programming [55.848340925419286]
We study online statistical inference for the solutions of stochastic optimization problems with equality and inequality constraints. We develop an online stochastic sequential quadratic programming (SSQP) method to solve these problems, where the step direction is computed by sequentially performing a quadratic approximation of the objective and a linear approximation of the constraints. We show that our method achieves global almost-sure convergence, and that its moving-average iterates exhibit local asymptotic normality with an optimal primal-dual limiting covariance matrix in the sense of Hájek and Le Cam.
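The step the summary describes, a quadratic model of the objective plus linearized constraints, reduces to a single KKT solve. A deterministic sketch of that core step, not the paper's online stochastic variant:

```python
import numpy as np

def sqp_step(x, grad, hess, c_val, c_jac):
    """One equality-constrained SQP step (illustrative, deterministic core).

    Minimizes the quadratic model grad@d + 0.5*d@hess@d subject to the
    linearized constraints c_jac@d + c_val = 0 by solving the KKT system;
    returns the updated primal iterate and the multiplier estimate.
    """
    n, m = len(x), len(c_val)
    kkt = np.block([[hess, c_jac.T],
                    [c_jac, np.zeros((m, m))]])
    rhs = -np.concatenate([grad, c_val])
    sol = np.linalg.solve(kkt, rhs)
    return x + sol[:n], sol[n:]

# Example: minimize 0.5*||x||^2 subject to x0 + x1 = 1 (one exact step).
x, lam = sqp_step(x=np.zeros(2), grad=np.zeros(2), hess=np.eye(2),
                  c_val=np.array([-1.0]), c_jac=np.array([[1.0, 1.0]]))
```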
arXiv Detail & Related papers (2025-11-27T06:16:17Z)
- Test-time Verification via Optimal Transport: Coverage, ROC, & Sub-optimality [53.03186946689658]
Test-time scaling with verification has shown promise in improving the performance of large language models. The effect of verification manifests through the interaction of three quantities: (i) the generator's coverage, (ii) the verifier's receiver operating characteristic (ROC), and (iii) the sampling algorithm's sub-optimality. We frame verifiable test-time scaling as an optimal transport problem, which characterizes the interaction of coverage, ROC, and sub-optimality.
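A minimal sketch of verifier-guided best-of-n selection, the simplest instance of the setting the summary describes; `generator` and `verifier` are assumed callables, not an API from the paper:

```python
import numpy as np

def best_of_n(generator, verifier, prompt, n=16):
    """Verifier-guided best-of-n selection (illustrative sketch).

    The three quantities map on as follows: coverage is whether any of
    the n samples is correct; the verifier's ROC governs how well the
    scores separate correct from incorrect candidates; sub-optimality
    is the gap from taking the argmax of a noisy verifier instead of
    an oracle.
    """
    candidates = [generator(prompt) for _ in range(n)]
    scores = np.array([verifier(prompt, c) for c in candidates])
    return candidates[int(np.argmax(scores))]
```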
arXiv Detail & Related papers (2025-10-21T18:05:42Z)
- Quantization through Piecewise-Affine Regularization: Optimization and Statistical Guarantees [13.571671030124604]
Piecewise-affine regularization (PAR) provides a flexible modeling framework with both optimization and statistical guarantees. We show how to solve PAR-regularized problems using gradient-based methods and the Alternating Direction Method of Multipliers (ADMM).
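One way a piecewise-affine regularizer can drive weights toward a quantization grid is through its proximal operator. A sketch under the assumption $r(w) = λ \sum_j |w_j - q(w_j)|$ with $q$ the nearest-grid-point map, which need not match the paper's construction:

```python
import numpy as np

def par_prox(w, grid, lam):
    """Prox step for an assumed piecewise-affine quantization regularizer.

    Uses r(w) = lam * sum_j |w_j - q(w_j)|, where q(w_j) is the nearest
    point of `grid`; r is piecewise affine in w, and its prox soft-shrinks
    each weight toward its nearest quantization level.
    """
    q = grid[np.argmin(np.abs(w[:, None] - grid[None, :]), axis=1)]
    resid = w - q
    return q + np.sign(resid) * np.maximum(np.abs(resid) - lam, 0.0)

# Example: pull weights toward the ternary grid {-1, 0, 1}.
w = np.array([-0.9, 0.12, 0.55, 1.3])
print(par_prox(w, grid=np.array([-1.0, 0.0, 1.0]), lam=0.1))
```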
arXiv Detail & Related papers (2025-08-14T23:35:21Z)
- Achieving $\widetilde{\mathcal{O}}(\sqrt{T})$ Regret in Average-Reward POMDPs with Known Observation Models [69.1820058966619]
We tackle average-reward infinite-horizon POMDPs with an unknown transition model but a known observation model, a setting whose partial observability creates a statistical barrier for model estimation. We present a novel and simple estimator that overcomes this barrier.
arXiv Detail & Related papers (2025-01-30T22:29:41Z)
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
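For a single least-squares sample the stochastic proximal point update has a closed form, which makes the stability behind SPPM's robustness to tuning easy to see; a minimal sketch:

```python
import numpy as np

def sppm_step(x, a, b, gamma):
    """Stochastic proximal point step for one least-squares sample.

    Closed-form solution of
        x+ = argmin_z 0.5*(a@z - b)**2 + ||z - x||**2 / (2*gamma);
    unlike an SGD step, it stays stable for any step size gamma > 0.
    """
    residual = a @ x - b
    return x - gamma * residual / (1.0 + gamma * (a @ a)) * a

# Example: a large gamma that would make plain SGD diverge is harmless here.
x = sppm_step(np.zeros(2), np.array([1.0, 2.0]), 3.0, gamma=10.0)
```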
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- FastPart: Over-Parameterized Stochastic Gradient Descent for Sparse optimisation on Measures [3.377298662011438]
This paper presents a novel algorithm that leverages Stochastic Gradient Descent strategies in conjunction with Random Features to improve the scalability of Conic Particle Gradient Descent (CPGD). We provide rigorous mathematical proofs demonstrating the following key findings: $\mathrm{(i)}$ the total variation norms of the solution measures along the descent trajectory remain bounded, ensuring stability and preventing undesirable divergence; $\mathrm{(ii)}$ we establish a global convergence guarantee with a convergence rate of $\mathcal{O}(\log(K)/\sqrt{K})$ over $K$ iterations, showcasing the efficiency and effectiveness of the method.
arXiv Detail & Related papers (2023-12-10T20:41:43Z)
- Distributionally Robust Optimization with Bias and Variance Reduction [9.341215359733601]
We show that Prospect, a stochastic gradient-based algorithm, enjoys linear convergence for smooth regularized losses.
We also show that Prospect can converge 2-3$\times$ faster than stochastic gradient-based baselines.
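Prospect's contribution is bias and variance reduction on top of a spectral-risk update; the sketch below shows only the plain superquantile (CVaR) subgradient step underlying that family, with all names illustrative:

```python
import numpy as np

def cvar_subgrad_step(x, losses, grads, alpha, lr):
    """Plain subgradient step on the superquantile (CVaR) objective.

    CVaR_alpha averages the worst (1 - alpha) fraction of losses, a
    special case of the spectral risk measures Prospect targets; the
    paper's bias and variance reduction is not reproduced here.
    """
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))
    worst = np.argsort(losses)[-k:]  # indices of the k largest losses
    g = np.mean([grads[i] for i in worst], axis=0)
    return x - lr * g
```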
arXiv Detail & Related papers (2023-10-21T00:03:54Z)
- A Learning-Based Optimal Uncertainty Quantification Method and Its Application to Ballistic Impact Problems [1.713291434132985]
This paper concerns the optimal (supremum and infimum) uncertainty bounds for systems where the input (or prior) measure is only partially/imperfectly known.
We demonstrate the learning-based framework on the uncertainty optimization problem.
We show that the approach can be used to construct maps for performance certification and safety analysis in engineering practice.
arXiv Detail & Related papers (2022-12-28T14:30:53Z)
- Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-Constrained Optimization Problems [62.83783246648714]
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and deterministic equality constraints.
The algorithm adaptively selects the trust-region radius and, compared to the existing line-search StoSQP schemes, allows us to utilize indefinite Hessian matrices.
arXiv Detail & Related papers (2022-11-29T05:52:17Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with an unfavorable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
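A minimal sketch of the clipped-gradient step such rules build on; the constant stepsize is a placeholder for the paper's schedules:

```python
import numpy as np

def clipped_sgd_step(x, grad, step, clip_level):
    """SGD step with norm clipping (illustrative sketch).

    Clipping bounds the contribution of heavy-tailed noise, which is the
    mechanism behind high-probability guarantees with mild dependence on
    the confidence level; `step` is a placeholder, not the paper's rule.
    """
    norm = np.linalg.norm(grad)
    if norm > clip_level:
        grad = grad * (clip_level / norm)
    return x - step * grad
```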
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.