Training-Free Certified Bounds for Quantum Regression: A Scalable Framework
- URL: http://arxiv.org/abs/2601.00745v1
- Date: Fri, 02 Jan 2026 17:05:48 GMT
- Title: Training-Free Certified Bounds for Quantum Regression: A Scalable Framework
- Authors: Demerson N. Gonçalves, Tharso D. Fernandes, Pedro H. G. Lugao, João T. Dias,
- Abstract summary: We present a training-free, certified error bound for quantum regression derived from Pauli expectation values. We provide non-asymptotic statistical guarantees to certify performance within a practical measurement budget.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a training-free, certified error bound for quantum regression derived directly from Pauli expectation values. Generalizing the heuristic of minimum accuracy from classification to regression, we evaluate axis-aligned predictors within the Pauli feature space. We formally prove that the optimal axis-aligned predictor constitutes a rigorous upper bound on the minimum training Mean Squared Error (MSE) attainable by any linear or kernel-based regressor defined on the same quantum feature map. Since computing this exact bound requires an intractable scan of the full Pauli basis, we introduce a Monte Carlo framework to efficiently estimate it using a tractable subset of measurement axes. We further provide non-asymptotic statistical guarantees to certify performance within a practical measurement budget. This method enables rapid comparison of quantum feature maps and early diagnosis of expressivity, allowing for the informed selection of architectures before deploying higher-complexity models.
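The abstract specifies the recipe only at a high level: measure Pauli expectation values for each data point, find the single measurement axis whose best affine predictor attains the lowest training MSE, use that MSE as an upper bound on the minimum training MSE of any linear or kernel regressor on the same feature map, and estimate the bound by Monte Carlo sampling of axes rather than a full Pauli-basis scan. The sketch below is a minimal illustration of that idea, not the authors' implementation: the function and parameter names (axis_aligned_mse_bound, n_axes) are assumptions, the expectation values are classically simulated stand-ins, and the paper's finite-shot corrections and non-asymptotic certification are omitted.

```python
import numpy as np

def axis_mse(phi_k, y):
    """Training MSE of the best affine predictor y ~ a * phi_k + b along one axis."""
    y_c = y - y.mean()
    x_c = phi_k - phi_k.mean()
    sxx = np.dot(x_c, x_c)
    if sxx < 1e-12:                       # (near-)constant feature: best predictor is mean(y)
        return float(np.dot(y_c, y_c) / len(y))
    a = np.dot(x_c, y_c) / sxx            # closed-form optimal slope
    resid = y_c - a * x_c
    return float(np.dot(resid, resid) / len(y))

def axis_aligned_mse_bound(expectations, y, n_axes=256, rng=None):
    """Monte Carlo estimate of the axis-aligned upper bound on the minimum training MSE.

    `expectations` is an (n_samples, n_pauli) array of estimated Pauli expectation
    values <P_k>, one row per data point.  Instead of scanning the full
    (exponentially large) Pauli basis, `n_axes` axes are sampled uniformly; the
    best single-axis MSE over the sampled set still upper-bounds the optimal
    training MSE of any linear or kernel regressor on the same feature map.
    """
    rng = np.random.default_rng(rng)
    n_pauli = expectations.shape[1]
    axes = rng.choice(n_pauli, size=min(n_axes, n_pauli), replace=False)
    return min(axis_mse(expectations[:, k], y) for k in axes)

# Toy usage: synthetic "expectation values" stand in for quantum measurements.
phi = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 1024))
y = 0.7 * phi[:, 3] + 0.05 * np.random.default_rng(1).normal(size=100)
print(axis_aligned_mse_bound(phi, y, n_axes=256, rng=2))
```

Because every axis-aligned predictor is itself a linear model in the Pauli feature space, the minimum over any sampled subset of axes can only overestimate the global minimum training MSE, so enlarging n_axes tightens the estimate without invalidating the bound.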
Related papers
- Universal Sample Complexity Bounds in Quantum Learning Theory via Fisher Information matrix [0.6738870040008694]
We show that the sample complexity required in quantum learning theory is governed by the inverse Fisher information matrix. We identify the structural origin of exponential sample complexity in Pauli channel learning without entanglement. As an application, we consider Pauli expectation estimation with entangled probes.
arXiv Detail & Related papers (2026-02-25T02:51:49Z) - Certified Lower Bounds and Efficient Estimation of Minimum Accuracy in Quantum Kernel Methods [0.0]
The minimum accuracy evaluates quantum feature maps without requiring full quantum support vector machine (QSVM) training. This work generalizes the metric to arbitrary binary datasets and formally proves it constitutes a certified lower bound on the optimal empirical accuracy of any linear classifier in the same feature space. (An illustrative sketch of this metric appears after the related-papers list below.)
arXiv Detail & Related papers (2025-12-23T18:34:58Z) - Statistical Inference for Temporal Difference Learning with Linear Function Approximation [55.80276145563105]
We investigate the statistical properties of Temporal Difference learning with Polyak-Ruppert averaging. We make three theoretical contributions that improve upon the current state-of-the-art results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - A sparse PAC-Bayesian approach for high-dimensional quantile prediction [0.0]
This paper presents a novel probabilistic machine learning approach for high-dimensional quantile prediction.
It uses a pseudo-Bayesian framework with a scaled Student-t prior and Langevin Monte Carlo for efficient computation.
Its effectiveness is validated through simulations and real-world data, where it performs competitively against established frequentist and Bayesian techniques.
arXiv Detail & Related papers (2024-09-03T08:01:01Z) - Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs. We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile-regression-based interval construction that removes this arbitrary constraint. We demonstrate that this added flexibility results in intervals with an improvement in desirable qualities.
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Variational Inference with Coverage Guarantees in Simulation-Based Inference [18.818573945984873]
We propose Conformalized Amortized Neural Variational Inference (CANVI).
CANVI constructs conformalized predictors based on each candidate posterior approximation, compares the predictors using a metric known as predictive efficiency, and returns the most efficient predictor.
We prove lower bounds on the predictive efficiency of the regions produced by CANVI and explore how the quality of a posterior approximation relates to the predictive efficiency of prediction regions based on that approximation.
arXiv Detail & Related papers (2023-05-23T17:24:04Z) - Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Ensemble Multi-Quantiles: Adaptively Flexible Distribution Prediction for Uncertainty Quantification [4.728311759896569]
We propose a novel, succinct, and effective approach for distribution prediction to quantify uncertainty in machine learning.
It incorporates adaptively flexible distribution prediction of $\mathbb{P}(\mathbf{y} \mid \mathbf{X}=x)$ in regression tasks.
On extensive regression tasks from UCI datasets, we show that EMQ achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-26T11:45:32Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
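As referenced in the Certified Lower Bounds entry above, the following minimal sketch gives one plausible reading of the minimum-accuracy metric that the headline paper generalizes to regression, assuming it is computed as the best training accuracy of a single-axis threshold rule on sampled Pauli expectation values; since every such rule is itself a linear classifier in the feature space, its accuracy lower-bounds the optimal empirical accuracy of any linear classifier. Names such as minimum_accuracy and n_axes are illustrative, not taken from the cited paper.

```python
import numpy as np

def best_threshold_accuracy(phi_k, labels):
    """Best training accuracy of a threshold rule along one measurement axis.

    `labels` are assumed to be in {-1, +1}.  All n + 1 thresholds between
    sorted feature values are scanned, in both orientations of the rule.
    """
    y = labels[np.argsort(phi_k)]
    n = len(y)
    # cum[i] = number of +1 labels among the i smallest feature values
    cum = np.concatenate(([0], np.cumsum(y == 1)))
    total_pos = cum[-1]
    best = 0.0
    for i in range(n + 1):
        # predict -1 below the threshold and +1 above it ...
        acc = ((i - cum[i]) + (total_pos - cum[i])) / n
        best = max(best, acc, 1.0 - acc)  # ... or use the flipped orientation
    return best

def minimum_accuracy(expectations, labels, n_axes=256, rng=None):
    """Best single-axis threshold accuracy over sampled Pauli axes: a certified
    lower bound on the optimal empirical accuracy of any linear classifier
    defined on the same Pauli feature map."""
    rng = np.random.default_rng(rng)
    n_pauli = expectations.shape[1]
    axes = rng.choice(n_pauli, size=min(n_axes, n_pauli), replace=False)
    return max(best_threshold_accuracy(expectations[:, k], labels) for k in axes)
```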
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.