Tight Generalization Bound for Supervised Quantum Machine Learning
- URL: http://arxiv.org/abs/2510.24348v1
- Date: Tue, 28 Oct 2025 12:13:44 GMT
- Title: Tight Generalization Bound for Supervised Quantum Machine Learning
- Authors: Xin Wang, Rebing Wu
- Abstract summary: We derive a tight generalization bound for quantum machine learning that is applicable to a wide range of supervised tasks, data, and models. Our bound is both efficiently computable and free of big-O notation. We show that our tight generalization upper bound holds even when labels are completely randomized.
- Score: 3.6641231031729173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We derive a tight generalization bound for quantum machine learning that is applicable to a wide range of supervised tasks, data, and models. Our bound is both efficiently computable and free of big-O notation. Furthermore, we point out that previous bounds relying on big-O notation may provide misleading suggestions regarding the generalization error. Our generalization bound demonstrates that for quantum machine learning models of arbitrary size and depth, the sample size is the dominant factor governing the generalization error. The spectral norm of the measurement observable, together with the bound and Lipschitz constant of the selected risk function, also influences the generalization upper bound. However, the number of quantum gates, the number of qubits, the data-encoding method, and hyperparameters chosen during training, such as batch size, number of epochs, learning rate, and optimizer, do not significantly affect the generalization capability of quantum machine learning. We experimentally demonstrate the tightness of our generalization bound across classification and regression tasks. Furthermore, we show that our tight generalization upper bound holds even when labels are completely randomized. We thus bring clarity to the fundamental question of generalization in quantum machine learning.
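To make the claimed dependencies concrete, the sketch below computes a generic Rademacher/Hoeffding-style bound template with the same ingredients the abstract names: a 1/sqrt(n) decay in the sample size n, the spectral norm of the measurement observable, and the bound and Lipschitz constant of the risk function. This is an illustrative assumption on our part, not the paper's actual expression; all names and constants are hypothetical.

```python
import numpy as np

def schematic_generalization_bound(n_samples, lipschitz, risk_bound,
                                   obs_spectral_norm, delta=0.01):
    """Schematic bound on |empirical risk - true risk|.

    A generic template, NOT the paper's bound: a complexity term that
    scales with the Lipschitz constant and observable norm, plus a
    Hoeffding-style confidence term for a risk bounded by risk_bound,
    both decaying as 1/sqrt(n_samples).
    """
    complexity = 2.0 * lipschitz * obs_spectral_norm / np.sqrt(n_samples)
    confidence = risk_bound * np.sqrt(np.log(1.0 / delta) / (2.0 * n_samples))
    return complexity + confidence

# Gate count, qubit number, encoding, batch size, epochs, learning rate,
# and optimizer are absent by design: per the abstract, they do not
# significantly affect the generalization error.
print(schematic_generalization_bound(n_samples=10_000, lipschitz=1.0,
                                     risk_bound=1.0, obs_spectral_norm=1.0))
```

Quadrupling the sample size halves both terms, which is the sense in which sample size dominates the bound.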
Related papers
- Compute-Optimal LLMs Provably Generalize Better With Scale [102.29926217670926]
We develop generalization bounds on the pretraining objective of large language models (LLMs) in the compute-optimal regime. We introduce a novel, fully empirical Freedman-type martingale concentration inequality that tightens existing bounds by accounting for the variance of the loss function (the classical Freedman inequality is sketched after this entry). We produce a scaling law for the generalization gap, with bounds that become predictably stronger with scale.
arXiv Detail & Related papers (2025-04-21T16:26:56Z)
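For context, a "Freedman-type" bound refers to Freedman's classical martingale concentration inequality, stated below in its textbook form; the paper's contribution is a fully empirical variant that additionally exploits the variance of the loss. The statement here is the standard inequality, not the paper's refinement.

```latex
% Freedman's inequality (1975). Let (X_i) be a martingale difference
% sequence with X_i <= R almost surely, and let
% V_n = \sum_{i=1}^{n} E[X_i^2 | F_{i-1}] be the predictable variation.
\[
  \Pr\!\left[\,\sum_{i=1}^{n} X_i \ge a \ \text{and}\ V_n \le \sigma^2\right]
  \;\le\;
  \exp\!\left(-\frac{a^2}{2\,(\sigma^2 + R a / 3)}\right).
\]
```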
- Quantum Reservoir Computing and Risk Bounds [1.2289361708127877]
We give specific, parameter-dependent bounds for two particular quantum reservoir classes. Applying our results to classes with readout functions, we find that the risk bounds converge in the number of training samples.
arXiv Detail & Related papers (2025-01-15T08:06:03Z)
- Few measurement shots challenge generalization in learning to classify entanglement [0.0]
This paper focuses on hybrid quantum learning techniques where classical machine-learning methods are paired with quantum algorithms.
We show that, in some settings, the uncertainty coming from a few measurement shots can be the dominant source of errors.
We introduce an estimator based on classical shadows that performs better in the big-data, few-copy regime (a minimal single-qubit shadow sketch follows this entry).
arXiv Detail & Related papers (2024-11-10T21:20:21Z)
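For reference, the standard classical-shadow protocol (Huang, Kueng, and Preskill) measures in a random Pauli basis and inverts the induced depolarizing channel. The single-qubit sketch below illustrates that standard protocol only; it is not the modified estimator introduced in the paper, and all names are ours.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}
# Columns of each entry are the measurement-basis eigenvectors.
EIGVECS = {name: np.linalg.eigh(P)[1] for name, P in PAULIS.items()}

def shadow_estimate(rho, observable, n_shots, rng):
    """Estimate Tr(observable @ rho) from random single-qubit Pauli
    measurements -- the textbook one-qubit classical-shadow estimator."""
    estimates = []
    for _ in range(n_shots):
        vecs = EIGVECS[rng.choice(list(PAULIS))]
        probs = np.array([np.real(v.conj() @ rho @ v) for v in vecs.T])
        probs = np.clip(probs, 0.0, None)
        outcome = rng.choice(2, p=probs / probs.sum())
        v = vecs[:, outcome]
        # Invert the depolarizing channel: snapshot = 3|v><v| - I.
        snapshot = 3.0 * np.outer(v, v.conj()) - I2
        estimates.append(np.real(np.trace(observable @ snapshot)))
    return float(np.mean(estimates))

rng = np.random.default_rng(0)
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
print(shadow_estimate(rho, PAULIS["Z"], 20_000, rng))  # approx. 1.0
```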
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [62.46800898243033]
Recent progress in quantum learning theory prompts a question: can linear properties of a large-qubit circuit be efficiently learned from measurement data generated by varying classical inputs? We prove that a sample complexity scaling linearly in $d$ is required to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in $d$. We propose a kernel-based method leveraging classical shadows and truncated trigonometric expansions, enabling a controllable trade-off between prediction accuracy and computational overhead (an illustrative truncation sketch follows this entry).
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
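The truncation idea can be illustrated in one dimension: keep only trigonometric modes up to a frequency cutoff, so a larger cutoff gives a more expressive but more expensive kernel. The sketch below is our own illustration of that generic trade-off with hypothetical coefficients; it is not the paper's construction and does not involve classical shadows.

```python
import numpy as np

def truncated_trig_kernel(x, y, cutoff, decay=0.5):
    """k(x, y) = sum_{m=0}^{cutoff} decay**m * cos(m * (x - y)).

    Any positive, summable coefficient sequence makes this a valid
    (positive-definite) shift-invariant kernel by Bochner's theorem;
    decay=0.5 is an arbitrary illustrative choice.
    """
    m = np.arange(cutoff + 1)
    return float(np.sum(decay**m * np.cos(m * (x - y))))

# Raising the cutoff adds high-frequency structure at extra cost.
for cutoff in (1, 4, 16):
    print(cutoff, truncated_trig_kernel(0.3, 1.1, cutoff))
```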
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning in the landscape of quantum data.
We find that both the training efficiency and the generalization capability of quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Quantum Kernel Machine Learning With Continuous Variables [0.0]
We represent quantum kernels as closed-form functions for continuous-variable quantum computing platforms. We show that every kernel can be expressed as the product of a Gaussian and an algebraic function of the parameters of the feature map (a concrete coherent-state instance is sketched after this entry). We prove that kernels defined by feature maps of infinite stellar rank, such as GKP-state encodings, can be approximated arbitrarily well by kernels defined by feature maps of finite stellar rank.
arXiv Detail & Related papers (2024-01-11T03:49:40Z)
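A concrete instance of the "Gaussian times algebraic function" form is the simplest continuous-variable feature map, a coherent-state encoding, for which the algebraic factor is trivially 1. The overlap formula below is standard quantum optics, quoted here as an illustration rather than taken from the paper.

```latex
% Coherent-state overlap and the resulting fidelity kernel: a pure
% Gaussian in alpha - beta (stellar rank 0, algebraic factor 1).
\[
  \langle \alpha \mid \beta \rangle
    = \exp\!\left(-\tfrac{1}{2}|\alpha|^{2} - \tfrac{1}{2}|\beta|^{2}
                  + \bar{\alpha}\,\beta\right),
  \qquad
  k(\alpha, \beta) = \bigl|\langle \alpha \mid \beta \rangle\bigr|^{2}
    = e^{-|\alpha - \beta|^{2}}.
\]
```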
- Understanding quantum machine learning also requires rethinking generalization [0.3683202928838613]
We show that traditional approaches to understanding generalization fail to explain the behavior of quantum models.
Experiments reveal that state-of-the-art quantum neural networks accurately fit random states and random labeling of training data.
arXiv Detail & Related papers (2023-06-23T12:04:13Z)
- The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning.
We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating those aspects of deep machine learning.
arXiv Detail & Related papers (2022-12-22T16:06:24Z)
- Numerical evidence against advantage with quantum fidelity kernels on classical data [12.621805903645711]
We show that quantum kernels suffer from exponential "flattening" of the spectrum as the number of qubits grows.
We provide extensive numerical evidence for this phenomenon utilizing multiple previously studied quantum feature maps and both synthetic and real data.
Our results show that unless novel techniques are developed to control the inductive bias of quantum kernels, they are unlikely to provide a quantum advantage on classical data.
arXiv Detail & Related papers (2022-11-29T19:23:11Z)
- Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z)
- Bandwidth Enables Generalization in Quantum Kernel Models [16.940180366663903]
Recent results show that generalization of quantum models is hindered by the exponential size of the quantum feature space.
We show that tuning the bandwidth can take a model from provably being unable to generalize to any target function to generalizing well for well-aligned targets (a minimal bandwidth-scaled kernel is sketched after this entry).
arXiv Detail & Related papers (2022-06-14T08:41:08Z)
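The role of bandwidth is easiest to see for a product angle-encoding, where the fidelity kernel has a simple closed form with the bandwidth c scaling every input. The closed form below is standard for single-qubit RY encodings; the surrounding experiment is our illustration under that assumption, not the paper's setup.

```python
import numpy as np

def bandwidth_fidelity_kernel(x, y, c):
    """Fidelity kernel of a product RY angle-encoding with bandwidth c:
    |<psi(c*x)|psi(c*y)>|^2 = prod_j cos^2(c * (x_j - y_j) / 2)."""
    x, y = np.asarray(x), np.asarray(y)
    return float(np.prod(np.cos(c * (x - y) / 2.0) ** 2))

rng = np.random.default_rng(0)
x, y = rng.uniform(-1.0, 1.0, size=(2, 20))  # 20 features -> 20 qubits
for c in (0.1, 1.0, 10.0):
    print(c, bandwidth_fidelity_kernel(x, y, c))
# Small c keeps kernel values near 1 (a strong inductive bias that can
# generalize for well-aligned targets); large c drives them toward 0,
# the exponentially concentrated regime with no generalization.
```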
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)