Near-optimal Prediction Error Estimation for Quantum Machine Learning Models
- URL: http://arxiv.org/abs/2510.18208v1
- Date: Tue, 21 Oct 2025 01:22:05 GMT
- Title: Near-optimal Prediction Error Estimation for Quantum Machine Learning Models
- Authors: Qiuhao Chen, Yuling Jiao, Yinan Li, Xiliang Lu, Jerry Zhijian Yang,
- Abstract summary: Quantum machine learning (QML) models can be significantly affected by the limited access to the underlying data set. Previous studies have focused on proving generalization error bounds for any QML models trained on a finite training set. We focus on the optimal QML models obtained by training them on a finite training set and establish a tight prediction error bound in terms of the number of trainable gates and the size of training sets.
- Score: 20.38743409927907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding the theoretical capabilities and limitations of quantum machine learning (QML) models in solving machine learning tasks is crucial to advancing both quantum software and hardware developments. As in the classical setting, the performance of QML models can be significantly affected by limited access to the underlying data set. Previous studies have focused on proving generalization error bounds for any QML model trained on a finite training set. We focus on the optimal QML models obtained by training on a finite training set and establish a tight prediction error bound in terms of the number of trainable gates and the size of the training set. To achieve this, we derive covering number upper bounds and packing number lower bounds for data re-uploading QML models and linear QML models, respectively, which may be of independent interest. We support our theoretical findings by numerically simulating the QML strategies for function approximation and quantum phase recognition.
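To give a concrete picture of the model class the paper analyzes, here is a minimal single-qubit data re-uploading model in plain NumPy; the layer structure, target function, and finite-difference training loop are illustrative assumptions for this sketch, not the construction analyzed in the paper.

```python
import numpy as np

# Single-qubit rotation gates.
def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def model(x, params):
    """Data re-uploading: every layer re-encodes the input x."""
    psi = np.array([1, 0], dtype=complex)      # start in |0>
    for w, b, v in params:                     # (w*x + b) encodes the data
        psi = ry(v) @ rz(w * x + b) @ psi      # v is a trainable angle
    z = np.diag([1.0, -1.0])
    return np.real(np.conj(psi) @ z @ psi)     # <Z> in [-1, 1]

rng = np.random.default_rng(0)
params = rng.normal(size=(4, 3))               # 4 layers of trainable gates
xs = np.linspace(-1, 1, 32)                    # finite training set
ys = np.sin(np.pi * xs)                        # assumed target function

def loss(p):
    return np.mean([(model(x, p) - y) ** 2 for x, y in zip(xs, ys)])

eps, lr = 1e-4, 0.2                            # crude finite differences
for _ in range(150):
    g = np.zeros_like(params)
    for i in np.ndindex(params.shape):
        d = np.zeros_like(params)
        d[i] = eps
        g[i] = (loss(params + d) - loss(params - d)) / (2 * eps)
    params -= lr * g
print("training MSE:", loss(params))
```

The prediction error bound in the paper is stated in terms of exactly the two quantities this toy exposes: the number of trainable gates (here, the layer count) and the size of the training set (here, `len(xs)`).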
Related papers
- Quantum-Assisted Machine Learning Models for Enhanced Weather Prediction [0.30903157777186735]
Quantum Machine Learning (QML) is presented as a revolutionary approach to weather forecasting, using quantum computing to improve predictive modeling capabilities. In this study, we apply QML models, including Quantum Gated Recurrent Units (QGRUs), Quantum Neural Networks (QNNs), Quantum Long Short-Term Memory (QLSTM), Variational Quantum Circuits (VQCs), and Quantum Support Vector Machines (QSVMs). Results demonstrate that QML models can achieve reasonable accuracy in both prediction and classification tasks, particularly in binary classification. This research provides insights into the feasibility of QML for weather prediction.
arXiv Detail & Related papers (2025-03-30T12:03:27Z)
- RoSTE: An Efficient Quantization-Aware Supervised Fine-Tuning Approach for Large Language Models [53.571195477043496]
We propose an algorithm named Rotated Straight-Through-Estimator (RoSTE). RoSTE combines quantization-aware supervised fine-tuning (QA-SFT) with an adaptive rotation strategy to reduce activation outliers. Our findings reveal that the prediction error is directly proportional to the quantization error of the converged weights, which can be effectively managed through an optimized rotation configuration.
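As a toy illustration of why rotation helps quantization, the NumPy sketch below quantizes an outlier-heavy weight vector before and after a random orthogonal rotation; the uniform quantizer and the random rotation are illustrative assumptions, not RoSTE's adaptive rotation strategy.

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric quantizer; step size set by the max magnitude."""
    step = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=512)
w[:4] *= 50.0                        # inject activation-style outliers

# Random orthogonal rotation (QR decomposition of a Gaussian matrix).
q, _ = np.linalg.qr(rng.normal(size=(512, 512)))

err_plain = np.linalg.norm(quantize(w) - w)
err_rot = np.linalg.norm(q.T @ quantize(q @ w) - w)  # rotate, quantize, undo
print(err_plain, err_rot)            # rotation spreads outliers, shrinking error
```

The outliers inflate the quantizer's step size for every coordinate; rotating first spreads their energy across the vector, so the quantization grid can be much finer.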
arXiv Detail & Related papers (2025-02-13T06:44:33Z)
- Learning to Measure Quantum Neural Networks [10.617463958884528]
We introduce a novel approach that makes the observable of the quantum system, specifically the Hermitian matrix, learnable. Our method features an end-to-end differentiable learning framework, where the parameterized observable is trained alongside the ordinary quantum circuit parameters. Using numerical simulations, we show that the proposed method can identify observables for variational quantum circuits that lead to improved outcomes.
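A minimal sketch of the idea: parameterize a Hermitian (here, real symmetric) observable and update it jointly with a circuit parameter; the one-qubit ansatz, scalar target, and finite-difference updates are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def hermitian(m):
    """Symmetrizing any real square m yields a valid (real) Hermitian observable."""
    return (m + m.T) / 2

def state(theta):
    # Toy one-parameter "circuit": |psi> = Ry(theta)|0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def output(theta, m):
    psi = state(theta)
    return psi @ hermitian(m) @ psi

# Jointly train the circuit parameter and the observable toward a target value.
rng = np.random.default_rng(1)
theta, m, target = 0.3, rng.normal(size=(2, 2)), 0.8
loss = lambda t, mm: (output(t, mm) - target) ** 2

eps, lr = 1e-4, 0.5
for _ in range(300):
    gt = (loss(theta + eps, m) - loss(theta - eps, m)) / (2 * eps)
    gm = np.zeros_like(m)
    for i in np.ndindex(m.shape):
        d = np.zeros_like(m)
        d[i] = eps
        gm[i] = (loss(theta, m + d) - loss(theta, m - d)) / (2 * eps)
    theta, m = theta - lr * gt, m - lr * gm
print("final output:", output(theta, m))   # converges toward the target
```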
arXiv Detail & Related papers (2025-01-10T02:28:19Z)
- Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQCs).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
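A rough sketch of this hybrid pattern: a classical network (a random stand-in here, where the paper would use a pre-trained one) compresses the input into a few angles that drive a small quantum circuit; every component below is an illustrative assumption.

```python
import numpy as np

def mlp(x, w1, w2):
    """Stand-in 'pre-trained' network mapping the input to two circuit angles."""
    return np.tanh(np.tanh(x @ w1) @ w2)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def vqc_output(angles, theta):
    psi = np.array([1, 0], dtype=complex)
    for a in angles:
        psi = ry(a) @ psi                  # network-supplied encoding rotations
    psi = ry(theta) @ psi                  # trainable quantum parameter
    return np.real(np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2)  # <Z>

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 2))
x = rng.normal(size=8)
print(vqc_output(mlp(x, w1, w2), theta=0.4))
```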
arXiv Detail & Related papers (2024-11-13T12:03:39Z)
- Computable Model-Independent Bounds for Adversarial Quantum Machine Learning [4.857505043608425]
We introduce a first-of-its-kind approximate lower bound on the adversarial error when evaluating model resilience against quantum-based adversarial attacks.
In the best case, the experimental error is only 10% above the estimated bound, offering evidence of the inherent robustness of quantum models.
arXiv Detail & Related papers (2024-11-11T10:56:31Z)
- LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate the memory and computation demands of large language models by compressing and accelerating them.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z)
- Transition Role of Entangled Data in Quantum Machine Learning [51.6526011493678]
Entanglement serves as the resource to empower quantum computing.
Recent progress has highlighted its positive impact on learning quantum dynamics.
We establish a quantum no-free-lunch (NFL) theorem for learning quantum dynamics using entangled data.
arXiv Detail & Related papers (2023-06-06T08:06:43Z)
- PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models [52.09865918265002]
We propose a novel "quantize before fine-tuning" framework, PreQuant.
PreQuant is compatible with various quantization strategies, with outlier-aware fine-tuning incorporated to correct the induced quantization error.
We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5.
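To make the quantize-before-fine-tuning idea concrete, here is a toy NumPy sketch: quantize all weights first, then correct only the largest-magnitude (outlier) entries against calibration outputs; the quantizer, outlier rule, and least-squares correction are illustrative assumptions, not PreQuant's actual procedure.

```python
import numpy as np

def quantize(w, bits=4):
    step = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / step) * step

rng = np.random.default_rng(0)
w_fp = rng.normal(size=256)                 # "pre-trained" weights
x = rng.normal(size=(64, 256))              # calibration inputs
y_ref = x @ w_fp                            # full-precision outputs

w_q = quantize(w_fp)                        # step 1: quantize first
outliers = np.argsort(-np.abs(w_fp))[:16]   # step 2: pick outlier coordinates

# Step 3: "fine-tune" only the outlier entries via least squares on the residual.
residual = y_ref - x @ w_q
w_q[outliers] += np.linalg.lstsq(x[:, outliers], residual, rcond=None)[0]
print("calibration MSE:", np.mean((x @ w_q - y_ref) ** 2))
```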
arXiv Detail & Related papers (2023-05-30T08:41:33Z)
- Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits [70.97518416003358]
Variational quantum circuits (VQCs) hold promise for quantum machine learning on noisy intermediate-scale quantum (NISQ) devices.
While tensor-train networks (TTNs) can enhance VQC representation and generalization, the resulting hybrid model, TTN-VQC, faces optimization challenges due to the Polyak-Lojasiewicz (PL) condition.
To mitigate this challenge, we introduce Pre+TTN-VQC, a pre-trained TTN model combined with a VQC.
arXiv Detail & Related papers (2023-05-18T03:08:18Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
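For reference, a single-qubit data re-uploading circuit can be assembled in Qiskit as below; the layer structure and probability readout are illustrative assumptions rather than the paper's exact formulations.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def reupload_circuit(x, layers):
    """One qubit; each layer re-encodes the scalar input x."""
    qc = QuantumCircuit(1)
    for w, b, v in layers:
        qc.ry(w * x + b, 0)   # data-dependent rotation
        qc.rz(v, 0)           # trainable rotation
    return qc

layers = np.random.default_rng(0).normal(size=(3, 3))
psi = Statevector.from_instruction(reupload_circuit(0.5, layers))
print("P(|0>):", abs(psi.data[0]) ** 2)   # model output as a probability
```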
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Subtleties in the trainability of quantum machine learning models [0.0]
We show that gradient scaling results for Variational Quantum Algorithms can be applied to study the gradient scaling of Quantum Machine Learning models.
Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML.
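The following NumPy sketch estimates how the variance of a parameter-shift gradient shrinks as qubits are added to a random layered circuit, which is the signature of a barren plateau; the ansatz, depth, and observable are illustrative assumptions.

```python
import numpy as np

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def apply_1q(psi, gate, q, n):
    """Apply a single-qubit gate on qubit q of an n-qubit statevector."""
    psi = np.tensordot(gate, psi.reshape([2] * n), axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def apply_cz(psi, q1, q2, n):
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1                      # phase flip on |..1..1..>
    return psi.reshape(-1)

def expectation_z0(thetas, n, depth):
    """<Z> on qubit 0 after `depth` layers of random RY plus CZ entanglers."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    k = 0
    for _ in range(depth):
        for q in range(n):
            psi = apply_1q(psi, ry(thetas[k]), q, n)
            k += 1
        for q in range(n - 1):
            psi = apply_cz(psi, q, q + 1, n)
    p0 = (np.abs(psi.reshape([2] * n)) ** 2).take(0, axis=0).sum()
    return 2 * p0 - 1

rng = np.random.default_rng(0)
depth = 8
for n in (2, 4, 6):
    grads = []
    for _ in range(100):                       # sample random parameter points
        th = rng.uniform(0, 2 * np.pi, size=n * depth)
        plus, minus = th.copy(), th.copy()
        plus[0] += np.pi / 2                   # parameter-shift rule for RY
        minus[0] -= np.pi / 2
        grads.append(0.5 * (expectation_z0(plus, n, depth)
                            - expectation_z0(minus, n, depth)))
    print(n, "qubits -> gradient variance:", np.var(grads))
```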
arXiv Detail & Related papers (2021-10-27T20:28:53Z)
- Towards Efficient Post-training Quantization of Pre-trained Language Models [85.68317334241287]
We study post-training quantization (PTQ) of PLMs and propose module-wise reconstruction error minimization (MREM), an efficient solution to mitigate these issues.
Experiments on GLUE and SQuAD benchmarks show that our proposed PTQ solution not only performs close to QAT, but also enjoys significant reductions in training time, memory overhead, and data consumption.
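A toy sketch of module-wise reconstruction error minimization: quantize each linear module, then fit a per-channel scale that minimizes the output error against the full-precision module on a calibration batch; the quantizer and closed-form scale fit are illustrative assumptions, not MREM's actual optimization.

```python
import numpy as np

def quantize(w, bits=4):
    step = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / step) * step

rng = np.random.default_rng(0)
modules = [rng.normal(size=(32, 32)) for _ in range(3)]  # toy linear "modules"
x_fp = rng.normal(size=(128, 32))                        # calibration batch

for w in modules:
    y_fp = x_fp @ w                       # full-precision module output
    y_q = x_fp @ quantize(w)              # quantized module output
    # Per-output-channel scale minimizing ||s * y_q - y_fp||^2 (closed form).
    s = np.sum(y_q * y_fp, axis=0) / np.sum(y_q * y_q, axis=0)
    print("module reconstruction MSE:", np.mean((s * y_q - y_fp) ** 2))
    x_fp = y_fp                           # next module sees FP activations
```

Fitting each module locally against its full-precision counterpart is what keeps this kind of PTQ cheap compared to end-to-end quantization-aware training.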
arXiv Detail & Related papers (2021-09-30T12:50:06Z)
- Structural risk minimization for quantum linear classifiers [0.0]
Quantum machine learning (QML) stands out as a frequently highlighted candidate for quantum computing's near-term "killer application".
We investigate capacity measures of two closely related QML models called explicit and implicit quantum linear classifiers.
We identify that the rank and Frobenius norm of the observables used in the QML model closely control the model's capacity.
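Concretely, an explicit quantum linear classifier labels an input x by the sign of Tr[O rho(x)] for a Hermitian observable O; the single-qubit feature map below is an illustrative assumption.

```python
import numpy as np

def feature_state(x):
    """Toy single-qubit feature map: rho(x) = |psi(x)><psi(x)|."""
    psi = np.array([np.cos(x / 2), np.sin(x / 2)], dtype=complex)
    return np.outer(psi, psi.conj())

def classify(x, obs):
    return np.sign(np.real(np.trace(obs @ feature_state(x))))

pauli_z = np.array([[1, 0], [0, -1]], dtype=complex)   # observable O
print(classify(0.3, pauli_z), classify(2.8, pauli_z))  # +1.0, -1.0
# The capacity measures discussed in the paper for this model:
print("rank:", np.linalg.matrix_rank(pauli_z),
      "frobenius norm:", np.linalg.norm(pauli_z))
```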
arXiv Detail & Related papers (2021-05-12T10:39:55Z)