A Post-Training Approach for Mitigating Overfitting in Quantum
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2309.01829v2
- Date: Sun, 3 Mar 2024 13:44:23 GMT
- Title: A Post-Training Approach for Mitigating Overfitting in Quantum
Convolutional Neural Networks
- Authors: Aakash Ravindra Shinde, Charu Jain, and Amir Kalev
- Abstract summary: We study post-training approaches for mitigating overfitting in quantum convolutional neural networks (QCNNs).
We find that a straightforward adaptation of a classical post-training method, known as neuron dropout, to the quantum setting leads to a substantial decrease in the success probability of the QCNN.
We argue that this effect exposes the crucial role of entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss.
- Score: 0.24578723416255752
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Quantum convolutional neural network (QCNN), an early application for quantum
computers in the NISQ era, has been consistently proven successful as a machine
learning (ML) algorithm for several tasks with significant accuracy. Derived
from its classical counterpart, QCNN is prone to overfitting. Overfitting is a
typical shortcoming of ML models that fit too closely to the training dataset
and consequently perform relatively poorly on unseen data from the same
problem domain. In this work we study post-training approaches for mitigating
overfitting in QCNNs. We find that a straightforward adaptation of a classical
post-training method, known as neuron dropout, to the quantum setting leads to
a significant and undesirable consequence: a substantial decrease in success
probability of the QCNN. We argue that this effect exposes the crucial role of
entanglement in QCNNs and the vulnerability of QCNNs to entanglement loss.
Hence, we propose a parameter adaptation method as an alternative. Our
method is computationally efficient and is found to successfully handle
overfitting in the test cases.
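To make the dropout finding concrete, here is a minimal, hypothetical sketch of what post-training "dropout" can mean for a parameterized quantum circuit: entangling gates are deleted from an already-trained circuit and the readout probability is re-evaluated. The circuit layout, parameters, dropped-gate choice, and PennyLane usage are all illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: remove entangling gates from a "trained" circuit and
# re-evaluate the readout probability. Everything here is illustrative.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

rng = np.random.default_rng(0)
params = rng.uniform(0, 2 * np.pi, size=(2, n_qubits))  # stand-in for trained weights

@qml.qnode(dev)
def qcnn(params, dropped=frozenset()):
    for layer in range(2):
        # "Convolution": single-qubit rotations on every wire.
        for w in range(n_qubits):
            qml.RY(params[layer, w], wires=w)
        # Nearest-neighbour entanglers; dropout = deleting some of them.
        for w in range(n_qubits - 1):
            if (layer, w) not in dropped:
                qml.CNOT(wires=[w, w + 1])
    # "Pooling"/readout: class probability from a single qubit.
    return qml.probs(wires=0)

p_full = float(qcnn(params)[0])
p_drop = float(qcnn(params, dropped=frozenset({(0, 1), (1, 2)}))[0])
print(f"success probability, full circuit : {p_full:.4f}")
print(f"success probability, after dropout: {p_drop:.4f}")
```

Because the deleted CNOTs are exactly the gates that generate entanglement, comparing the two printed probabilities gives a rough feel for the entanglement-loss effect the abstract describes.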
Related papers
- Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model, which seamlessly aligns with the versatility of a traditional FFNN in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z)
- Splitting and Parallelizing of Quantum Convolutional Neural Networks for Learning Translationally Symmetric Data [0.0]
We propose a novel architecture called split-parallelizing QCNN (sp-QCNN).
By splitting the quantum circuit based on translational symmetry, the sp-QCNN can substantially parallelize the conventional QCNN without increasing the number of qubits.
We show that the sp-QCNN can achieve comparable classification accuracy to the conventional QCNN while considerably reducing the measurement resources required.
arXiv Detail & Related papers (2023-06-12T18:00:08Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks [14.766917269393865]
Quantization has emerged as a promising technique to reduce the size of neural networks while retaining accuracy comparable to that of their floating-point counterparts.
We propose a novel and efficient formal verification approach for QNNs.
In particular, we are the first to propose an encoding that reduces the verification problem of QNNs to the solving of integer linear constraints; a toy encoding in this spirit is sketched below.
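To give a flavor of what "reducing verification to integer linear constraints" can look like, the following is a toy big-M ILP encoding of a single quantized ReLU neuron using PuLP. It is not QVIP's actual encoding; the weights, bounds, and query are made-up assumptions.

```python
# Toy big-M ILP encoding of y = ReLU(w.x + b) for integer inputs, in the
# same spirit as (but not identical to) an ILP-based QNN verifier.
# Requires `pip install pulp`.
import pulp

w, b = [2, -3], 1          # integer weights/bias of the quantized neuron
M = 100                    # big-M constant bounding |w.x + b|

prob = pulp.LpProblem("qnn_output_range", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lowBound=-4, upBound=4, cat="Integer")
     for i in range(2)]                                # integer-valued inputs
z = pulp.lpSum(w[i] * x[i] for i in range(2)) + b      # pre-activation
y = pulp.LpVariable("y", lowBound=0, cat="Integer")    # post-ReLU output
a = pulp.LpVariable("a", cat="Binary")                 # ReLU on/off indicator

prob += y                  # objective: maximize the neuron's output
# Exact big-M encoding of y = max(0, z):
prob += y >= z
prob += y <= z + M * (1 - a)
prob += y <= M * a

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max output:", pulp.value(y), "| status:", pulp.LpStatus[prob.status])
```

The binary variable `a` selects the active/inactive branch of the ReLU, so the integer program ranges over exactly the network's reachable outputs; a verifier then asks whether any feasible assignment violates the property of interest.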
arXiv Detail & Related papers (2022-12-10T03:00:29Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs; a generic sketch of the underlying interval arithmetic follows below.
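As a companion to the summary above, here is a generic sketch of interval bound propagation through one quantized affine-plus-ReLU layer. It shows the core interval arithmetic behind IBP-style certification, not the paper's full QA-IBP training procedure; shapes, the 8-bit scheme, and the perturbation budget are illustrative assumptions.

```python
# Generic interval bound propagation (IBP) through one quantized layer.
import numpy as np

def quantize(w, n_bits=8):
    """Uniform symmetric weight quantization to n_bits."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

def ibp_layer(lo, hi, w, b):
    """Propagate the input box [lo, hi] through y = ReLU(W x + b)."""
    c, r = (lo + hi) / 2, (hi - lo) / 2   # center / radius form
    c_out = w @ c + b
    r_out = np.abs(w) @ r                 # radius grows with |W|
    # ReLU is monotone, so it can be applied to both bounds directly.
    return np.maximum(c_out - r_out, 0), np.maximum(c_out + r_out, 0)

rng = np.random.default_rng(0)
w = quantize(rng.normal(size=(3, 4)))     # quantized weight matrix
b = np.zeros(3)
x = rng.normal(size=4)
eps = 0.1                                 # L-infinity perturbation budget
lo, hi = ibp_layer(x - eps, x + eps, w, b)
print("certified per-neuron output bounds:",
      list(zip(lo.round(3), hi.round(3))))
```

If the bounds computed this way already certify robustness for the quantized weights, quantization cannot reopen the vulnerability that L51 describes for post-hoc quantized networks.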
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Scalable Quantum Convolutional Neural Networks [12.261689483681145]
We propose a new version of the quantum convolutional neural network (QCNN) named scalable quantum convolutional neural network (sQCNN).
In addition, using quantum fidelity, we propose an sQCNN training algorithm named reverse fidelity training (RF-Train) that maximizes the performance of the sQCNN.
arXiv Detail & Related papers (2022-09-26T02:07:00Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradient vanishes at a rate exponential in the number of input qubits; a rough numerical probe of this decay is sketched below.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence and better accuracy compared to QNNs with random structures.
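The following is a rough numerical probe of the vanishing-gradient claim, assuming a generic random layered circuit rather than the paper's tree-tensor or step-controlled ansatz; depth, gate choices, and sample count are illustrative, and a shallow circuit like this only hints at the trend.

```python
# Sample gradients at random parameters for increasing qubit counts and
# watch the variance shrink (barren-plateau-style behaviour).
import numpy as np
import pennylane as qml
from pennylane import numpy as pnp

def grad_variance(n_qubits, n_layers=5, n_samples=20):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(params):
        for layer in range(n_layers):
            for w in range(n_qubits):
                qml.RY(params[layer, w], wires=w)
            for w in range(n_qubits - 1):
                qml.CZ(wires=[w, w + 1])
        return qml.expval(qml.PauliZ(0))

    rng = np.random.default_rng(0)
    grads = []
    for _ in range(n_samples):
        params = pnp.array(rng.uniform(0, 2 * np.pi, (n_layers, n_qubits)),
                           requires_grad=True)
        grads.append(qml.grad(circuit)(params)[0, 0])  # d<Z0>/d(theta_00)
    return np.var(grads)

for n in (2, 4, 6, 8):
    print(f"{n} qubits -> gradient variance {grad_variance(n):.3e}")
```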
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.