Introducing Reduced-Width QNNs, an AI-inspired Ansatz Design Pattern
- URL: http://arxiv.org/abs/2306.05047v3
- Date: Mon, 8 Jan 2024 09:33:41 GMT
- Title: Introducing Reduced-Width QNNs, an AI-inspired Ansatz Design Pattern
- Authors: Jonas Stein, Tobias Rohe, Francesco Nappi, Julian Hager, David Bucher,
Maximilian Zorn, Michael Kölle, Claudia Linnhoff-Popien
- Abstract summary: Variational Quantum Algorithms are one of the most promising candidates to yield the first industrially relevant quantum advantage.
They are often referred to as Quantum Neural Networks (QNNs) when used in settings analogous to classical Artificial Neural Networks (ANNs).
We propose a reduced-width circuit ansatz design, which is motivated by recent results gained in the analysis of dropout regularization in QNNs.
- Score: 3.757262277494307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational Quantum Algorithms are one of the most promising candidates to
yield the first industrially relevant quantum advantage. Being capable of
arbitrary function approximation, they are often referred to as Quantum Neural
Networks (QNNs) when used in settings analogous to classical Artificial Neural
Networks (ANNs). Similar to the early stages of classical machine
learning, known schemes for efficient architectures of these networks are
scarce. Exploring beyond existing design patterns, we propose a reduced-width
circuit ansatz design, which is motivated by recent results gained in the
analysis of dropout regularization in QNNs. More precisely, this design exploits
the insight that the gates of overparameterized QNNs can be pruned substantially
before their expressibility decreases. The results of our case study show that
the proposed design pattern can significantly reduce training time while
maintaining the same result quality as the standard "full-width" design in the
presence of noise.
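As a rough illustration of the design pattern, here is a minimal PennyLane sketch, not the paper's exact construction; the choice of active wires and all function names are illustrative:

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 6
dev = qml.device("default.qubit", wires=n_qubits)

def full_width_layer(params):
    # standard hardware-efficient layer: one rotation per qubit, then a CNOT chain
    for w in range(n_qubits):
        qml.RY(params[w], wires=w)
    for w in range(n_qubits - 1):
        qml.CNOT(wires=[w, w + 1])

def reduced_width_layer(params, active):
    # the same pattern restricted to a pruned subset of qubits
    for i, w in enumerate(active):
        qml.RY(params[i], wires=w)
    for a, b in zip(active, active[1:]):
        qml.CNOT(wires=[a, b])

@qml.qnode(dev)
def circuit(params):
    full_width_layer(params[:n_qubits])
    reduced_width_layer(params[n_qubits:], active=[0, 2, 4])  # illustrative pruning
    return qml.expval(qml.PauliZ(0))

params = np.random.random(n_qubits + 3)
print(circuit(params))
```

The reduced-width layer carries fewer parameters and entanglers per layer, which is where the training-time savings reported above would come from.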
Related papers
- Enhancing Expressivity of Quantum Neural Networks Based on the SWAP test [0.0]
We present a quantum neural network (QNN) built exclusively from SWAP test circuits. We discuss its mathematical equivalence to a classical two-layer feedforward network with quadratic activation functions under amplitude encoding. We introduce a circuit modification using generalized SWAP test circuits that effectively implements classical neural networks with product layers.
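For context, the SWAP test is a short standard circuit whose ancilla statistics are quadratic in the state overlap, which is where the quadratic activations arise. A minimal PennyLane sketch (the RY preparations are illustrative stand-ins for amplitude-encoded states):

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def swap_test(theta, phi):
    # illustrative single-qubit states on wires 1 and 2
    qml.RY(theta, wires=1)
    qml.RY(phi, wires=2)
    # SWAP test: Hadamard, controlled-SWAP, Hadamard on the ancilla
    qml.Hadamard(wires=0)
    qml.CSWAP(wires=[0, 1, 2])
    qml.Hadamard(wires=0)
    return qml.probs(wires=0)

# P(ancilla = 0) = (1 + |<psi|phi>|^2) / 2: quadratic in the overlap
print(swap_test(0.3, 1.1))
```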
arXiv Detail & Related papers (2025-06-20T12:05:31Z)
- CTRQNets & LQNets: Continuous Time Recurrent and Liquid Quantum Neural Networks [76.53016529061821]
We develop the Liquid Quantum Neural Network (LQNet) and the Continuous Time Recurrent Quantum Neural Network (CTRQNet).
LQNet and CTRQNet achieve accuracy increases of up to 40% on CIFAR-10 binary classification.
arXiv Detail & Related papers (2024-08-28T00:56:03Z)
- Exploiting the equivalence between quantum neural networks and perceptrons [2.598133279943607]
Quantum machine learning models based on parametrized quantum circuits are considered to be among the most promising candidates for applications on quantum devices.
We explore the expressivity and inductive bias of QNNs by exploiting an exact mapping from QNNs with inputs $x$ to classical perceptrons acting on $x \otimes x$.
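A minimal NumPy sketch of the shape of this mapping (all values illustrative): the perceptron acts linearly on $x \otimes x$, i.e. on all pairwise products of the input components, and is therefore quadratic in $x$.

```python
import numpy as np

x = np.array([0.2, -0.7, 0.5])       # classical input
features = np.kron(x, x)             # x (x) x: all pairwise products x_i * x_j
w = np.random.randn(features.size)   # perceptron weights over the product features
output = w @ features                # linear in x (x) x, hence quadratic in x
print(output)
```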
arXiv Detail & Related papers (2024-07-05T09:19:58Z)
- Studying the Impact of Quantum-Specific Hyperparameters on Hybrid Quantum-Classical Neural Networks [4.951980887762045]
Hybrid quantum-classical neural networks (HQNNs) represent a promising solution that combines the strengths of classical machine learning with quantum computing capabilities.
In this paper, we investigate the impact of these variations on different HQNN models for image classification tasks, implemented on the PennyLane framework.
We aim to uncover intuitive and counter-intuitive learning patterns of HQNN models under granular levels of controlled quantum perturbations, forming a sound basis for correlating them with accuracy and training time.
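As a point of reference, here is a minimal PennyLane sketch of a quantum head whose quantum-specific hyperparameters (qubit count, layer count) a study like this would vary; the templates chosen here are illustrative, not the paper's models:

```python
import pennylane as qml
from pennylane import numpy as np

# quantum-specific hyperparameters of the kind such a study varies
n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_head(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.random(shape)
print(quantum_head(np.random.random(n_qubits), weights))
```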
arXiv Detail & Related papers (2024-02-16T11:44:25Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that quantum circuit Born machines (QCBMs) are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- Quantum Recurrent Neural Networks for Sequential Learning [11.133759363113867]
We propose a new kind of quantum recurrent neural network (QRNN) to find quantum-advantageous applications in the near term.
Our QRNN is built by stacking quantum recurrent blocks (QRBs) in a staggered way, which greatly reduces the algorithm's requirements on the coherence time of quantum devices.
Numerical experiments show that our QRNN achieves much better prediction (classification) accuracy than classical RNNs and state-of-the-art QNN models for sequential learning.
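The QRB construction itself is not specified in this summary; the sketch below shows only the generic shape of a quantum recurrent computation in PennyLane, reusing one shared parameterized block per time step (all names and choices illustrative):

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qrnn(sequence, weights):
    # one shared parameterized block, applied once per time step
    for x_t in sequence:
        qml.RY(x_t, wires=0)                 # encode the current input
        for w in range(n_qubits):
            qml.RY(weights[w], wires=w)      # shared trainable rotations
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])       # mix input into the "hidden" qubits
    return qml.expval(qml.PauliZ(n_qubits - 1))

print(qrnn(np.array([0.1, 0.4, -0.2]), np.random.random(n_qubits)))
```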
arXiv Detail & Related papers (2023-02-07T04:04:39Z)
- Exponentially Many Local Minima in Quantum Neural Networks [9.442139459221785]
Quantum Neural Networks (QNNs) are important quantum applications because they hold promises similar to those of classical neural networks.
We conduct a quantitative investigation on the landscape of loss functions of QNNs and identify a class of simple yet extremely hard QNN instances for training.
We empirically confirm that our constructions can indeed be hard instances in practice with typical gradient-based optimizers.
arXiv Detail & Related papers (2021-10-06T03:23:44Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that current quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Branching Quantum Convolutional Neural Networks [0.0]
Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets.
We present a generalization of QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility.
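The branching mechanism relies on mid-circuit measurements whose outcomes select subsequent operations. A heavily hedged PennyLane sketch of that mechanism (not the bQCNN architecture itself):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=4)

@qml.qnode(dev)
def branching_block(params):
    # "convolution": shared-parameter entanglers on neighbouring qubits
    for w in range(3):
        qml.CRY(params[0], wires=[w, w + 1])
    # "branching pooling": a mid-circuit measurement outcome selects
    # which unitary acts on the remaining qubits
    m = qml.measure(3)
    qml.cond(m, qml.RY)(params[1], wires=0)
    qml.cond(~m, qml.RX)(params[2], wires=0)
    return qml.expval(qml.PauliZ(0))

print(branching_block(np.random.random(3)))
```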
arXiv Detail & Related papers (2020-12-28T19:00:03Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared with QNNs with random structures.
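A minimal PennyLane sketch of a tree-tensor-style ansatz on four qubits, combining two-qubit blocks in a binary-tree pattern (block structure and parameter counts are illustrative):

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def ttn_classifier(x, params):
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # first tree layer: two-qubit blocks on pairs (0,1) and (2,3)
    for i, (a, b) in enumerate([(0, 1), (2, 3)]):
        qml.RY(params[i, 0], wires=a)
        qml.RY(params[i, 1], wires=b)
        qml.CNOT(wires=[a, b])
    # second tree layer: combine the pair outputs at the root
    qml.RY(params[2, 0], wires=1)
    qml.RY(params[2, 1], wires=3)
    qml.CNOT(wires=[1, 3])
    return qml.expval(qml.PauliZ(3))   # binary label read from the root qubit

params = np.random.random((3, 2))
print(ttn_classifier(np.random.random(n_qubits), params))
```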
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.