Trainability of Dissipative Perceptron-Based Quantum Neural Networks
- URL: http://arxiv.org/abs/2005.12458v2
- Date: Fri, 10 Jun 2022 09:33:04 GMT
- Title: Trainability of Dissipative Perceptron-Based Quantum Neural Networks
- Authors: Kunal Sharma, M. Cerezo, Lukasz Cincio, Patrick J. Coles
- Abstract summary: We analyze the gradient scaling (and hence the trainability) for a recently proposed architecture that we call dissipative QNNs (DQNNs).
We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits.
- Score: 0.8258451067861933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several architectures have been proposed for quantum neural networks (QNNs),
with the goal of efficiently performing machine learning tasks on quantum data.
Rigorous scaling results are urgently needed for specific QNN constructions to
understand which, if any, will be trainable at a large scale. Here, we analyze
the gradient scaling (and hence the trainability) for a recently proposed
architecture that we call dissipative QNNs (DQNNs), where the input qubits of
each layer are discarded at the layer's output. We find that DQNNs can exhibit
barren plateaus, i.e., gradients that vanish exponentially in the number of
qubits. Moreover, we provide quantitative bounds on the scaling of the gradient
for DQNNs under different conditions, such as different cost functions and
circuit depths, and show that trainability is not always guaranteed.
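A minimal numerical sketch of the barren-plateau diagnostic used in this line of work: estimate the variance of one gradient component over random parameter initializations as the qubit count grows, and look for exponential decay. The ansatz below is a generic layered RY/CNOT circuit with a global Z⊗...⊗Z cost, simulated in plain NumPy via the parameter-shift rule; it is an illustrative stand-in under those assumptions, not the paper's DQNN construction or its quantitative bounds.
```python
import numpy as np

def apply_ry(state, theta, wire, n):
    # Apply RY(theta) on `wire` of an n-qubit statevector.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    psi = state.reshape([2] * n)
    a, b = np.take(psi, 0, axis=wire), np.take(psi, 1, axis=wire)
    return np.stack([c * a - s * b, s * a + c * b], axis=wire).reshape(-1)

def apply_cnot(state, control, target, n):
    # On the control=1 slice, flip the target axis of the statevector.
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1
    sub_axis = target - 1 if target > control else target
    sub = psi[tuple(idx)].copy()
    psi[tuple(idx)] = np.flip(sub, axis=sub_axis)
    return psi.reshape(-1)

def cost(params, n):
    # Layered hardware-efficient ansatz on |0...0>, followed by the
    # "global" observable Z x ... x Z whose expectation is the cost.
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in params:                      # params has shape (layers, n)
        for w in range(n):
            state = apply_ry(state, layer[w], w, n)
        for w in range(n - 1):
            state = apply_cnot(state, w, w + 1, n)
    signs = np.array([(-1) ** bin(i).count("1") for i in range(2 ** n)])
    return float(np.dot(np.abs(state) ** 2, signs))

def grad_first_param(params, n):
    # Parameter-shift rule for RY: dC/dtheta = [C(+pi/2) - C(-pi/2)] / 2.
    shift = np.zeros_like(params)
    shift[0, 0] = np.pi / 2
    return (cost(params + shift, n) - cost(params - shift, n)) / 2

rng = np.random.default_rng(0)
for n in range(2, 11, 2):
    grads = [grad_first_param(rng.uniform(0, 2 * np.pi, (4, n)), n)
             for _ in range(200)]
    print(f"n={n:2d}  Var[dC/dtheta] ~ {np.var(grads):.3e}")
```
Under this setup the printed variance should shrink roughly exponentially in n, which is the flat-landscape behaviour the abstract refers to.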
Related papers
- Projected Stochastic Gradient Descent with Quantum Annealed Binary Gradients [51.82488018573326]
We present QP-SBGD, a novel layer-wise optimiser tailored towards training neural networks with binary weights.
Binary neural networks (BNNs) reduce the computational requirements and energy consumption of deep learning models with minimal loss in accuracy.
Our algorithm is implemented layer-wise, making it suitable for training larger networks on resource-limited quantum hardware. A classical sketch of the binary-weight idea follows below.
arXiv Detail & Related papers (2023-10-23T17:32:38Z)
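Per the title, QP-SBGD casts each layer's binary update as a problem for a quantum annealer. The sketch below is only a classical, BinaryConnect-style analogue of training with binary weights: keep real-valued latent weights, use their sign in the forward pass, and update the latents by gradient descent. The toy regression task and all names are illustrative assumptions, not the paper's algorithm.
```python
import numpy as np

def sign_project(w):
    return np.where(w >= 0, 1.0, -1.0)   # project latents onto {-1, +1}

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))             # toy inputs (illustrative)
w_true = rng.choice([-1.0, 1.0], size=8)  # hidden binary weights
y = X @ w_true

w_latent = 0.1 * rng.normal(size=8)       # real-valued latent weights
for step in range(300):
    w_bin = sign_project(w_latent)        # forward pass uses binary weights
    grad = 2.0 * X.T @ (X @ w_bin - y) / len(X)
    w_latent = np.clip(w_latent - 0.01 * grad, -1.0, 1.0)

print("recovered true binary weights:",
      bool(np.all(sign_project(w_latent) == w_true)))
```
With the fixed seed this should recover w_true; the point is only the shape of the loop, i.e. forward with binary weights, then update the real-valued latents layer by layer.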
- Scaling Limits of Quantum Repeater Networks [62.75241407271626]
Quantum networks (QNs) are a promising platform for secure communications, enhanced sensing, and efficient distributed quantum computing.
Due to the fragile nature of quantum states, these networks face significant challenges in terms of scalability.
In this paper, the scaling limits of quantum repeater networks (QRNs) are analyzed.
arXiv Detail & Related papers (2023-05-15T14:57:01Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn local message passing among nodes through a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
arXiv Detail & Related papers (2021-12-30T10:27:08Z)
- Exponentially Many Local Minima in Quantum Neural Networks [9.442139459221785]
Quantum Neural Networks (QNNs) are important quantum applications because they hold promises similar to those of classical neural networks.
We conduct a quantitative investigation on the landscape of loss functions of QNNs and identify a class of simple yet extremely hard QNN instances for training.
We empirically confirm that our constructions can indeed be hard instances in practice with typical gradient-based optimizers.
arXiv Detail & Related papers (2021-10-06T03:23:44Z)
- Branching Quantum Convolutional Neural Networks [0.0]
Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets.
We present a generalization of QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility.
arXiv Detail & Related papers (2020-12-28T19:00:03Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradient vanishes at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification; a generic sketch of the tree-tensor wiring pattern follows below. Simulations show faster convergence and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
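For intuition about why tree-shaped structures help, here is a generic sketch of a tree-tensor-network (TTN) circuit's wiring: at each level, neighbouring active qubits are coupled by a two-qubit block and one qubit per pair is dropped from further processing, so the depth grows only logarithmically in the qubit count. This is the standard TTN layout, assumed here for illustration; it is not claimed to be the paper's exact circuit.
```python
def ttn_layers(n_qubits):
    # Return, per tree level, the qubit pairs coupled by a two-qubit block.
    active = list(range(n_qubits))
    layers = []
    while len(active) > 1:
        pairs = [(active[i], active[i + 1])
                 for i in range(0, len(active) - 1, 2)]
        layers.append(pairs)
        carry = [active[-1]] if len(active) % 2 else []
        active = [a for a, _ in pairs] + carry   # keep one qubit per pair
    return layers

for depth, pairs in enumerate(ttn_layers(8)):
    print(f"level {depth}: two-qubit blocks on qubit pairs {pairs}")
```
For 8 qubits this prints three levels: (0,1),(2,3),(4,5),(6,7), then (0,2),(4,6), then (0,4), so only O(log n) layers sit between any input and the final measurement.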
- Absence of Barren Plateaus in Quantum Convolutional Neural Networks [0.0]
Quantum Convolutional Neural Networks (QCNNs) have been proposed as a promising architecture for quantum machine learning.
We rigorously analyze the gradient scaling for the parameters in the QCNN architecture.
arXiv Detail & Related papers (2020-11-05T16:46:13Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)