Exponentially Many Local Minima in Quantum Neural Networks
- URL: http://arxiv.org/abs/2110.02479v1
- Date: Wed, 6 Oct 2021 03:23:44 GMT
- Title: Exponentially Many Local Minima in Quantum Neural Networks
- Authors: Xuchen You, Xiaodi Wu
- Abstract summary: Quantum Neural Networks (QNNs) are important quantum applications because they hold similar promise to classical neural networks.
We conduct a quantitative investigation on the landscape of loss functions of QNNs and identify a class of simple yet extremely hard QNN instances for training.
We empirically confirm that our constructions can indeed be hard instances in practice with typical gradient-based optimizers.
- Score: 9.442139459221785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum Neural Networks (QNNs), or the so-called variational quantum
circuits, are important quantum applications both because of their similar
promises as classical neural networks and because of the feasibility of their
implementation on near-term intermediate-size noisy quantum machines (NISQ).
However, the training task of QNNs is challenging and much less understood. We
conduct a quantitative investigation on the landscape of loss functions of QNNs
and identify a class of simple yet extremely hard QNN instances for training.
Specifically, we show for typical under-parameterized QNNs, there exists a
dataset that induces a loss function with the number of spurious local minima
depending exponentially on the number of parameters. Moreover, we show the
optimality of our construction by providing an almost matching upper bound on
such dependence. While local minima in classical neural networks are due to
non-linear activations, in quantum neural networks local minima appear as a
result of the quantum interference phenomenon. Finally, we empirically confirm
that our constructions can indeed be hard instances in practice with typical
gradient-based optimizers, which demonstrates the practical value of our
findings.
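As a rough intuition for how an exponential number of local minima can arise, the sketch below is my own toy illustration, not the paper's construction: it counts the local minima of a separable loss L(theta) = sum_j g(theta_j), where each one-dimensional term g mixes a first and a second harmonic, loosely mimicking the cos(theta) and cos(2*theta) contributions a parameterized gate can produce. With k local minima per parameter, the separable landscape has exactly k**p local minima for p parameters. The function g and all constants here are arbitrary choices for illustration.

```python
# Toy illustration (not the paper's construction): a separable loss
# L(theta) = sum_j g(theta_j) inherits k**p local minima from a 1-D term g
# that has k local minima per period.
import numpy as np

def g(t):
    """Per-parameter term mixing first and second harmonics (arbitrary toy choice)."""
    return np.cos(t) + 0.7 * np.cos(2.0 * t + 0.4)

def count_local_minima_1d(f, n_grid=20000):
    """Count strict local minima of a 2*pi-periodic function on a dense grid."""
    t = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    y = f(t)
    left, right = np.roll(y, 1), np.roll(y, -1)
    return int(np.sum((y < left) & (y < right)))

k = count_local_minima_1d(g)  # here k = 2, with minima of different depths
for p in (2, 4, 8, 16):
    print(f"p = {p:2d} parameters -> {k**p} local minima of the separable loss")
```

The paper's hard instances are genuine, non-separable QNN losses whose local minima arise from quantum interference; the toy above only illustrates how a constant per-parameter count of minima can compound into an exponential total.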
Related papers
- Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model, which seamlessly aligns with the versatility of a traditional FFNN in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z) - Statistical Analysis of Quantum State Learning Process in Quantum Neural Networks [4.852613028421959]
Quantum neural networks (QNNs) have been a promising framework in pursuing near-term quantum advantage.
We develop a no-go theorem for learning an unknown quantum state with QNNs even starting from a high-fidelity initial state.
arXiv Detail & Related papers (2023-09-26T14:54:50Z) - QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even superior to, the classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z) - Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
arXiv Detail & Related papers (2022-05-16T17:58:27Z) - Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase (a toy numerical probe of this vanishing-gradient effect is sketched after this list).
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
arXiv Detail & Related papers (2021-12-30T10:27:08Z) - The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which leads to poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradient vanishes at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z) - Trainability of Dissipative Perceptron-Based Quantum Neural Networks [0.8258451067861933]
We analyze the gradient scaling (and hence the trainability) of a recently proposed architecture that we call dissipative QNNs (DQNNs).
We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits.
arXiv Detail & Related papers (2020-05-26T00:59:09Z) - Entanglement Classification via Neural Network Quantum States [58.720142291102135]
In this paper we combine machine-learning tools and the theory of quantum entanglement to perform entanglement classification for multipartite qubit systems in pure states.
We parameterise quantum states with artificial neural networks in a restricted Boltzmann machine (RBM) architecture, known as Neural Network Quantum States (NNS).
arXiv Detail & Related papers (2019-12-31T07:40:23Z)
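Several of the trainability entries above (Toward Trainability of Deep Quantum Neural Networks, Toward Trainability of Quantum Neural Networks, and Trainability of Dissipative Perceptron-Based Quantum Neural Networks) revolve around vanishing gradients. The sketch below is a small self-contained numerical probe of that effect under assumptions of my own choosing, not the ansatz or cost function of any cited paper: a hardware-efficient-style circuit of RY layers with CZ entanglers, the cost C = <Z> on qubit 0, and the parameter-shift rule for the gradient. It estimates how the variance of one gradient component behaves as the qubit count grows.

```python
# Rough numerical probe (my own toy setup, not any cited paper's exact ansatz):
# estimate Var[dC/dtheta_0] for a circuit of RY layers with CZ entanglers,
# cost C = <Z> on qubit 0, gradient via the parameter-shift rule.
import numpy as np

rng = np.random.default_rng(0)

def apply_ry(state, theta, qubit, n):
    """Apply RY(theta) to `qubit` of an n-qubit statevector."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    gate = np.array([[c, -s], [s, c]])
    psi = np.moveaxis(state.reshape((2,) * n), qubit, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2."""
    psi = state.reshape((2,) * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1.0        # flip the sign where both qubits are |1>
    return psi.reshape(-1)

def cost(thetas, n, layers):
    """C(thetas) = <Z_0> after `layers` of per-qubit RY rotations plus a CZ chain."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_ry(state, thetas[k], q, n)
            k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    p0 = (np.abs(state.reshape((2,) * n)) ** 2)[0].sum()  # P(qubit 0 = |0>)
    return 2.0 * p0 - 1.0

def grad_first_param(thetas, n, layers):
    """Exact gradient w.r.t. the first angle via the parameter-shift rule."""
    plus, minus = thetas.copy(), thetas.copy()
    plus[0] += np.pi / 2
    minus[0] -= np.pi / 2
    return 0.5 * (cost(plus, n, layers) - cost(minus, n, layers))

for n in (2, 4, 6, 8):
    layers = 3 * n
    grads = [grad_first_param(rng.uniform(0.0, 2.0 * np.pi, n * layers), n, layers)
             for _ in range(200)]
    print(f"{n} qubits: Var[dC/dtheta_0] ~ {np.var(grads):.2e}")
```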