Power and limitations of single-qubit native quantum neural networks
- URL: http://arxiv.org/abs/2205.07848v1
- Date: Mon, 16 May 2022 17:58:27 GMT
- Title: Power and limitations of single-qubit native quantum neural networks
- Authors: Zhan Yu, Hongshun Yao, Mujin Li, Xin Wang
- Abstract summary: Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
- Score: 5.526775342940154
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum neural networks (QNNs) have emerged as a leading strategy to
establish applications in machine learning, chemistry, and optimization. While
the applications of QNN have been widely investigated, its theoretical
foundation remains less understood. In this paper, we formulate a theoretical
framework for the expressive ability of data re-uploading quantum neural
networks that consist of interleaved encoding circuit blocks and trainable
circuit blocks. First, we prove that single-qubit quantum neural networks can
approximate any univariate function by mapping the model to a partial Fourier
series. Beyond previous works' understanding of existence, we in particular
establish the exact correlations between the parameters of the trainable gates
and the working Fourier coefficients, by exploring connections to quantum
signal processing. Second, we discuss the limitations of single-qubit native
QNNs on approximating multivariate functions by analyzing the frequency
spectrum and the flexibility of Fourier coefficients. We further demonstrate
the expressivity and limitations of single-qubit native QNNs via numerical
experiments. As applications, we introduce natural extensions to multi-qubit
quantum neural networks, which exhibit the capability of classifying real-world
multi-dimensional data. We believe these results would improve our
understanding of QNNs and provide a helpful guideline for designing powerful
QNNs for machine learning tasks.
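The data re-uploading picture described in the abstract can be sketched in a few lines of NumPy (an illustrative toy, not the authors' implementation; the choice of RY trainable blocks, RZ data encoding, and a Pauli-Z observable is an assumption):

```python
import numpy as np

def rz(theta):
    """Z-rotation; imprints the data x into relative phases e^{±i x/2}."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def ry(theta):
    """Y-rotation; serves as a trainable single-qubit block."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def qnn(x, params):
    """Data re-uploading model: W_L S(x) ... S(x) W_0 applied to |0>,
    measured in Pauli-Z. With L encoding layers, the output f(x) is a
    partial Fourier series in x with integer frequencies up to L."""
    U = ry(params[0])
    for theta in params[1:]:
        U = ry(theta) @ rz(x) @ U   # re-upload x, then apply a trainable block
    state = U @ np.array([1.0, 0.0])
    Z = np.diag([1.0, -1.0])
    return float(np.real(np.conj(state) @ (Z @ state)))
```

Tuning `params` changes the working Fourier coefficients, while the number of encoding layers controls the accessible frequency spectrum, which is the mechanism the paper analyzes via quantum signal processing.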
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z)
- Enhancing the expressivity of quantum neural networks with residual connections [0.0]
We propose a quantum circuit-based algorithm to implement quantum residual neural networks (QResNets).
Our work lays the foundation for a complete quantum implementation of the classical residual neural networks.
arXiv Detail & Related papers (2024-01-29T04:00:51Z)
- Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]
This paper investigates how the training of quantum neural networks (QNNs) can be done using quantum optimization algorithms.
A QNN structure is constructed in which a variational parameterized circuit is incorporated as an input layer, named Variational Quantum Neural Networks (VQNNs).
VQNNs are evaluated on MNIST digit recognition (less complex) and crack image classification datasets, converging in less time than a plain QNN while retaining decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn local message passing among nodes through a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable or even superior to classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- QTN-VQC: An End-to-End Learning Framework for Quantum Neural Networks [71.14713348443465]
We introduce a trainable quantum tensor network (QTN) for quantum embedding on a variational quantum circuit (VQC).
QTN enables an end-to-end parametric model pipeline, namely QTN-VQC, from the generation of quantum embedding to the output measurement.
Our experiments on the MNIST dataset demonstrate the advantages of QTN for quantum embedding over other quantum embedding approaches.
arXiv Detail & Related papers (2021-10-06T14:44:51Z)
- Exponentially Many Local Minima in Quantum Neural Networks [9.442139459221785]
Quantum Neural Networks (QNNs) are important quantum applications because they hold promise similar to that of classical neural networks.
We conduct a quantitative investigation of the loss landscapes of QNNs and identify a class of simple yet extremely hard-to-train QNN instances.
We empirically confirm that our constructions are indeed hard instances in practice for typical gradient-based training.
arXiv Detail & Related papers (2021-10-06T03:23:44Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve quantum speed-ups.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- Experimental Quantum Generative Adversarial Networks for Image Generation [93.06926114985761]
We experimentally achieve the learning and generation of real-world hand-written digit images on a superconducting quantum processor.
Our work provides guidance for developing advanced quantum generative models on near-term quantum devices.
arXiv Detail & Related papers (2020-10-13T06:57:17Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel by pixel and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.