Expressibility-Enhancing Strategies for Quantum Neural Networks
- URL: http://arxiv.org/abs/2211.12670v2
- Date: Tue, 16 May 2023 04:05:25 GMT
- Title: Expressibility-Enhancing Strategies for Quantum Neural Networks
- Authors: Yalin Liao, Junpeng Zhan
- Abstract summary: Quantum neural networks (QNNs) can be trained to map input data to predictions.
Much work has focused on theoretically analyzing the expressive power of QNNs.
We propose four expressibility-enhancing strategies for QNNs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum neural networks (QNNs), represented by parameterized quantum
circuits, can be trained in the paradigm of supervised learning to map input
data to predictions. Much work has focused on theoretically analyzing the
expressive power of QNNs. However, in almost all literature, QNNs' expressive
power is numerically validated using only simple univariate functions. We
surprisingly discover that state-of-the-art QNNs with strong expressive power
can have poor performance in approximating even just a simple sinusoidal
function. To fill the gap, we propose four expressibility-enhancing strategies
for QNNs: sinusoidal-friendly embedding, redundant measurement,
post-measurement function, and random training data. We analyze the
effectiveness of these strategies via mathematical analysis and/or numerical
studies including learning complex sinusoidal-based functions. Our results from
comparative experiments validate that the four strategies can significantly
increase the QNNs' performance in approximating complex multivariable functions
and reduce the quantum circuit depth and qubits required.
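To make the "sinusoidal-friendly embedding" idea concrete, here is a minimal single-qubit sketch in plain NumPy (the circuit and the frequency weight `w` are illustrative, not the paper's construction): embedding the input x as a rotation RX(w·x) on |0⟩ gives ⟨Z⟩ = cos(w·x), so an embedding matched to the target's frequency content represents a sinusoid exactly.

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about the X axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

Z = np.diag([1.0, -1.0])  # Pauli-Z observable

def qnn_output(x, w):
    """<Z> after embedding the input as RX(w * x) applied to |0>.

    w is an illustrative embedding weight, not a parameter from the paper.
    """
    state = rx(w * x) @ np.array([1.0, 0.0])
    return float(np.real(np.conj(state) @ Z @ state))

# With w = 1 the output equals cos(x) exactly:
xs = np.linspace(0.0, 2 * np.pi, 50)
preds = [qnn_output(x, 1.0) for x in xs]
```

The point of the sketch is only that the choice of embedding, not circuit depth alone, determines which frequencies the model can express.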
Related papers
- From Graphs to Qubits: A Critical Review of Quantum Graph Neural Networks [56.51893966016221]
Quantum Graph Neural Networks (QGNNs) represent a novel fusion of quantum computing and Graph Neural Networks (GNNs)
This paper critically reviews the state-of-the-art in QGNNs, exploring various architectures.
We discuss their applications across diverse fields such as high-energy physics, molecular chemistry, finance and earth sciences, highlighting the potential for quantum advantage.
arXiv Detail & Related papers (2024-08-12T22:53:14Z) - Exploiting the equivalence between quantum neural networks and perceptrons [2.598133279943607]
Quantum machine learning models based on parametrized quantum circuits are considered to be among the most promising candidates for applications on quantum devices.
We explore the expressivity and inductive bias of QNNs by exploiting an exact mapping from QNNs with inputs $x$ to classical perceptrons acting on $x \otimes x$.
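The flavor of such a mapping can be sketched classically (this is an illustrative toy, not the paper's exact construction): lifting an input to the Kronecker-product feature $x \otimes x$ exposes all pairwise products $x_i x_j$ to an ordinary linear perceptron, which makes XOR-like labels linearly separable.

```python
import numpy as np

def lift(x):
    """Kronecker-product feature map: all pairwise products x_i * x_j."""
    return np.kron(x, x)

def perceptron_predict(w, x):
    """Linear perceptron acting on the lifted feature x (x) x."""
    return 1 if w @ lift(x) >= 0 else -1

# XOR-like data (label = sign of x1 * x2) is not linearly separable in x,
# but becomes separable in the x (x) x feature space:
X = [np.array([1.0, 1.0]), np.array([1.0, -1.0]),
     np.array([-1.0, 1.0]), np.array([-1.0, -1.0])]
y = [1, -1, -1, 1]
w = np.array([0.0, 1.0, 1.0, 0.0])  # weight that picks out the cross terms
preds = [perceptron_predict(w, x) for x in X]
```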
arXiv Detail & Related papers (2024-07-05T09:19:58Z) - Statistical Analysis of Quantum State Learning Process in Quantum Neural
Networks [4.852613028421959]
Quantum neural networks (QNNs) have been a promising framework in pursuing near-term quantum advantage.
We develop a no-go theorem for learning an unknown quantum state with QNNs even starting from a high-fidelity initial state.
arXiv Detail & Related papers (2023-09-26T14:54:50Z) - Predicting Expressibility of Parameterized Quantum Circuits using Graph
Neural Network [5.444441239596186]
We propose a novel method based on Graph Neural Networks (GNNs) for predicting the expressibility of Parameterized Quantum Circuits (PQCs).
By leveraging the graph-based representation of PQCs, our GNN-based model captures intricate relationships between circuit parameters and their resulting expressibility.
Experimental evaluation on a dataset of four thousand random PQCs and IBM Qiskit's hardware-efficient ansatz sets demonstrates the superior performance of our approach.
arXiv Detail & Related papers (2023-09-13T14:08:01Z) - Problem-Dependent Power of Quantum Neural Networks on Multi-Class
Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z) - QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional
Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learns the local message passing among nodes with the sequence of crossing-gate quantum operations.
To mitigate the inherent noises from modern quantum devices, we apply sparse constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even superior to, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z) - Evaluating the performance of sigmoid quantum perceptrons in quantum
neural networks [0.0]
Quantum neural networks (QNN) have been proposed as a promising architecture for quantum machine learning.
One candidate is the sigmoid quantum perceptron (SQP), designed to emulate the nonlinear activation functions of classical perceptrons.
We critically investigate both the capabilities and performance of SQP networks by computing their effective dimension and effective capacity.
arXiv Detail & Related papers (2022-08-12T10:08:11Z) - Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
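A known property of data re-uploading circuits (consistent with the framework this line of work studies) is that a single qubit alternating trainable rotations with data-encoding rotations realizes a truncated Fourier series in the input, with more layers admitting higher frequencies. A minimal NumPy sketch with illustrative (untrained) angles:

```python
import numpy as np

def rx(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

Z = np.diag([1.0, -1.0])

def reupload_qnn(x, thetas):
    """Each layer applies a trainable RX(theta), then re-encodes x via RZ(x)."""
    state = np.array([1.0 + 0j, 0.0])
    for th in thetas:
        state = rz(x) @ (rx(th) @ state)
    return float(np.real(np.conj(state) @ Z @ state))

thetas = [0.3, 1.1, -0.7]  # illustrative angles, not trained values
f = lambda x: reupload_qnn(x, thetas)
# f is 2*pi-periodic (RZ picks up only a global phase under x -> x + 2*pi),
# and its frequency content grows with the number of re-uploading layers.
```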
arXiv Detail & Related papers (2022-05-16T17:58:27Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy than QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.