A universal duplication-free quantum neural network
- URL: http://arxiv.org/abs/2106.13211v2
- Date: Wed, 20 Oct 2021 09:25:13 GMT
- Title: A universal duplication-free quantum neural network
- Authors: Xiaokai Hou, Guanyu Zhou, Qingyu Li, Shan Jin, Xiaoting Wang
- Abstract summary: We propose a new QNN model that achieves universality without the need for multiple state duplications.
We find that our model requires significantly fewer qubits and outperforms two popular QNN models in accuracy and relative error.
- Score: 0.8399688944263843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Universality of neural networks describes their ability to approximate arbitrary functions, and is a key ingredient in keeping the method effective. The established models for universal quantum neural networks (QNNs), however, require the preparation of multiple copies of the same quantum state to generate the nonlinearity, with the copy number growing significantly for highly oscillating functions, resulting in a huge demand on a large-scale quantum processor. To address this problem, we propose a new QNN model that achieves universality without the need for multiple state duplications, and is therefore more likely to be implemented on near-term devices. To demonstrate its effectiveness, we compare our proposal with two popular QNN models on typical supervised learning problems. We find that our model requires significantly fewer qubits and outperforms the other two in terms of accuracy and relative error.
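Why duplication generates nonlinearity, in a minimal sketch (an illustrative counting argument consistent with the abstract; the observable $O$ and readout convention below are assumptions, not the authors' exact construction): with the standard expectation-value readout, one copy of a state $|\psi\rangle$ with density matrix $\rho = |\psi\rangle\langle\psi|$ only yields statistics linear in $\rho$,
$$f_1(\psi) = \mathrm{Tr}(O\rho),$$
whereas measuring an observable on $n$ copies yields
$$f_n(\psi) = \langle\psi|^{\otimes n}\, O\, |\psi\rangle^{\otimes n} = \mathrm{Tr}\!\big(O\,\rho^{\otimes n}\big),$$
a degree-$n$ polynomial in the entries of $\rho$. Approximating a highly oscillating target requires a high polynomial degree, hence a large copy number $n$ and a correspondingly large qubit count.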
Related papers
- A Quantum Leaky Integrate-and-Fire Spiking Neuron and Network [0.0]
We introduce a new software model for quantum neuromorphic computing.
We use these quantum leaky integrate-and-fire neurons as building blocks in the construction of a quantum spiking neural network (QSNN) and a quantum spiking convolutional neural network (QSCNN).
arXiv Detail & Related papers (2024-07-23T11:38:06Z)
- Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z)
- Multi-Scale Feature Fusion Quantum Depthwise Convolutional Neural Networks for Text Classification [3.0079490585515343]
We propose a novel quantum neural network (QNN) model based on quantum convolution.
We develop a quantum depthwise convolution that significantly reduces the number of parameters and lowers computational complexity; a classical sketch of this parameter saving follows this entry.
We also introduce the multi-scale feature fusion mechanism to enhance model performance by integrating word-level and sentence-level features.
arXiv Detail & Related papers (2024-05-22T10:19:34Z)
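As context for the parameter saving claimed in the item above, here is a minimal classical sketch comparing the weight counts of a standard convolution and a depthwise-separable one. This is an assumed classical analogy only; the paper's quantum depthwise convolution is a different construction, and the function names below are illustrative.

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution layer (biases omitted)."""
    return k * k * c_in * c_out


def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """One k x k depthwise filter per input channel, then a 1x1 pointwise mix."""
    return k * k * c_in + c_in * c_out


if __name__ == "__main__":
    k, c_in, c_out = 3, 64, 128
    print(conv_params(k, c_in, c_out))                 # 73728 weights
    print(depthwise_separable_params(k, c_in, c_out))  # 8768 weights
```

For a 3x3 kernel with 64 input and 128 output channels, the factorization cuts the weight count by roughly 8x, which is the classical intuition behind the reduction pursued in the quantum setting.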
- Coherent Feed Forward Quantum Neural Network [2.1178416840822027]
Quantum machine learning, focusing on quantum neural networks (QNNs), remains a vastly uncharted field of study.
We introduce a bona fide QNN model, which seamlessly aligns with the versatility of a traditional feed-forward neural network (FFNN) in terms of its adaptable intermediate layers and nodes.
We test our proposed model on various benchmarking datasets such as the diagnostic breast cancer (Wisconsin) and credit card fraud detection datasets.
arXiv Detail & Related papers (2024-02-01T15:13:26Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Vertical Layering of Quantized Neural Networks for Heterogeneous Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z)
- A duplication-free quantum neural network for universal approximation [0.8399688944263843]
Universality of a quantum neural network refers to its ability to approximate arbitrary functions.
We propose a simple design of a duplication-free quantum neural network whose universality can be rigorously proved.
arXiv Detail & Related papers (2022-11-21T07:43:32Z)
- The dilemma of quantum neural networks [63.82713636522488]
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A\leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in the original full-precision networks to high-dimensional quantization features; a minimal sketch of plain weight quantization follows this entry for background.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
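For context on the cheap storage that motivates quantized networks in the last item, here is a minimal uniform 8-bit weight-quantization sketch. This is an illustrative assumption for background only, not the high-dimensional feature-projection method of that paper.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Symmetric uniform quantization: int8 codes plus one float scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from codes and scale."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize(w)
print(np.abs(w - dequantize(q, s)).max())  # rounding error is at most scale / 2
```

Storing int8 codes plus a single scale in place of float32 weights gives roughly a 4x storage reduction; the accuracy gap this introduces is what training-side methods such as the one above try to close.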