Toward Physically Realizable Quantum Neural Networks
- URL: http://arxiv.org/abs/2203.12092v1
- Date: Tue, 22 Mar 2022 23:03:32 GMT
- Title: Toward Physically Realizable Quantum Neural Networks
- Authors: Mohsen Heidari, Ananth Grama, Wojciech Szpankowski
- Abstract summary: Current solutions for quantum neural networks (QNNs) pose significant challenges concerning their scalability and physical realizability.
The exponential state space of QNNs complicates the scalability of training procedures.
This paper presents a new model for QNNs that relies on band-limited Fourier expansions of transfer functions of quantum perceptrons.
- Score: 15.018259942339446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been significant recent interest in quantum neural networks (QNNs),
along with their applications in diverse domains. Current solutions for QNNs
pose significant challenges concerning their scalability, their adherence to the
postulates of quantum mechanics, and their physical realizability. The
exponential state space of QNNs poses challenges for
the scalability of training procedures. The no-cloning principle prohibits
making multiple copies of training samples, and the measurement postulates lead
to non-deterministic loss functions. Consequently, the physical realizability
and efficiency of existing approaches that rely on repeated measurement of
several copies of each sample for training QNNs are unclear. This paper
presents a new model for QNNs that relies on band-limited Fourier expansions of
transfer functions of quantum perceptrons (QPs) to design scalable training
procedures. This training procedure is augmented with a randomized quantum
stochastic gradient descent technique that eliminates the need for sample
replication. We show that this training procedure converges to the true minima
in expectation, even in the presence of non-determinism due to quantum
measurement. Our solution has a number of important benefits: (i) using QPs
with concentrated Fourier power spectrum, we show that the training procedure
for QNNs can be made scalable; (ii) it eliminates the need for resampling, thus
staying consistent with the no-cloning rule; and (iii) it enhances the data
efficiency of the overall training process, since each data sample is processed
only once per epoch. We present a detailed theoretical foundation for the
scalability, accuracy, and data efficiency of our models and methods. We also
validate the utility of our
approach through a series of numerical experiments.
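
As a concrete illustration of the approach described above, the following is a minimal numerical sketch, not the authors' implementation. It models a quantum perceptron's transfer function as a band-limited Fourier expansion $f(x) = \sum_{|\omega| \le K} c_\omega e^{i\omega x}$ and trains the coefficients by stochastic gradient descent in which each training sample yields exactly one binary measurement outcome, so no sample is ever copied or re-measured. The single-input setting, the band limit K, and all names (features, single_shot, target) are simplifying assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch, NOT the paper's implementation. Assumptions:
#  - a single-input perceptron whose transfer function is a band-limited
#    Fourier series f(x) = sum_{|w|<=K} c_w e^{iwx};
#  - each training sample can be measured exactly once, yielding a single
#    +/-1 outcome whose expectation equals the target transfer value
#    (this models quantum measurement and respects no-cloning).
K = 2
freqs = np.arange(-K, K + 1)

def features(x):
    """Fourier feature vector (e^{-iKx}, ..., e^{iKx})."""
    return np.exp(1j * freqs * x)

def f(c, x):
    """Band-limited transfer function with coefficient vector c."""
    return np.real(c @ features(x))

# Unknown "true" perceptron: Hermitian coefficients (c_{-w} = conj(c_w))
# keep f real and bounded within [-1, 1].
target = np.array([0.1 - 0.05j, 0.2j, 0.3, -0.2j, 0.1 + 0.05j])

def single_shot(x):
    """One binary measurement with E[outcome] = f(target, x)."""
    p_plus = (1.0 + np.clip(f(target, x), -1.0, 1.0)) / 2.0
    return 1.0 if rng.random() < p_plus else -1.0

# Stochastic gradient descent on the squared loss. The one-shot outcome y
# is an unbiased estimate of f(target, x), so the gradient estimate is
# unbiased and the iterates converge to the true minimum in expectation,
# despite measurement non-determinism.
c = np.zeros(2 * K + 1, dtype=complex)
for _ in range(20000):
    x = rng.uniform(0.0, 2.0 * np.pi)   # fresh sample, measured once
    y = single_shot(x)
    c -= 0.01 * (f(c, x) - y) * np.conj(features(x))

print("learned coefficients:", np.round(c, 2))
print("target coefficients: ", np.round(target, 2))
```

Note that the number of trainable coefficients, $2K+1$ per perceptron, is set by the band limit rather than by the exponential state space, which is the scalability lever the abstract describes.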
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing $d$ tunable RZ gates and $G-d$ Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that a sample complexity scaling linearly in $d$ is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in $d$.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z) - Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) in the hope of utilizing quantum advantage to speed up IL.
We develop two QIL algorithms: quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z) - Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Power and limitations of single-qubit native quantum neural networks [5.526775342940154]
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
arXiv Detail & Related papers (2022-05-16T17:58:27Z) - Exponentially Many Local Minima in Quantum Neural Networks [9.442139459221785]
Quantum Neural Networks (QNNs) are important quantum applications because they hold promise similar to that of classical neural networks.
We conduct a quantitative investigation on the landscape of loss functions of QNNs and identify a class of simple yet extremely hard QNN instances for training.
We empirically confirm that our constructions can indeed be hard instances in practice with typical gradient-based optimizers.
arXiv Detail & Related papers (2021-10-06T03:23:44Z) - Quantum Generative Training Using Rényi Divergences [0.22559617939136506]
Quantum neural networks (QNNs) are a framework for creating quantum algorithms.
A major challenge in QNN development is a concentration of measure phenomenon known as a barren plateau.
We show that an unbounded loss function can circumvent the existing no-go results.
arXiv Detail & Related papers (2021-06-17T14:50:53Z) - Quantum Federated Learning with Quantum Data [87.49715898878858]
Quantum machine learning (QML) has emerged as a promising field that leans on the developments in quantum computing to explore large complex machine learning problems.
This paper proposes the first fully quantum federated learning framework that can operate over quantum data and, thus, share the learning of quantum circuit parameters in a decentralized manner.
arXiv Detail & Related papers (2021-05-30T12:19:27Z) - Recurrence of Optimum for Training Weight and Activation Quantized
Networks [4.103701929881022]
Training deep learning models with low-precision weights and activations involves a demanding optimization task.
We show how to overcome the discrete nature of network quantization.
We also show numerical evidence of the recurrence phenomenon of weight evolution in training quantized deep networks.
arXiv Detail & Related papers (2020-12-10T09:14:43Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.