Are Quantum Circuits Better than Neural Networks at Learning
Multi-dimensional Discrete Data? An Investigation into Practical Quantum
Circuit Generative Models
- URL: http://arxiv.org/abs/2212.06380v1
- Date: Tue, 13 Dec 2022 05:31:31 GMT
- Title: Are Quantum Circuits Better than Neural Networks at Learning
Multi-dimensional Discrete Data? An Investigation into Practical Quantum
Circuit Generative Models
- Authors: Pengyuan Zhai
- Abstract summary: We show that multi-layer parameterized quantum circuits (MPQCs) are more expressive than classical neural networks (NNs).
We organize available sources into a systematic proof of why MPQCs are able to generate probability distributions that cannot be efficiently simulated classically.
We address practical issues such as how to efficiently train a quantum circuit with only limited samples, how to efficiently calculate the (quantum) gradient, and how to alleviate mode collapse.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Are multi-layer parameterized quantum circuits (MPQCs) more expressive than
classical neural networks (NNs)? How, why, and in what aspects? In this work,
we survey and develop intuitive insights into the expressive power of MPQCs in
relation to classical NNs. We organize available sources into a systematic
proof of why MPQCs are able to generate probability distributions that cannot
be efficiently simulated classically. We first show that instantaneous quantum
polynomial circuits (IQPCs) are unlikely to be simulated classically to within
a multiplicative error, and then show that MPQCs efficiently generalize IQPCs.
We support the surveyed claims with numerical simulations: with the MPQC as the
core architecture, we build different versions of quantum generative models to
learn a given multi-dimensional, multi-modal discrete data distribution, and
show their superior performance over a classical Generative Adversarial
Network (GAN) equipped with the Gumbel-Softmax for generating discrete data. In
addition, we address practical issues such as how to efficiently train a
quantum circuit with only limited samples, how to efficiently calculate the
(quantum) gradient, and how to alleviate mode collapse. We propose and
experimentally verify an efficient training-and-fine-tuning scheme for lowering
the output noise and decreasing mode collapse. As an original contribution, we
develop a novel loss function (MCR loss) inspired by an information-theoretical
measure -- the coding rate reduction metric -- which yields more expressive and
geometrically meaningful latent-space representations, beneficial for both
model selection and alleviating mode collapse. We derive the gradient of our
MCR loss with respect to the circuit parameters under two settings: with the
radial basis function (RBF) kernel and with an NN discriminator, and conduct
experiments to showcase its effectiveness.
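The classical baseline mentioned in the abstract generates discrete samples through the Gumbel-Softmax relaxation. As context, here is a minimal NumPy sketch of that relaxation; the standalone function and the temperature default are illustrative assumptions, not the paper's GAN implementation.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Differentiable relaxation of categorical sampling:
    #   y = softmax((logits + g) / tau),  g_i ~ Gumbel(0, 1).
    # As tau -> 0 the output approaches a one-hot sample, which is what lets
    # a GAN generator backpropagate through "discrete" outputs.
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))  # numerically stable softmax
    return y / y.sum(axis=-1, keepdims=True)
```

The MCR loss builds on the coding rate reduction (MCR^2) objective of Yu et al. The sketch below illustrates that metric in a kernelized form, using the identity log det(I + a Z Z^T) = log det(I + a Z^T Z) so an RBF Gram matrix can stand in for explicit features, matching the paper's RBF-kernel setting in spirit. The `rbf_gram` helper, the `d_eff` scale, and all hyperparameter values are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def coding_rate(gram, eps=0.5):
    # R = 1/2 log det(I + (d_eff / (m * eps^2)) * K), K the m x m Gram matrix.
    # d_eff stands in for the feature dimension (a tunable scale here).
    m = gram.shape[0]
    d_eff = m  # illustrative choice
    _, logdet = np.linalg.slogdet(np.eye(m) + (d_eff / (m * eps**2)) * gram)
    return 0.5 * logdet

def rbf_gram(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2) for samples in the rows of X.
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def mcr2(X, labels, eps=0.5, gamma=1.0):
    # Rate reduction Delta R = R(whole batch) - sum_j (m_j / m) R(group j).
    # Maximizing Delta R expands the batch as a whole while compressing each
    # group/mode, which is the property used against mode collapse.
    labels = np.asarray(labels)
    m = X.shape[0]
    K = rbf_gram(X, gamma)
    expand = coding_rate(K, eps)
    compress = 0.0
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        compress += (len(idx) / m) * coding_rate(K[np.ix_(idx, idx)], eps)
    return expand - compress
```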
Related papers
- Efficient Classical Computation of Single-Qubit Marginal Measurement Probabilities to Simulate Certain Classes of Quantum Algorithms
We introduce a novel CNOT "functional" that leverages neural networks to generate unitary transformations.
For random circuit simulations, our modified QC-DFT enables efficient computation of single-qubit marginal measurement probabilities.
arXiv Detail & Related papers (2024-11-11T09:30:33Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction
Emulation arises as the main near-term alternative to explore the potential of QRNNs.
We show how the present and past information from a time series is transmitted through the circuit.
We derive the analytical gradient and the Hessian of the network outputs with respect to its trainable parameters.
arXiv Detail & Related papers (2023-10-31T17:32:11Z)
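Analytical circuit gradients like these, and the "(quantum) gradient" computation in the main paper, are commonly obtained with the parameter-shift rule: for a gate generated by a Pauli operator, the exact derivative of an expectation value costs two extra circuit evaluations per parameter. A minimal single-qubit sketch, where cos(theta) stands in for any circuit or hardware call returning an expectation value (this is the generic rule, not this paper's density-matrix derivation):

```python
import numpy as np

def expectation_z(theta):
    # <Z> after RY(theta)|0> equals cos(theta); a stand-in for a real
    # simulator or hardware call.
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Exact derivative for Pauli-generated gates:
    #   df/dtheta = [f(theta + s) - f(theta - s)] / (2 sin(s)), s = pi/2.
    # Unlike finite differences, the rule is exact for such gates.
    return (f(theta + shift) - f(theta - shift)) / (2.0 * np.sin(shift))

print(parameter_shift_grad(expectation_z, 0.3))  # ~ -sin(0.3)
```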
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of QNNs on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size
Quantum many-body problems are central to demystifying exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NNs) for representing quantum states and the variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems.
We propose an NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
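For context on the NN-plus-VMC recipe: VMC draws configurations from |psi|^2 (e.g., with Metropolis sampling) and estimates the energy as the average local energy E_loc = (H psi)/psi, which is exactly the quantity VQ-NQS accelerates. A minimal sketch for the 1D harmonic oscillator with a Gaussian trial wavefunction standing in for an NN ansatz (all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_psi(x, alpha):
    # Gaussian trial wavefunction psi_alpha(x) = exp(-alpha * x^2);
    # a stand-in for a neural-network ansatz.
    return -alpha * x**2

def local_energy(x, alpha):
    # E_loc = (H psi)/psi for H = -1/2 d^2/dx^2 + x^2/2;
    # for the Gaussian ansatz, psi''/psi = 4 alpha^2 x^2 - 2 alpha.
    return alpha + (0.5 - 2.0 * alpha**2) * x**2

def vmc_energy(alpha, n_steps=20000, step=1.0):
    # Metropolis sampling from |psi|^2, then average the local energies.
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=step)
        # Accept with probability |psi(x_new)|^2 / |psi(x)|^2.
        if np.log(rng.uniform()) < 2.0 * (log_psi(x_new, alpha) - log_psi(x, alpha)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return float(np.mean(energies))

print(vmc_energy(0.5))  # ~0.5: the exact ground-state energy at alpha = 1/2
```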
- A didactic approach to quantum machine learning with a single qubit
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
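Data re-uploading interleaves the same classical input with trainable rotation angles, layer after layer, so even a single qubit can realize nonlinear functions of the input. A minimal NumPy sketch under those assumptions (the RY-only layout and function names are illustrative, not the paper's exact circuits):

```python
import numpy as np

def ry(angle):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def reupload_state(x, thetas, weights):
    # Re-upload the SAME input x in every layer: RY(theta_l + w_l * x).
    # The repetition is what gives a single qubit nonlinear expressivity in x.
    state = np.array([1.0, 0.0])
    for theta, w in zip(thetas, weights):
        state = ry(theta + w * x) @ state
    return state

def predict(x, thetas, weights):
    # Probability of measuring |0>, usable as a binary class score.
    return float(np.abs(reupload_state(x, thetas, weights)[0]) ** 2)

print(predict(0.7, [0.1, 0.4, 0.2], [1.0, 2.0, 0.5]))
```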
- Sample-efficient Quantum Born Machine through Coding Rate Reduction
The quantum circuit Born machine (QCBM) is a quantum-physics-inspired implicit generative model naturally suited to learning binary images.
We show that matching up to the second moment alone is not sufficient for training the quantum generator, but when combined with the class probability estimation loss, MCR$^2$ is able to resist mode collapse.
arXiv Detail & Related papers (2022-11-14T06:21:26Z)
- Power and limitations of single-qubit native quantum neural networks
Quantum neural networks (QNNs) have emerged as a leading strategy to establish applications in machine learning, chemistry, and optimization.
We formulate a theoretical framework for the expressive ability of data re-uploading quantum neural networks.
arXiv Detail & Related papers (2022-05-16T17:58:27Z)
- Error mitigation and quantum-assisted simulation in the error corrected regime
A standard approach to quantum computing is based on the idea of promoting a classically simulable and fault-tolerant set of operations.
We show how the addition of noisy magic resources allows one to boost classical quasiprobability simulations of a quantum circuit.
arXiv Detail & Related papers (2021-03-12T20:58:41Z)
- On the learnability of quantum neural networks
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a noiseless QNN, then it can also be effectively learned by the QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Recurrent Quantum Neural Networks
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel by pixel and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z)