Sample-efficient Quantum Born Machine through Coding Rate Reduction
- URL: http://arxiv.org/abs/2211.10418v1
- Date: Mon, 14 Nov 2022 06:21:26 GMT
- Title: Sample-efficient Quantum Born Machine through Coding Rate Reduction
- Authors: Pengyuan Zhai
- Abstract summary: The quantum circuit Born machine (QCBM) is a quantum physics inspired implicit generative model naturally suitable for learning binary images.
We show that matching up to the second moment alone is not sufficient for training the quantum generator, but when combined with the class probability estimation loss, MCR$^2$ is able to resist mode collapse.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quantum circuit Born machine (QCBM) is a quantum physics inspired
implicit generative model naturally suitable for learning binary images, with a
potential advantage of modeling discrete distributions that are hard to
simulate classically. As data samples are generated quantum-mechanically, QCBMs
encompass a unique optimization landscape. However, pioneering works on QCBMs
do not consider the practical scenario where only small batch sizes are allowed
during training. QCBMs trained with a statistical two-sample test objective in
the image space require large amounts of projective measurements to approximate
the model distribution well, impractical for large-scale quantum systems due to
the exponential scaling of the probability space. QCBMs trained adversarially
against a deep neural network discriminator are proof-of-concept models that
face mode collapse. In this work we investigate practical learning of QCBMs. We
use the information-theoretic \textit{Maximal Coding Rate Reduction} (MCR$^2$)
metric as a second moment matching tool and study its effect on mode collapse
in QCBMs. We compute the sampling based gradient of MCR$^2$ with respect to
quantum circuit parameters with or without an explicit feature mapping. We
experimentally show that matching up to the second moment alone is not
sufficient for training the quantum generator, but when combined with the class
probability estimation loss, MCR$^2$ is able to resist mode collapse. In
addition, we show that an adversarially trained neural network kernel for
infinite-moment matching is also effective against mode collapse. On the Bars and
Stripes dataset, our proposed techniques alleviate mode collapse to a larger
degree than previous QCBM training schemes, moving one step closer towards
practicality and scalability.
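The second-moment-matching idea behind the abstract can be made concrete with the coding-rate functions from the MCR$^2$ literature: the rate $R(Z,\epsilon) = \frac{1}{2}\log\det(I + \frac{d}{n\epsilon^2} Z Z^\top)$ for $n$ features $Z \in \mathbb{R}^{d \times n}$, minus the class-conditional compressed rate. The sketch below is illustrative only (function names and the $\epsilon$ value are assumptions, not the paper's code):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Rate R(Z, eps) = 0.5 * logdet(I + d/(n eps^2) Z Z^T) for Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """MCR^2 objective: total rate minus the class-conditional (compressed) rate."""
    d, n = Z.shape
    compressed = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]           # features belonging to class c
        nc = Zc.shape[1]
        _, logdet = np.linalg.slogdet(
            np.eye(d) + (d / (nc * eps**2)) * Zc @ Zc.T)
        compressed += (nc / n) * 0.5 * logdet
    return coding_rate(Z, eps) - compressed
```

Because both terms depend on $Z$ only through second moments ($Z Z^\top$ and $Z_c Z_c^\top$), maximizing this quantity matches the data only up to second-order statistics, which is why the abstract pairs it with a class probability estimation loss.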
Related papers
- Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
arXiv Detail & Related papers (2024-08-01T20:21:52Z)
- An error-mitigated photonic quantum circuit Born machine [0.0]
Generative machine learning models aim to learn the underlying distribution of the data in order to generate new samples.
Quantum circuit Born machines (QCBMs) are a popular choice of quantum generative models which can be implemented on shallow circuits.
We show that a new error mitigation technique, called recycling mitigation, greatly improves the training of QCBMs in realistic scenarios with photon loss.
arXiv Detail & Related papers (2024-05-03T17:53:15Z)
- Hybrid quantum transfer learning for crack image classification on NISQ hardware [62.997667081978825]
We present an application of quantum transfer learning for detecting cracks in gray value images.
We compare the performance and training time of PennyLane's standard qubits with IBM's qasm_simulator and real backends.
arXiv Detail & Related papers (2023-07-31T14:45:29Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Are Quantum Circuits Better than Neural Networks at Learning Multi-dimensional Discrete Data? An Investigation into Practical Quantum Circuit Generative Models [0.0]
We show that multi-layer parameterized quantum circuits (MPQCs) are more expressive than classical neural networks (NNs).
We organize available sources into a systematic proof of why MPQCs are able to generate probability distributions that cannot be efficiently simulated classically.
We address practical issues such as how to efficiently train a quantum circuit with only limited samples, how to efficiently calculate the (quantum) gradient, and how to alleviate mode collapse.
arXiv Detail & Related papers (2022-12-13T05:31:31Z)
- QSAN: A Near-term Achievable Quantum Self-Attention Network [73.15524926159702]
Self-Attention Mechanism (SAM) is good at capturing the internal connections of features.
A novel Quantum Self-Attention Network (QSAN) is proposed for image classification tasks on near-term quantum devices.
arXiv Detail & Related papers (2022-07-14T12:22:51Z)
- Introducing Non-Linearity into Quantum Generative Models [0.0]
We introduce a model that adds non-linear activations via a neural network structure onto the standard Born Machine framework.
We compare our non-linear QNBM to the linear Quantum Circuit Born Machine.
We show that while both models can easily learn a trivial uniform probability distribution, the QNBM achieves an almost 3x smaller error rate than a QCBM.
arXiv Detail & Related papers (2022-05-28T18:59:49Z)
- Quantum Generative Training Using Rényi Divergences [0.22559617939136506]
Quantum neural networks (QNNs) are a framework for creating quantum algorithms.
A major challenge in QNN development is a concentration of measure phenomenon known as a barren plateau.
We show that an unbounded loss function can circumvent the existing no-go results.
arXiv Detail & Related papers (2021-06-17T14:50:53Z)
- Error mitigation and quantum-assisted simulation in the error-corrected regime [77.34726150561087]
A standard approach to quantum computing is based on the idea of promoting a classically simulable and fault-tolerant set of operations.
We show how the addition of noisy magic resources allows one to boost classical quasiprobability simulations of a quantum circuit.
arXiv Detail & Related papers (2021-03-12T20:58:41Z)
- Preparation of excited states for nuclear dynamics on a quantum computer [117.44028458220427]
We study two different methods to prepare excited states on a quantum computer.
We benchmark these techniques on emulated and real quantum devices.
These findings show that quantum techniques designed to achieve good scaling on fault tolerant devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
arXiv Detail & Related papers (2020-09-28T17:21:25Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.