Quantum Generative Training Using Rényi Divergences
- URL: http://arxiv.org/abs/2106.09567v1
- Date: Thu, 17 Jun 2021 14:50:53 GMT
- Title: Quantum Generative Training Using Rényi Divergences
- Authors: Maria Kieferova and Carlos Ortiz Marrero and Nathan Wiebe
- Abstract summary: Quantum neural networks (QNNs) are a framework for creating quantum algorithms.
A major challenge in QNN development is a concentration of measure phenomenon known as a barren plateau.
We show that an unbounded loss function can circumvent the existing no-go results.
- Score: 0.22559617939136506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum neural networks (QNNs) are a framework for creating quantum
algorithms that promises to combine the speedups of quantum computation with
the widespread successes of machine learning. A major challenge in QNN
development is a concentration of measure phenomenon known as a barren plateau
that leads to exponentially small gradients for a range of QNN models. In this
work, we examine the assumptions that give rise to barren plateaus and show
that an unbounded loss function can circumvent the existing no-go results. We
propose a training algorithm that minimizes the maximal Rényi divergence of
order two and present techniques for gradient computation. We compute the
closed form of the gradients for Unitary QNNs and Quantum Boltzmann Machines
and provide sufficient conditions for the absence of barren plateaus in these
models. We demonstrate our approach in two use cases: thermal state learning
and Hamiltonian learning. In our numerical experiments, we observed rapid
convergence of our training loss function and frequently achieved a $99\%$
average fidelity in fewer than $100$ epochs.
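The loss above is the maximal Rényi divergence of order two, which for a full-rank target state reduces to $D_2(\rho\|\sigma)=\log\operatorname{Tr}(\rho^2\sigma^{-1})$. The following NumPy sketch is mine, not the paper's implementation (the paper derives closed-form gradients and evaluates the loss on quantum states rather than by matrix inversion); it checks two properties the abstract relies on: the loss vanishes when the states match, and, unlike fidelity-type losses, it is unbounded.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    # Full-rank density matrix sampled from a Ginibre ensemble.
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T
    return rho / np.real(np.trace(rho))

def renyi2(rho, sigma):
    # Order-2 (maximal) Renyi divergence: D_2(rho||sigma) = log Tr(rho^2 sigma^{-1}).
    return float(np.log(np.real(np.trace(rho @ rho @ np.linalg.inv(sigma)))))

d = 4
rho = random_density_matrix(d)
print(renyi2(rho, rho))  # ~0: the loss vanishes when model and target coincide

# Unboundedness: as sigma approaches a rank-deficient (pure) state,
# the divergence grows without bound.
pure = np.zeros((d, d), dtype=complex)
pure[0, 0] = 1.0
for eps in (1e-1, 1e-3, 1e-5):
    sigma = (1 - eps) * pure + eps * np.eye(d) / d
    print(eps, renyi2(rho, sigma))  # diverges as eps -> 0
```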
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction; a toy sketch of this reduction follows the entry.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
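A toy illustration (not the paper's FNO pipeline) of the reduction the entry above describes: a random $n$-qubit state carries $2^n$ amplitudes, yet a compact observable set such as the $n$ single-site $\langle Z_i \rangle$ expectations summarizes it with only $n$ numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
psi = rng.standard_normal(2 ** n) + 1j * rng.standard_normal(2 ** n)
psi /= np.linalg.norm(psi)  # random state with 2**n = 1024 amplitudes

# Compact observable set: <Z_i> for each site, n numbers instead of 2**n amplitudes.
probs = np.abs(psi) ** 2
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)[None, :]) & 1  # bit i of each basis index
z_exp = probs @ (1 - 2 * bits)  # <Z_i> = sum_x p(x) (-1)^{x_i}
print(z_exp.shape)  # (10,): the compact summary an operator model can evolve
```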
- Projected Stochastic Gradient Descent with Quantum Annealed Binary Gradients [51.82488018573326]
We present QP-SBGD, a novel layer-wise optimiser tailored towards training neural networks with binary weights.
BNNs reduce the computational requirements and energy consumption of deep learning models with minimal loss in accuracy.
Our algorithm is implemented layer-wise, making it suitable to train larger networks on resource-limited quantum hardware; a classical toy sketch of the projection step follows this entry.
arXiv Detail & Related papers (2023-10-23T17:32:38Z)
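A classical toy sketch of layer-wise projected SGD with binary weights. The assumptions are mine throughout: in QP-SBGD the projection step is posed as a QUBO for a quantum annealer, while here a plain sign projection with a straight-through gradient stands in.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression with a binary teacher: recover w_true in {-1,+1}^d.
d, m = 16, 200
w_true = rng.choice([-1.0, 1.0], size=d)
X = rng.standard_normal((m, d))
y = X @ w_true

w_latent = rng.standard_normal(d)  # continuous latent weights
lr = 0.05
for _ in range(500):
    w_bin = np.sign(w_latent)
    w_bin[w_bin == 0] = 1.0  # projection onto binary weights (the QUBO step in QP-SBGD)
    grad = 2 * X.T @ (X @ w_bin - y) / m  # straight-through gradient estimate
    w_latent -= lr * grad
print(np.mean(np.sign(w_latent) == w_true))  # fraction of teacher weights recovered
```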
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Toward Physically Realizable Quantum Neural Networks [15.018259942339446]
Current proposals for quantum neural networks (QNNs) face challenges concerning their scalability.
The exponential state space of QNNs poses challenges for the scalability of training procedures.
This paper presents a new model for QNNs that relies on band-limited Fourier expansions of transfer functions of quantum perceptrons; a classical illustration of band-limiting follows this entry.
arXiv Detail & Related papers (2022-03-22T23:03:32Z)
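The band-limited Fourier idea in the entry above can be illustrated classically: expand a transfer function in a Fourier series and keep only frequencies up to a cutoff. The choice of tanh and the cutoff K are my illustrative assumptions, not the paper's model.

```python
import numpy as np

# Band-limited Fourier expansion of a transfer function on [-pi, pi].
K = 5  # keep only frequencies |k| <= K
x = np.linspace(-np.pi, np.pi, 2048, endpoint=False)
f = np.tanh(2 * x)  # stand-in transfer function

c = np.fft.rfft(f) / len(x)  # Fourier coefficients c_k
c[K + 1:] = 0.0              # band-limit: discard high frequencies
f_bl = np.fft.irfft(c * len(x), n=len(x))

print(np.max(np.abs(f - f_bl)))  # truncation error of the band-limited model
```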
- Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees; a toy demonstration of the effect follows this entry.
arXiv Detail & Related papers (2021-12-30T10:27:08Z)
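A self-contained toy demonstration of the vanishing-gradient (barren plateau) effect these entries address (my sketch, not any paper's code): with Haar-random unitaries standing in for deep random-structure QNNs, the variance of the gradient of a single Pauli rotation, computed exactly via the parameter-shift rule, decays exponentially with qubit count.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar(d):
    # Haar-random unitary via QR of a complex Ginibre matrix.
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def cost(theta, ul, ur, g, obs, psi0):
    # Circuit UR . exp(-i theta G / 2) . UL applied to |0...0>, measured with obs.
    gate = np.cos(theta / 2) * np.eye(len(g)) - 1j * np.sin(theta / 2) * g
    psi = ur @ (gate @ (ul @ psi0))
    return np.real(np.vdot(psi, obs @ psi))

for n in range(2, 8):
    d = 2 ** n
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(d // 2))  # Pauli Z on the first qubit
    psi0 = np.zeros(d, dtype=complex)
    psi0[0] = 1.0
    grads = []
    for _ in range(200):
        ul, ur = haar(d), haar(d)
        # Parameter-shift rule (exact for Pauli rotations): dC/dtheta at theta = 0.
        grads.append(0.5 * (cost(np.pi / 2, ul, ur, z0, z0, psi0)
                            - cost(-np.pi / 2, ul, ur, z0, z0, psi0)))
    print(n, np.var(grads))  # gradient variance shrinks exponentially with n
```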
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradient vanishes at a rate exponential in the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for the application of binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- Absence of Barren Plateaus in Quantum Convolutional Neural Networks [0.0]
Quantum Convolutional Neural Networks (QCNNs) have been proposed.
We rigorously analyze the gradient scaling for the parameters in the QCNN architecture.
arXiv Detail & Related papers (2020-11-05T16:46:13Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Trainability of Dissipative Perceptron-Based Quantum Neural Networks [0.8258451067861933]
We analyze the gradient scaling (and hence the trainability) for a recently proposed architecture that we call dissipative QNNs (DQNNs).
We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits.
arXiv Detail & Related papers (2020-05-26T00:59:09Z)