Layerwise learning for quantum neural networks
- URL: http://arxiv.org/abs/2006.14904v1
- Date: Fri, 26 Jun 2020 10:44:46 GMT
- Title: Layerwise learning for quantum neural networks
- Authors: Andrea Skolik, Jarrod R. McClean, Masoud Mohseni, Patrick van der
Smagt, Martin Leib
- Abstract summary: We show a layerwise learning strategy for parametrized quantum circuits.
The circuit depth is incrementally grown during optimization, and only subsets of parameters are updated in each training step.
We demonstrate our approach on an image-classification task on handwritten digits, and show that layerwise learning attains an 8% lower generalization error on average.
- Score: 7.2237324920669055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increased focus on quantum circuit learning for near-term
applications on quantum devices, in conjunction with unique challenges
presented by cost function landscapes of parametrized quantum circuits,
strategies for effective training are becoming increasingly important. In order
to ameliorate some of these challenges, we investigate a layerwise learning
strategy for parametrized quantum circuits. The circuit depth is incrementally
grown during optimization, and only subsets of parameters are updated in each
training step. We show that when considering sampling noise, this strategy can
help avoid the problem of barren plateaus of the error surface due to the low
depth of circuits, low number of parameters trained in one step, and larger
magnitude of gradients compared to training the full circuit. These properties
make our algorithm preferable for execution on noisy intermediate-scale quantum
devices. We demonstrate our approach on an image-classification task on
handwritten digits, and show that layerwise learning attains an 8% lower
generalization error on average in comparison to standard learning schemes for
training quantum circuits of the same size. Additionally, the percentage of
runs that reach lower test errors is up to 40% larger compared to training the
full circuit, which is susceptible to creeping onto a plateau during training.
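The training procedure described in the abstract — grow the circuit one layer at a time and update only the newest block of parameters in each phase — can be sketched as follows. This is a minimal toy illustration, assuming a generic differentiable cost function in place of a real parametrized quantum circuit; the 4-parameter layer size, the finite-difference gradient, and all names are illustrative, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_params():
    """One new layer's worth of parameters (small random initialization)."""
    return rng.normal(scale=0.1, size=4)

def loss(params):
    """Toy stand-in for a circuit's cost function (not a real circuit)."""
    return float(np.sum(np.sin(params)) ** 2)

def grad(params, mask, eps=1e-4):
    """Central-difference gradient, zeroed where mask is False (frozen)."""
    g = np.zeros_like(params)
    for i in np.flatnonzero(mask):
        plus, minus = params.copy(), params.copy()
        plus[i] += eps
        minus[i] -= eps
        g[i] = (loss(plus) - loss(minus)) / (2 * eps)
    return g

def layerwise_train(n_layers=3, steps_per_phase=200, lr=0.1):
    params = np.empty(0)
    for _ in range(n_layers):
        # Grow the circuit depth by appending one layer of parameters.
        params = np.concatenate([params, layer_params()])
        # Train only the newest layer; earlier layers stay frozen.
        mask = np.zeros(len(params), dtype=bool)
        mask[-4:] = True
        for _ in range(steps_per_phase):
            params -= lr * grad(params, mask)
    return params

p = layerwise_train()
print(f"parameters: {len(p)}, final cost: {loss(p):.2e}")  # cost near zero
```

Because each phase optimizes only a small, shallow block of parameters, each step touches few parameters at once, mirroring the property the paper exploits to keep gradient magnitudes large.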
Related papers
- Diffusion-Inspired Quantum Noise Mitigation in Parameterized Quantum Circuits [10.073911279652918]
We study the relationship between the quantum noise and the diffusion model.
We propose a novel diffusion-inspired learning approach to mitigate the quantum noise in the PQCs.
arXiv Detail & Related papers (2024-06-02T19:35:38Z)
- A Study on Optimization Techniques for Variational Quantum Circuits in Reinforcement Learning [2.7504809152812695]
Researchers are focusing on variational quantum circuits (VQCs).
VQCs are hybrid algorithms that combine a quantum circuit, adjustable through its parameters, with a classical optimization routine.
Recent studies have presented new ways of applying VQCs to reinforcement learning.
arXiv Detail & Related papers (2024-05-20T20:06:42Z)
- QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits [82.50620782471485]
QuantumSEA is an in-time sparse exploration for noise-adaptive quantum circuits.
It aims to achieve two key objectives: (1) implicit circuit capacity growth during training and (2) noise robustness.
Our method establishes state-of-the-art results with only half the number of quantum gates and 2x time saving of circuit executions.
arXiv Detail & Related papers (2024-01-10T22:33:00Z)
- Backpropagation scaling in parameterised quantum circuits [0.0]
We introduce circuits that are not known to be classically simulable and admit gradient estimation with significantly fewer circuits.
Specifically, these circuits allow for fast estimation of the gradient, higher order partial derivatives and the Fisher information matrix.
In a toy classification problem on 16 qubits, such circuits show competitive performance with other methods, while reducing the training cost by about two orders of magnitude.
arXiv Detail & Related papers (2023-06-26T18:00:09Z)
- Quantum circuit debugging and sensitivity analysis via local inversions [62.997667081978825]
We present a technique that pinpoints the sections of a quantum circuit that affect the circuit output the most.
We demonstrate the practicality and efficacy of the proposed technique by applying it to example algorithmic circuits implemented on IBM quantum machines.
arXiv Detail & Related papers (2022-04-12T19:39:31Z)
- Mode connectivity in the loss landscape of parameterized quantum circuits [1.7546369508217283]
Variational training of parameterized quantum circuits (PQCs) underpins many algorithms employed on near-term noisy intermediate-scale quantum (NISQ) devices.
We adapt the qualitative loss landscape characterization for neural networks introduced by Goodfellow et al. and Li et al. (2017), and the tests for mode connectivity used by Draxler et al. (2018), to study loss landscape features in PQC training.
arXiv Detail & Related papers (2021-11-09T18:28:46Z)
- Variational Quantum Optimization with Multi-Basis Encodings [62.72309460291971]
We introduce a new variational quantum algorithm that benefits from two innovations: multi-basis graph complexity and nonlinear activation functions.
These innovations result in increased optimization performance, a two-fold improvement of the effective optimization landscapes, and a reduction in measurement cost.
arXiv Detail & Related papers (2021-06-24T20:16:02Z)
- Quantum error reduction with deep neural network applied at the post-processing stage [0.0]
We propose a method for digital quantum simulation characterized by the periodic structure of quantum circuits consisting of Trotter steps.
A key ingredient of our approach is that it does not require any data from a classical simulator at the training stage.
The network is trained to transform data obtained from quantum hardware with an artificially increased number of Trotter steps.
arXiv Detail & Related papers (2021-05-17T13:04:26Z)
- FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]
We propose a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training [99.42912552638168]
Communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
In this work, we deduce the optimal condition of both binary and multi-level gradient quantization for any gradient distribution.
Based on the optimal condition, we develop two novel quantization schemes: biased BinGrad and unbiased ORQ for binary and multi-level gradient quantization respectively.
arXiv Detail & Related papers (2020-02-25T18:28:39Z)
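The gradient-quantization entries above both rely on mapping gradient entries to a small number of discrete levels while keeping the estimate unbiased. The snippet below is a generic sketch of stochastic multi-level quantization under that assumption; it is not the ORQ, BinGrad, or AdaFL construction from those papers, and all names are illustrative.

```python
import numpy as np

def quantize(grad, bits=2, rng=np.random.default_rng(0)):
    """Unbiased stochastic quantization of a gradient to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    scale = np.max(np.abs(grad))
    if scale == 0:
        return np.zeros_like(grad)
    # Map each entry from [-scale, scale] to [0, levels].
    normalized = (grad / scale + 1) / 2 * levels
    # Round up with probability equal to the fractional part, else round down,
    # so the rounded value equals `normalized` in expectation.
    floor = np.floor(normalized)
    rounded = floor + (rng.random(grad.shape) < normalized - floor)
    # Map the integer levels back to the original range.
    return (rounded / levels * 2 - 1) * scale

g = np.array([0.5, -0.25, 1.0, 0.0])
avg = np.mean([quantize(g, rng=np.random.default_rng(s)) for s in range(5000)],
              axis=0)
print(np.round(avg, 2))  # averages back to roughly g: unbiased in expectation
```

Each entry then needs only `bits` bits plus one shared scale per message, which is the communication saving these schemes trade against added gradient variance.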
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.