Mode connectivity in the loss landscape of parameterized quantum
circuits
- URL: http://arxiv.org/abs/2111.05311v1
- Date: Tue, 9 Nov 2021 18:28:46 GMT
- Title: Mode connectivity in the loss landscape of parameterized quantum
circuits
- Authors: Kathleen E. Hamilton and Emily Lynn and Raphael C. Pooser
- Abstract summary: Variational training of parameterized quantum circuits (PQCs) underpins many workflows employed on near-term noisy intermediate scale quantum (NISQ) devices.
We adapt the qualitative loss landscape characterization for neural networks introduced in \cite{goodfellow2014qualitatively,li2017visualizing} and tests for connectivity used in \cite{draxler2018essentially} to study the loss landscape features in PQC training.
- Score: 1.7546369508217283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational training of parameterized quantum circuits (PQCs) underpins many
workflows employed on near-term noisy intermediate scale quantum (NISQ)
devices. It is a hybrid quantum-classical approach that minimizes an associated
cost function in order to train a parameterized ansatz. In this paper we adapt
the qualitative loss landscape characterization for neural networks introduced
in \cite{goodfellow2014qualitatively,li2017visualizing} and tests for
connectivity used in \cite{draxler2018essentially} to study the loss landscape
features in PQC training. We present results for PQCs trained on a simple
regression task, using the bilayer circuit ansatz, which consists of
alternating layers of parameterized rotation gates and entangling gates.
Multiple circuits are trained with $3$ different batch gradient optimizers:
stochastic gradient descent, the quantum natural gradient, and Adam. We
identify large features in the landscape that can lead to faster convergence in
training workflows.
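The abstract names two concrete ingredients: the bilayer ansatz (alternating layers of parameterized rotation gates and entangling gates) and one-dimensional slices of the loss surface between trained parameter sets, in the style of the cited neural-network visualizations. The sketch below illustrates both on a toy regression task. It is a minimal illustration, not the paper's code: PennyLane is an assumed framework, and the qubit count, gate choices, targets, and the randomly drawn "trained" endpoints are placeholders.

```python
# Hedged sketch: a bilayer ansatz and a 1-D slice of the loss landscape along
# the line connecting two parameter vectors (stand-ins for two trained minima).
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def bilayer_circuit(params, x):
    # Encode the scalar input, then alternate rotation and entangling layers.
    for w in range(n_qubits):
        qml.RY(x, wires=w)
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(params[layer, w], wires=w)
        for w in range(n_qubits - 1):
            qml.CNOT(wires=[w, w + 1])
    return qml.expval(qml.PauliZ(0))

def mse_loss(params, xs, ys):
    preds = np.array([bilayer_circuit(params, x) for x in xs])
    return np.mean((preds - ys) ** 2)

rng = np.random.default_rng(0)
theta_a = rng.uniform(0, 2 * np.pi, size=(n_layers, n_qubits))  # placeholder minimum A
theta_b = rng.uniform(0, 2 * np.pi, size=(n_layers, n_qubits))  # placeholder minimum B
xs = np.linspace(-1, 1, 10)
ys = np.sin(np.pi * xs)  # toy regression targets (assumption)

# Evaluate the loss along the straight line between the two parameter sets.
for alpha in np.linspace(0.0, 1.0, 11):
    theta = (1 - alpha) * theta_a + alpha * theta_b
    print(f"alpha={alpha:.1f}  loss={mse_loss(theta, xs, ys):.4f}")
```

In connectivity tests of this kind, a flat or low-barrier loss along such a path suggests the two minima lie in a connected region of the landscape, while a large barrier indicates separated modes.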
Related papers
- Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly [4.646930308096446]
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
We propose Sequential Hamiltonian Assembly, which iteratively approximates the loss function using local components.
Our approach outperforms conventional parameter training by 29.99% and the empirical state of the art, Layerwise Learning, by 5.12% in mean accuracy.
arXiv Detail & Related papers (2023-12-09T11:47:32Z)
- Real-time error mitigation for variational optimization on quantum hardware [45.935798913942904]
We define a Real Time Quantum Error Mitigation (RTQEM) algorithm to assist in fitting functions on quantum chips with VQCs.
Our RTQEM routine can enhance VQCs' trainability by reducing the corruption of the loss function.
arXiv Detail & Related papers (2023-11-09T19:00:01Z)
- Backpropagation scaling in parameterised quantum circuits [0.0]
We introduce circuits that are not known to be classically simulable and admit gradient estimation with significantly fewer circuits.
Specifically, these circuits allow for fast estimation of the gradient, higher order partial derivatives and the Fisher information matrix.
In a toy classification problem on 16 qubits, such circuits show competitive performance with other methods, while reducing the training cost by about two orders of magnitude.
arXiv Detail & Related papers (2023-06-26T18:00:09Z)
- Quantum Federated Learning with Entanglement Controlled Circuits and Superposition Coding [44.89303833148191]
We develop a depth-controllable architecture of entangled slimmable quantum neural networks (eSQNNs).
We propose an entangled slimmable QFL (eSQFL) that communicates the superposition-coded parameters of eSQNNs.
In an image classification task, extensive simulations corroborate the effectiveness of eSQFL.
arXiv Detail & Related papers (2022-12-04T03:18:03Z)
- FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]
We propose a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z)
- A Statistical Framework for Low-bitwidth Training of Deep Neural Networks [70.77754244060384]
Fully quantized training (FQT) uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model.
One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties.
arXiv Detail & Related papers (2020-10-27T13:57:33Z)
- Characterizing the loss landscape of variational quantum circuits [77.34726150561087]
We introduce a way to compute the Hessian of the loss function of VQCs.
We show how this information can be interpreted and compared to classical neural networks (an illustrative Hessian sketch appears after this list).
arXiv Detail & Related papers (2020-08-06T17:48:12Z)
- Layerwise learning for quantum neural networks [7.2237324920669055]
We show a layerwise learning strategy for parametrized quantum circuits.
The circuit depth is incrementally grown during optimization, and only subsets of parameters are updated in each training step.
We demonstrate our approach on an image-classification task on handwritten digits, and show that layerwise learning attains an 8% lower generalization error on average (see the training-loop sketch after this list).
arXiv Detail & Related papers (2020-06-26T10:44:46Z)
- Large gradients via correlation in random parameterized quantum circuits [0.0]
The presence of exponentially vanishing gradients in cost function landscapes is an obstacle to optimization by gradient descent methods.
We prove that reducing the dimensionality of the parameter space can allow one to circumvent the vanishing gradient phenomenon.
arXiv Detail & Related papers (2020-05-25T16:15:53Z)
- Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training [99.42912552638168]
Communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
In this work, we deduce the optimal condition of both binary and multi-level gradient quantization for any gradient distribution.
Based on the optimal condition, we develop two novel quantization schemes: biased BinGrad and unbiased ORQ for binary and multi-level gradient quantization, respectively (a generic quantizer sketch appears after this list).
arXiv Detail & Related papers (2020-02-25T18:28:39Z)
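For the entry "Characterizing the loss landscape of variational quantum circuits" above, one minimal way to obtain curvature information is to estimate the Hessian of the cost numerically. The sketch below uses central finite differences on a stand-in loss; it is only illustrative and not necessarily that paper's procedure, which the summary does not detail.

```python
# Illustrative sketch: estimate the Hessian of a circuit loss by central
# finite differences. `toy_loss` is a placeholder for a VQC cost evaluated
# on a simulator or device.
import numpy as np

def toy_loss(theta):
    # Stand-in cost function of a flat parameter vector.
    return np.sin(theta[0]) * np.cos(theta[1]) + 0.1 * np.sum(theta ** 2)

def hessian(loss, theta, eps=1e-4):
    n = theta.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t_pp = theta.copy(); t_pp[i] += eps; t_pp[j] += eps
            t_pm = theta.copy(); t_pm[i] += eps; t_pm[j] -= eps
            t_mp = theta.copy(); t_mp[i] -= eps; t_mp[j] += eps
            t_mm = theta.copy(); t_mm[i] -= eps; t_mm[j] -= eps
            H[i, j] = (loss(t_pp) - loss(t_pm) - loss(t_mp) + loss(t_mm)) / (4 * eps ** 2)
    return H

theta = np.array([0.3, 1.2])
H = hessian(toy_loss, theta)
# The Hessian's eigenvalues summarize local curvature of the landscape.
print("eigenvalues of the Hessian:", np.linalg.eigvalsh(H))
```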
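The "Layerwise learning for quantum neural networks" entry describes growing the circuit depth during optimization while updating only a subset of parameters at each step. The sketch below shows that training-loop structure with a toy cost and a simple finite-difference update; the schedule, update rule, and loss are assumptions for illustration, not the cited paper's exact method.

```python
# Minimal sketch of a layerwise training schedule: the ansatz is grown one
# layer at a time, and only the most recently added layer is optimized at
# each stage. In the cited work the layers are blocks of a PQC.
import numpy as np

rng = np.random.default_rng(1)
n_params_per_layer = 4

def toy_loss(layers):
    # Placeholder cost: in practice, run the circuit built from `layers`.
    flat = np.concatenate(layers)
    return float(np.sum(np.sin(flat) ** 2))

def numerical_grad(layers, idx, eps=1e-5):
    # Gradient of the loss with respect to layer `idx` only.
    grad = np.zeros_like(layers[idx])
    for k in range(layers[idx].size):
        shifted = [l.copy() for l in layers]
        shifted[idx][k] += eps
        grad[k] = (toy_loss(shifted) - toy_loss(layers)) / eps
    return grad

layers = []                       # start with a zero-depth circuit
max_depth, steps_per_stage, lr = 4, 50, 0.1
for depth in range(max_depth):
    layers.append(rng.normal(0, 0.1, n_params_per_layer))  # add a new layer
    trainable = [len(layers) - 1]                           # newest layer only
    for _ in range(steps_per_stage):
        for idx in trainable:
            layers[idx] -= lr * numerical_grad(layers, idx)
    print(f"depth={len(layers)}  loss={toy_loss(layers):.4f}")
```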
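The two classical entries above concern low-bitwidth and quantized-gradient training. As a generic illustration of unbiased gradient quantization, the sketch below maps each gradient coordinate to plus or minus a per-tensor scale with probabilities chosen so the expectation recovers the original gradient; it is not the BinGrad or ORQ scheme named in the entry, whose constructions are not given here.

```python
# Generic unbiased stochastic binary gradient quantizer: each coordinate is
# mapped to +s or -s (s = per-tensor scale) with probabilities chosen so
# that E[q] equals the original gradient.
import numpy as np

def binary_quantize(grad, rng):
    s = np.max(np.abs(grad))            # per-tensor scale
    if s == 0.0:
        return np.zeros_like(grad)
    p_plus = (1.0 + grad / s) / 2.0     # P(q = +s), giving E[q] = grad
    return np.where(rng.random(grad.shape) < p_plus, s, -s)

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
avg = np.mean([binary_quantize(g, rng) for _ in range(2000)], axis=0)
print("max abs deviation of E[q] from g:", np.max(np.abs(avg - g)))  # small: unbiased
```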
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.