Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly
- URL: http://arxiv.org/abs/2312.05552v1
- Date: Sat, 9 Dec 2023 11:47:32 GMT
- Title: Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly
- Authors: Jonas Stein, Navid Roshani, Maximilian Zorn, Philipp Altmann, Michael Kölle, Claudia Linnhoff-Popien
- Abstract summary: A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
We propose Sequential Hamiltonian Assembly, which iteratively approximates the loss function using local components.
Our approach outperforms conventional parameter training by 29.99% and the empirical state of the art, Layerwise Learning, by 5.12% in mean accuracy.
- Score: 4.646930308096446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A central challenge in quantum machine learning is the design and training of
parameterized quantum circuits (PQCs). As in deep learning, vanishing
gradients pose immense problems for the trainability of PQCs and have been
shown to arise from a multitude of sources. One such cause is non-local loss
functions, which demand the measurement of a large subset of the involved
qubits. To facilitate parameter training for quantum applications using
global loss functions, we propose Sequential Hamiltonian Assembly, which
iteratively approximates the loss function using local components. Aiming for
a proof of principle, we evaluate our approach on the Graph Coloring problem
with a Variational Quantum Eigensolver (VQE). Simulation results show that
our approach outperforms conventional parameter training by 29.99% and the
empirical state of the art, Layerwise Learning, by 5.12% in mean accuracy.
This paves the way towards locality-aware learning techniques, allowing
vanishing gradients to be evaded for a large class of practically relevant
problems.
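As a rough illustration of the idea, the following minimal sketch assembles a toy diagonal Hamiltonian term by term and warm-starts a VQE optimization at each stage. It swaps the paper's graph-coloring Hamiltonian for a simple Ising model on a 4-node ring and uses exact state-vector simulation; the ansatz, optimizer, and assembly schedule are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of Sequential Hamiltonian Assembly (SHA) for a toy VQE.
# Assumed setup: 4 qubits, H = sum of Z_i Z_j terms over ring edges,
# a hardware-efficient RY + CZ ansatz, exact state-vector simulation.
import numpy as np
from functools import reduce
from scipy.optimize import minimize

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]          # toy problem graph
I2, Z = np.eye(2), np.diag([1.0, -1.0])

def zz_diag(i, j):
    """Diagonal of the local operator Z_i Z_j on n qubits."""
    ops = [Z if q in (i, j) else I2 for q in range(n)]
    return np.diag(reduce(np.kron, ops))

# Precompute the diagonal phases of a ring of CZ entangling gates.
cz = np.ones(2 ** n)
for i, j in edges:
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            cz[b] *= -1

def ansatz_state(theta, layers=2):
    """Hardware-efficient ansatz: RY rotations followed by CZ entanglers."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for layer in theta.reshape(layers, n):
        for q, t in enumerate(layer):
            ry = np.array([[np.cos(t / 2), -np.sin(t / 2)],
                           [np.sin(t / 2),  np.cos(t / 2)]])
            state = reduce(np.kron, [ry if k == q else I2 for k in range(n)]) @ state
        state = cz * state
    return state

def energy(theta, h_diag):
    psi = ansatz_state(theta)
    return float(psi @ (h_diag * psi))

# SHA: optimize against a growing sum of local terms, warm-starting each stage
# with the parameters found for the previous, smaller Hamiltonian.
terms = [zz_diag(i, j) for i, j in edges]
theta = 0.1 * np.random.randn(2 * n)
for k in range(1, len(terms) + 1):
    partial_h = sum(terms[:k])                    # partially assembled Hamiltonian
    theta = minimize(energy, theta, args=(partial_h,), method="COBYLA").x
print("final energy:", energy(theta, sum(terms)))
```

Each intermediate Hamiltonian is more local than the full one, so its loss landscape is easier to navigate; the final stage optimizes against the complete, global objective from a well-informed starting point.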
Related papers
- Sequential Hamiltonian Assembly: Enhancing the training of combinatorial optimization problems on quantum computers [4.385485960663339]
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
Much like in deep learning, vanishing gradients pose significant obstacles to the trainability of PQCs, arising from various sources.
We propose Sequential Hamiltonian Assembly (SHA) to address this issue and facilitate parameter training for quantum applications using global loss functions.
arXiv Detail & Related papers (2024-08-08T20:32:18Z)
- Alleviating Barren Plateaus in Parameterized Quantum Machine Learning Circuits: Investigating Advanced Parameter Initialization Strategies [4.169604865257672]
When initialized with random parameter values, parameterized quantum circuits (PQCs) often exhibit barren plateaus (BPs).
BPs are vanishing gradients that occur as the number of qubits increases and hinder optimization in quantum algorithms.
In this paper, we analyze the impact of state-of-the-art parameter initialization strategies from classical machine learning in random PQCs.
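A hedged sketch of the kind of experiment such a study runs: estimate the variance of a parameter-shift gradient for a toy PQC under uniform random initialization versus a small-variance Gaussian initialization. The circuit, depth, and observable below are illustrative assumptions, not the paper's benchmark.

```python
# Compare gradient variance under two parameter initialization strategies
# for a toy RY + CZ circuit measured with the local observable Z on qubit 0.
import numpy as np
from functools import reduce

n, layers, samples = 6, 6, 200
I2, Z = np.eye(2), np.diag([1.0, -1.0])
h_diag = np.diag(reduce(np.kron, [Z] + [I2] * (n - 1)))   # observable Z_0

cz = np.ones(2 ** n)                                      # ring of CZ entanglers
for i in range(n):
    j = (i + 1) % n
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            cz[b] *= -1

def cost(theta):
    state = np.zeros(2 ** n); state[0] = 1.0
    for layer in theta.reshape(layers, n):
        for q, t in enumerate(layer):
            ry = np.array([[np.cos(t / 2), -np.sin(t / 2)],
                           [np.sin(t / 2),  np.cos(t / 2)]])
            state = reduce(np.kron, [ry if k == q else I2 for k in range(n)]) @ state
        state = cz * state
    return float(state @ (h_diag * state))

def grad0(theta):
    """Parameter-shift derivative w.r.t. the first parameter (exact for RY)."""
    shift = np.zeros_like(theta); shift[0] = np.pi / 2
    return 0.5 * (cost(theta + shift) - cost(theta - shift))

inits = {"uniform [-pi, pi]": lambda: np.random.uniform(-np.pi, np.pi, n * layers),
         "Gaussian (0, 0.1)": lambda: np.random.normal(0.0, 0.1, n * layers)}
for name, draw in inits.items():
    grads = [grad0(draw()) for _ in range(samples)]
    print(name, "-> gradient variance:", np.var(grads))
```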
arXiv Detail & Related papers (2023-11-22T08:07:53Z)
- Solving Oscillation Problem in Post-Training Quantization Through a Theoretical Perspective [74.48124653728422]
Post-training quantization (PTQ) is widely regarded as one of the most efficient compression methods in practice.
We argue that oscillation is an overlooked problem in PTQ methods.
arXiv Detail & Related papers (2023-03-21T14:52:52Z)
- End-to-end resource analysis for quantum interior point methods and portfolio optimization [63.4863637315163]
We provide a complete quantum circuit-level description of the algorithm from problem input to problem output.
We report the number of logical qubits and the quantity/depth of non-Clifford T-gates needed to run the algorithm.
arXiv Detail & Related papers (2022-11-22T18:54:48Z)
- Quantum circuit architecture search on a superconducting processor [56.04169357427682]
Variational quantum algorithms (VQAs) have shown strong evidence of providing provable computational advantages for diverse fields such as finance, machine learning, and chemistry.
However, the ansatz exploited in modern VQAs is incapable of balancing the tradeoff between expressivity and trainability.
We demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique to enhance VQAs on an 8-qubit superconducting quantum processor.
arXiv Detail & Related papers (2022-01-04T01:53:42Z)
- Mode connectivity in the loss landscape of parameterized quantum circuits [1.7546369508217283]
Variational training of parameterized quantum circuits (PQCs) underpins many algorithms employed on near-term noisy intermediate-scale quantum (NISQ) devices.
We adapt the qualitative loss landscape characterization for neural networks introduced by Goodfellow et al. and Li et al. (2017), and the tests for connectivity used by Draxler et al. (2018), to study loss landscape features in PQC training.
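A minimal sketch of a linear mode-connectivity probe, with a stand-in toy loss in place of a PQC loss: train two parameter vectors independently, then evaluate the loss barrier along the straight path between them.

```python
# Linear mode-connectivity probe: measure the loss barrier on the straight
# path between two independently trained minima. The loss is a stand-in toy;
# in the paper's setting it would be a PQC training loss.
import numpy as np

def loss(theta):
    # toy non-convex loss with several local minima
    return float(np.sum(np.sin(theta) ** 2 + 0.1 * np.cos(3 * theta)))

def train(theta, steps=500, lr=0.05, eps=1e-5):
    """Crude finite-difference gradient descent to reach a local minimum."""
    for _ in range(steps):
        g = np.array([(loss(theta + eps * e) - loss(theta - eps * e)) / (2 * eps)
                      for e in np.eye(len(theta))])
        theta = theta - lr * g
    return theta

rng = np.random.default_rng(0)
theta_a = train(rng.uniform(-np.pi, np.pi, 8))
theta_b = train(rng.uniform(-np.pi, np.pi, 8))

# Loss along the linear path theta(t) = (1 - t) * theta_a + t * theta_b.
ts = np.linspace(0.0, 1.0, 21)
path = [loss((1 - t) * theta_a + t * theta_b) for t in ts]
barrier = max(path) - max(path[0], path[-1])
print("endpoint losses:", path[0], path[-1], "barrier height:", barrier)
```

A barrier near zero suggests the two minima lie in a connected low-loss region; a large barrier indicates isolated basins.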
arXiv Detail & Related papers (2021-11-09T18:28:46Z)
- FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]
We propose FLIP, a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z)
- Gradient-free quantum optimization on NISQ devices [0.0]
We consider recent advances in weight-agnostic learning and propose a strategy that addresses the trade-off between finding appropriate circuit architectures and parameter tuning.
We investigate the use of NEAT-inspired algorithms, which evaluate circuits via genetic competition and thus circumvent issues arising from excessive numbers of parameters.
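As a heavily simplified stand-in for such genetic competition (not the NEAT algorithm itself), the sketch below runs a (1+8) evolutionary loop that mutates both a discrete entangler mask and the rotation angles of a toy 3-qubit circuit, selecting by energy.

```python
# Gradient-free, evolution-style search over both circuit structure (which CZ
# entanglers to keep) and rotation angles; a toy stand-in for NEAT-style search.
import numpy as np
from functools import reduce

n = 3
pairs = [(0, 1), (1, 2), (0, 2)]
I2, Z = np.eye(2), np.diag([1.0, -1.0])
h_diag = np.diag(reduce(np.kron, [Z, Z, I2]) + reduce(np.kron, [I2, Z, Z]))

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def energy(angles, mask):
    """Two RY layers with an evolvable set of CZ entanglers in between."""
    psi = np.zeros(2 ** n); psi[0] = 1.0
    psi = reduce(np.kron, [ry(t) for t in angles[:n]]) @ psi
    for (i, j), on in zip(pairs, mask):
        if on:                                   # CZ applied as diagonal phases
            for b in range(2 ** n):
                if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
                    psi[b] *= -1
    psi = reduce(np.kron, [ry(t) for t in angles[n:]]) @ psi
    return float(psi @ (h_diag * psi))

rng = np.random.default_rng(1)
angles = rng.uniform(-np.pi, np.pi, 2 * n)
mask = rng.integers(0, 2, len(pairs)).astype(bool)
best = energy(angles, mask)
for _ in range(200):                             # (1+8) genetic competition
    for _ in range(8):
        a = angles + 0.2 * rng.normal(size=2 * n)     # mutate angles
        m = mask ^ (rng.random(len(pairs)) < 0.1)     # occasionally toggle a gate
        e = energy(a, m)
        if e < best:
            angles, mask, best = a, m, e
print("best energy:", best, "entangler mask:", mask)
```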
arXiv Detail & Related papers (2020-12-23T10:24:54Z)
- A Statistical Framework for Low-bitwidth Training of Deep Neural Networks [70.77754244060384]
Fully quantized training (FQT) uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model.
One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties.
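A small sketch of the central primitive: quantizing gradients to a low bitwidth with stochastic rounding, which keeps the quantizer unbiased in expectation. The bitwidth and per-tensor scaling scheme below are illustrative assumptions, not the paper's exact framework.

```python
# Low-bitwidth gradient quantization with stochastic rounding.
import numpy as np

def quantize_grad(x, bits=4):
    """Quantize x to `bits` bits with per-tensor scaling and stochastic
    rounding. Stochastic rounding keeps the quantizer unbiased: E[q(x)] = x."""
    levels = 2 ** (bits - 1) - 1                     # symmetric signed range
    scale = max(np.max(np.abs(x)) / levels, 1e-12)
    y = x / scale                                    # y lies in [-levels, levels]
    lo = np.floor(y)
    q = lo + (np.random.rand(*x.shape) < (y - lo))   # round up w.p. frac(y)
    return np.clip(q, -levels, levels) * scale

g = np.random.randn(6)
print("full-precision: ", g)
print("4-bit quantized:", quantize_grad(g))
print("mean of 10000 quantizations (unbiasedness check):",
      np.mean([quantize_grad(g) for _ in range(10000)], axis=0))
```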
arXiv Detail & Related papers (2020-10-27T13:57:33Z)
- Characterizing the loss landscape of variational quantum circuits [77.34726150561087]
We introduce a way to compute the Hessian of the loss function of VQCs.
We show how this information can be interpreted and compared to classical neural networks.
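The paper's exact construction isn't reproduced here, but a parameter-shift Hessian, which is exact for RY-type gates whose cost is trigonometric in each parameter, can be sketched for a tiny two-qubit circuit as follows.

```python
# Parameter-shift Hessian of a tiny two-qubit VQC cost (RY gates, ZZ observable).
import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
obs = np.kron(Z, Z)                       # observable Z tensor Z

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def cost(theta):
    psi = np.zeros(4); psi[0] = 1.0
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = CNOT @ psi
    return float(psi @ (obs @ psi))

def hessian(theta, s=np.pi / 2):
    """Second derivatives via the shift rule applied twice (exact for RY)."""
    d = len(theta)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * s, np.eye(d)[j] * s
            H[i, j] = 0.25 * (cost(theta + ei + ej) - cost(theta + ei - ej)
                              - cost(theta - ei + ej) + cost(theta - ei - ej))
    return H

theta = np.array([0.3, -1.2])
print(hessian(theta))                     # eigenvalues reveal local curvature
```

The eigenvalues of this Hessian can then be compared against those of classical neural network losses, in the spirit of the comparison the abstract describes.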
arXiv Detail & Related papers (2020-08-06T17:48:12Z)