Sequential Hamiltonian Assembly: Enhancing the training of combinatorial optimization problems on quantum computers
- URL: http://arxiv.org/abs/2408.04751v1
- Date: Thu, 8 Aug 2024 20:32:18 GMT
- Title: Sequential Hamiltonian Assembly: Enhancing the training of combinatorial optimization problems on quantum computers
- Authors: Navid Roshani, Jonas Stein, Maximilian Zorn, Michael Kölle, Philipp Altmann, Claudia Linnhoff-Popien
- Abstract summary: A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
Much like in deep learning, vanishing gradients pose significant obstacles to the trainability of PQCs, arising from various sources.
We propose Sequential Hamiltonian Assembly (SHA) to address this issue and facilitate parameter training for quantum applications using global loss functions.
- Score: 4.385485960663339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs). Much like in deep learning, vanishing gradients pose significant obstacles to the trainability of PQCs, arising from various sources. One such source is the presence of non-local loss functions, which require the measurement of a large subset of qubits involved. To address this issue and facilitate parameter training for quantum applications using global loss functions, we propose Sequential Hamiltonian Assembly (SHA). SHA iteratively approximates the loss by assembling it from local components. To further demonstrate the feasibility of our approach, we extend our previous case study by introducing a new partitioning strategy, a new merger between QAOA and SHA, and an evaluation of SHA on the Max-Cut optimization problem. Simulation results show that SHA outperforms conventional parameter training by 43.89% and the empirical state-of-the-art, Layer-VQE, by 29.08% in the mean accuracy for Max-Cut. This paves the way for locality-aware learning techniques, mitigating vanishing gradients for a large class of practically relevant problems.
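To make the loss-assembly idea concrete, the toy sketch below trains on a Max-Cut objective that is built up stage by stage from local edge terms, warm-starting each stage from the previous parameters. This is an illustration of the SHA concept, not the paper's implementation: a classical product-state surrogate stands in for the PQC so the example runs without a quantum SDK, and all function names, the partitioning schedule, and the gradient-ascent settings are illustrative assumptions.

```python
import math
import random

def expected_cut(thetas, edges):
    # Classical product-state surrogate for a PQC: qubit i is measured as 1
    # with probability p_i = sin^2(theta_i / 2); the expected cut value is a
    # sum of local edge terms, mirroring how SHA decomposes a global loss.
    p = [math.sin(t / 2) ** 2 for t in thetas]
    return sum(p[i] * (1 - p[j]) + p[j] * (1 - p[i]) for i, j in edges)

def optimize(thetas, edges, steps=200, lr=0.3, eps=1e-4):
    # Plain finite-difference gradient ascent on the (partial) objective.
    for _ in range(steps):
        base = expected_cut(thetas, edges)
        grad = []
        for k in range(len(thetas)):
            shifted = list(thetas)
            shifted[k] += eps
            grad.append((expected_cut(shifted, edges) - base) / eps)
        thetas = [t + lr * g for t, g in zip(thetas, grad)]
    return thetas

def sha_train(n_qubits, edges, n_stages=3):
    # Sequential Hamiltonian Assembly, schematically: partition the local
    # terms (edges), then optimize on cumulatively assembled partial losses,
    # warm-starting every stage from the previous stage's parameters.
    random.seed(0)
    thetas = [random.uniform(0.2, math.pi - 0.2) for _ in range(n_qubits)]
    chunk = math.ceil(len(edges) / n_stages)
    for s in range(1, n_stages + 1):
        partial = edges[: min(s * chunk, len(edges))]
        thetas = optimize(thetas, partial)
    return thetas

# 4-cycle graph; the maximum cut (alternating partition) has value 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
thetas = sha_train(4, edges)
print(f"assembled objective after final stage: {expected_cut(thetas, edges):.2f}")
```

In this sketch the final stage trains on the fully assembled objective, so the earlier stages act purely as an initialization schedule; that is the sense in which SHA targets trainability rather than changing the problem being solved.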
Related papers
- MG-Net: Learn to Customize QAOA with Circuit Depth Awareness [51.78425545377329]
Quantum Approximate Optimization Algorithm (QAOA) and its variants exhibit immense potential in tackling optimization challenges.
The requisite circuit depth for satisfactory performance is problem-specific and often exceeds the maximum capability of current quantum devices.
We introduce the Mixer Generator Network (MG-Net), a unified deep learning framework adept at dynamically formulating optimal mixer Hamiltonians.
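For context, the standard QAOA ansatz alternates a problem (cost) Hamiltonian with a fixed transverse-field mixer; MG-Net's premise is that the mixer can instead be formulated per problem. The standard form is:

```latex
|\psi(\boldsymbol{\gamma}, \boldsymbol{\beta})\rangle
  = \prod_{l=1}^{p} e^{-i \beta_l H_M}\, e^{-i \gamma_l H_C}\, |+\rangle^{\otimes n},
\qquad
H_M = \sum_{j=1}^{n} X_j ,
```

where $H_C$ encodes the optimization problem, $X_j$ is the Pauli-$X$ operator on qubit $j$, and $p$ is the circuit depth whose problem-specific growth the entry above refers to.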
arXiv Detail & Related papers (2024-09-27T12:28:18Z)
- KANQAS: Kolmogorov-Arnold Network for Quantum Architecture Search [0.0]
We evaluate the practicality of Kolmogorov-Arnold Networks (KANs) in quantum state preparation and quantum chemistry.
In quantum state preparation, our results show that, in a noiseless scenario, the probability of success and the number of optimal quantum circuit configurations for generating multi-qubit maximally entangled states are $2\times$ to $5\times$ higher than for Multi-Layer Perceptrons (MLPs).
In tackling quantum chemistry problems, we enhance the recently proposed QAS algorithm by integrating curriculum reinforcement learning with a KAN structure instead of the traditional one.
arXiv Detail & Related papers (2024-06-25T15:17:01Z)
- ML-QLS: Multilevel Quantum Layout Synthesis [6.706813469929441]
We present ML-QLS, the first multilevel quantum layout tool with a scalable refinement operation integrated with novel cost functions and clustering strategies.
Our experimental results demonstrate that ML-QLS can scale up to problems involving hundreds of qubits and achieve a remarkable 52% performance improvement over leading QLS tools for large circuits.
arXiv Detail & Related papers (2024-05-28T17:10:20Z)
- Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly [4.646930308096446]
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs).
We propose Sequential Hamiltonian Assembly, which iteratively approximates the loss function using local components.
Our approach outperforms conventional parameter training by 29.99% and the empirical state of the art, Layerwise Learning, by 5.12% in the mean accuracy.
arXiv Detail & Related papers (2023-12-09T11:47:32Z)
- Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z)
- Calculating the ground state energy of benzene under spatial deformations with noisy quantum computing [0.0]
We calculate the ground state energy of benzene under spatial deformations by using the variational quantum eigensolver (VQE).
By combining our advanced simulation platform with real quantum computers, we provide an analysis of how the noise inherent to quantum computers affects the results.
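The VQE estimate referenced in this entry rests on the variational principle: for any trial state prepared by a parameterized circuit,

```latex
E(\boldsymbol{\theta})
  = \langle \psi(\boldsymbol{\theta}) \,|\, H \,|\, \psi(\boldsymbol{\theta}) \rangle
  \;\ge\; E_0 ,
```

so minimizing $E(\boldsymbol{\theta})$ over the parameters yields an upper bound on the true ground state energy $E_0$ of the molecular Hamiltonian $H$.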
arXiv Detail & Related papers (2022-03-10T10:28:59Z)
- Scaling Quantum Approximate Optimization on Near-term Hardware [49.94954584453379]
We quantify scaling of the expected resource requirements by optimized circuits for hardware architectures with varying levels of connectivity.
We show that the number of measurements, and hence the total time to solution, grows exponentially in problem size and problem graph degree.
These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
arXiv Detail & Related papers (2022-01-06T21:02:30Z)
- Quantum circuit architecture search on a superconducting processor [56.04169357427682]
Variational quantum algorithms (VQAs) have shown strong evidence of achieving provable computational advantages in diverse fields such as finance, machine learning, and chemistry.
However, the ansatz exploited in modern VQAs is incapable of balancing the tradeoff between expressivity and trainability.
We demonstrate the first proof-of-principle experiment of applying an efficient automatic ansatz design technique to enhance VQAs on an 8-qubit superconducting quantum processor.
arXiv Detail & Related papers (2022-01-04T01:53:42Z)
- Mode connectivity in the loss landscape of parameterized quantum circuits [1.7546369508217283]
Variational training of parameterized quantum circuits (PQCs) underpins many algorithms employed on near-term noisy intermediate-scale quantum (NISQ) devices.
We adapt the qualitative loss landscape characterization for neural networks introduced by Goodfellow et al. and Li et al. (2017), and the tests for connectivity used by Draxler et al. (2018), to study the loss landscape features in PQC training.
arXiv Detail & Related papers (2021-11-09T18:28:46Z)
- FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]
We propose FLIP, a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z)
- Characterizing the loss landscape of variational quantum circuits [77.34726150561087]
We introduce a way to compute the Hessian of the loss function of VQCs.
We show how this information can be interpreted and compared to classical neural networks.
arXiv Detail & Related papers (2020-08-06T17:48:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.