Slice-Wise Initial State Optimization to Improve Cost and Accuracy of the VQE on Lattice Models
- URL: http://arxiv.org/abs/2509.13034v1
- Date: Tue, 16 Sep 2025 12:52:23 GMT
- Title: Slice-Wise Initial State Optimization to Improve Cost and Accuracy of the VQE on Lattice Models
- Authors: Cedric Gaberle, Manpreet Singh Jattana
- Abstract summary: We propose an optimization method for the Variational Quantum Eigensolver (VQE) that combines adaptive and physics-inspired ansatz design. This quasi-dynamical approach preserves expressivity and hardware efficiency while avoiding the overhead of operator selection. Benchmarks on one- and two-dimensional Heisenberg and Hubbard models with up to 20 qubits show improved fidelities, reduced function evaluations, or both, compared to fixed-layer VQE.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an optimization method for the Variational Quantum Eigensolver (VQE) that combines adaptive and physics-inspired ansatz design. Instead of optimizing multiple layers simultaneously, the ansatz is built incrementally from its operator subsets, enabling subspace optimization that provides better initialization for subsequent steps. This quasi-dynamical approach preserves expressivity and hardware efficiency while avoiding the overhead of operator selection associated with adaptive methods. Benchmarks on one- and two-dimensional Heisenberg and Hubbard models with up to 20 qubits show improved fidelities, reduced function evaluations, or both, compared to fixed-layer VQE. The method is simple, cost-effective, and particularly well-suited for current noisy intermediate-scale quantum (NISQ) devices.
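The incremental, slice-wise construction described in the abstract can be illustrated with a minimal numpy/scipy sketch. This is not the paper's actual ansatz: the two-site Heisenberg Hamiltonian, the choice of generators, and the Powell optimizer are all illustrative assumptions; the sketch only shows the warm-start idea of optimizing each newly added operator subset starting from the previously converged parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

# Toy two-site Heisenberg Hamiltonian H = XX + YY + ZZ
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)

# One "slice" of the ansatz: rotations generated by XY, YX, and ZZ
# (an illustrative operator subset, not the paper's construction)
gens = [np.kron(X, Y), np.kron(Y, X), np.kron(Z, Z)]

def rot(G, theta):
    # exp(-i * theta * G) for a Hermitian generator G, via eigendecomposition
    w, V = np.linalg.eigh(G)
    return (V * np.exp(-1j * theta * w)) @ V.conj().T

def energy(params):
    psi = np.zeros(4, dtype=complex)
    psi[1] = 1.0  # reference state |01>
    for k, th in enumerate(params):
        psi = rot(gens[k % len(gens)], th) @ psi
    return float(np.real(psi.conj() @ H @ psi))

# Slice-wise optimization: add one slice at a time and re-optimize,
# warm-starting from the previously converged parameters.
params = np.zeros(0)
for _ in range(3):
    params = np.concatenate([params, np.zeros(len(gens))])
    params = minimize(energy, params, method="Powell").x

print(energy(params))            # close to the exact ground energy
print(np.linalg.eigvalsh(H)[0])  # -3 for the two-site Heisenberg model
```

Because each new slice starts at zero angles, the initial energy of every stage equals the previous stage's optimum, so later optimizations can only improve on earlier ones.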
Related papers
- Benchmarking Lie-Algebraic Pretraining and Non-Variational QWOA for the MaxCut Problem [4.103893081207555]
This paper provides a comparative performance analysis of two strategies designed to improve trainability. We benchmark both methods on the unweighted MaxCut problem using a circuit depth of $p = 256$ across 200 Erdős-Rényi and 200 3-regular graphs. Both approaches significantly improve upon the standard randomly initialized QWOA: NV-QWOA attains a mean approximation ratio of 98.9% in just 60 iterations, while the Lie-algebraic pretrained QWOA improves to 77.71% after 500 iterations.
arXiv Detail & Related papers (2025-12-28T09:42:02Z) - Beyond Outliers: A Study of Optimizers Under Quantization [82.75879062804955]
We study the impact of optimizer choice on model robustness under quantization. We evaluate how model performance degrades when models are trained with different optimizer baselines, and derive scaling laws for quantization-aware training.
arXiv Detail & Related papers (2025-09-27T21:15:22Z) - QUBO-based training for VQAs on Quantum Annealers [0.06372261626436675]
Quantum annealers provide an effective framework for solving large-scale optimization problems. This work presents a novel methodology for training Variational Quantum Algorithms.
arXiv Detail & Related papers (2025-09-01T22:57:49Z) - Scalable Min-Max Optimization via Primal-Dual Exact Pareto Optimization [66.51747366239299]
We propose a smooth variant of the min-max problem based on the augmented Lagrangian. The proposed algorithm scales better with the number of objectives than subgradient-based strategies.
arXiv Detail & Related papers (2025-03-16T11:05:51Z) - Memory-Efficient Optimization with Factorized Hamiltonian Descent [11.01832755213396]
We introduce H-Fac, a novel adaptive optimizer that incorporates a memory-efficient factorization approach to address this challenge.
By employing a rank-1 parameterization for both momentum and scaling parameter estimators, H-Fac reduces memory costs to a sublinear level.
We develop our algorithms based on principles derived from Hamiltonian dynamics, providing robust theoretical underpinnings in optimization dynamics and convergence guarantees.
arXiv Detail & Related papers (2024-06-14T12:05:17Z) - Reducing measurement costs by recycling the Hessian in adaptive variational quantum algorithms [0.0]
We propose an improved quasi-Newton optimization protocol specifically tailored to adaptive VQAs.
We implement a quasi-Newton algorithm where an approximation to the inverse Hessian matrix is continuously built and grown across the iterations of an adaptive VQA.
arXiv Detail & Related papers (2024-01-10T14:08:04Z) - Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE To Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
arXiv Detail & Related papers (2023-07-02T18:16:06Z) - Quantum approximate optimization via learning-based adaptive optimization [5.399532145408153]
The quantum approximate optimization algorithm (QAOA) is designed to solve combinatorial optimization problems.
Our results demonstrate that the algorithm greatly outperforms conventional approaches in terms of speed, accuracy, efficiency and stability.
This work helps to unlock the full power of QAOA and paves the way toward achieving quantum advantage in practical classical tasks.
arXiv Detail & Related papers (2023-03-27T02:14:56Z) - Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization [108.35402316802765]
We propose a new first-order optimization algorithm -- AcceleratedGradient-OptimisticGradient (AG-OG) Ascent.
We show that AG-OG achieves the optimal convergence rate (up to a constant) for a variety of settings.
We extend our algorithm to the stochastic setting and achieve the optimal convergence rate in both bi-SC-SC and bi-C-SC settings.
arXiv Detail & Related papers (2022-10-31T17:59:29Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a widely used tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer with an efficient gradient estimator, named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for optimizing the parameterized ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as Quantum Approximate Optimization Algorithm (QAOA) are considered as one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
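The Cross-Entropy method used in the last entry above admits a compact sketch: sample candidate parameter vectors from a Gaussian, keep the best ("elite") fraction, and refit the sampling distribution to the elites. The toy quadratic objective below stands in for a QAOA energy landscape, and all names and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=64, elite_frac=0.2,
                           iters=30, seed=0):
    # Repeatedly sample candidates, keep the best "elite" fraction,
    # and refit the Gaussian sampling distribution to the elites.
    rng = np.random.default_rng(seed)
    n_elite = max(2, int(elite_frac * n_samples))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        scores = np.array([f(x) for x in samples])
        elites = samples[np.argsort(scores)[:n_elite]]
        mu = elites.mean(axis=0)
        sigma = elites.std(axis=0) + 1e-6  # floor avoids premature collapse
    return mu

# Toy quadratic landscape standing in for a QAOA parameter landscape
f = lambda x: float(np.sum((x - 1.5) ** 2))
best = cross_entropy_minimize(f, mu=np.zeros(2), sigma=np.ones(2))
```

The refit step shrinks the search distribution around good regions of the landscape, which is what makes the method useful for shaping rugged hyperparameter landscapes.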
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.