Pruned-ADAPT-VQE: compacting molecular ansätze by removing irrelevant operators
- URL: http://arxiv.org/abs/2504.04652v1
- Date: Mon, 07 Apr 2025 00:54:31 GMT
- Title: Pruned-ADAPT-VQE: compacting molecular ansätze by removing irrelevant operators
- Authors: Nonia Vaquero-Sabater, Abel Carreras, David Casanova
- Abstract summary: ADAPT-VQE is a derivative-assembled pseudo-Trotter variational quantum eigensolver. It selects operators based on their gradient, constructing ansätze that continuously evolve to match the energy landscape. We propose an automated, cost-free refinement method that removes unnecessary operators from the ansatz.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adaptive derivative-assembled pseudo-Trotter variational quantum eigensolver (ADAPT-VQE) is one of the most widely used algorithms for electronic structure calculations. It adaptively selects operators based on their gradient, constructing ansätze that continuously evolve to match the energy landscape, helping avoid local traps and barren plateaus. However, this flexibility in reoptimization can lead to the inclusion of redundant or inefficient operators that have almost zero amplitude, barely contributing to the ansatz. We identify three phenomena responsible for the appearance of these operators: poor operator selection, operator reordering, and fading operators. In this work, we propose an automated, cost-free refinement method that removes unnecessary operators from the ansatz without disrupting convergence. Our approach evaluates each operator after ADAPT-VQE optimization by using a function that considers both its amplitude and position in the ansatz, striking a balance between eliminating low-amplitude operators while preserving the natural reduction of coefficients as the ansatz grows. Additionally, a dynamic threshold based on the amplitudes of recent operators enables efficient convergence. We apply this method to several molecular systems and find that it reduces ansatz size and accelerates convergence, particularly in cases with flat energy landscapes. The refinement process incurs no additional computational cost and consistently improves or maintains ADAPT-VQE performance.
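The pruning rule described in the abstract can be sketched in code. The following is a minimal, illustrative sketch only, not the paper's actual scoring function: the function name, the linear position weighting, and the `scale` and `window` parameters are assumptions. Only the three ingredients come from the abstract: the operator's amplitude, its position in the ansatz (later operators naturally carry smaller coefficients and should be cut less aggressively), and a dynamic threshold derived from the amplitudes of recently added operators.

```python
import numpy as np

def prune_ansatz(amplitudes, window=5, scale=0.3):
    """Illustrative amplitude- and position-aware pruning with a dynamic
    threshold. Hypothetical weighting; not the paper's exact formula."""
    amps = np.abs(np.asarray(amplitudes, dtype=float))
    n = len(amps)
    # Dynamic base threshold from the most recently added operators.
    base = scale * amps[-window:].mean()
    # Relax the cutoff for later positions, preserving the natural
    # reduction of coefficients as the ansatz grows.
    positions = np.arange(1, n + 1)
    cutoffs = base * (n - positions + 1) / n
    # Keep operators whose amplitude clears their position-dependent cutoff.
    return amps >= cutoffs

# An early near-zero operator is pruned, while a late small-amplitude
# operator survives thanks to its relaxed cutoff.
keep = prune_ansatz([0.8, 1e-6, 0.3, 0.05, 0.02])
```

Because the check runs on amplitudes already produced by the ADAPT-VQE optimization, a rule of this shape adds no quantum measurements, which is consistent with the abstract's claim that the refinement is cost-free.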
Related papers
- Thompson Sampling in Function Spaces via Neural Operators
We propose an extension of Thompson sampling to optimization problems over function spaces where the objective is a known functional of an unknown operator's output. Our algorithm employs a sample-then-optimize approach using neural operator surrogates.
arXiv Detail & Related papers (2025-06-27T04:21:57Z)
- Projective Quantum Eigensolver with Generalized Operators
We develop a methodology for determining the generalized operators in terms of closed-form residual equations in the PQE framework.
Applying it to several molecular systems, we demonstrate that our ansatz achieves accuracy similar to the (disentangled) UCC with singles, doubles, and triples.
arXiv Detail & Related papers (2024-10-21T15:40:22Z)
- Fast gradient-free optimization of excitations in variational quantum eigensolvers
We introduce a globally-informed, gradient-free optimization method for physically-motivated ansätze built from excitation operators.
The method reaches accurate energies in a single sweep over the parameters of a fixed ansatz.
arXiv Detail & Related papers (2024-09-09T18:00:00Z)
- Reducing measurement costs by recycling the Hessian in adaptive variational quantum algorithms
We propose an improved quasi-Newton optimization protocol specifically tailored to adaptive VQAs.
We implement a quasi-Newton algorithm where an approximation to the inverse Hessian matrix is continuously built and grown across the iterations of an adaptive VQA.
arXiv Detail & Related papers (2024-01-10T14:08:04Z)
- Energy-Preserving Reduced Operator Inference for Efficient Design and Control
This work presents a physics-preserving reduced model learning approach that targets partial differential equations.
EP-OpInf learns efficient and accurate reduced models that retain this energy-preserving structure.
arXiv Detail & Related papers (2024-01-05T16:39:48Z)
- Parameterized Projected Bellman Operator
Approximate value iteration (AVI) is a family of algorithms for reinforcement learning (RL).
We propose a novel alternative approach based on learning an approximate version of the Bellman operator.
We formulate an optimization problem to learn PBO for generic sequential decision-making problems.
arXiv Detail & Related papers (2023-12-20T09:33:16Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- HEAT: Hardware-Efficient Automatic Tensor Decomposition for Transformer Compression
We propose a hardware-aware tensor decomposition framework, dubbed HEAT, that enables efficient exploration of the exponential space of possible decompositions.
We experimentally show that our hardware-aware factorized BERT variants reduce the energy-delay product by 5.7x with less than 1.1% accuracy loss.
arXiv Detail & Related papers (2022-11-30T05:31:45Z)
- TETRIS-ADAPT-VQE: An adaptive algorithm that yields shallower, denser circuit ansätze
We introduce an algorithm called TETRIS-ADAPT-VQE, which iteratively builds up variational ansätze a few operators at a time.
It results in denser but significantly shallower circuits, without increasing the number of CNOT gates or variational parameters.
These improvements bring us closer to the goal of demonstrating a practical quantum advantage on quantum hardware.
arXiv Detail & Related papers (2022-09-21T18:00:02Z)
- Adaptive pruning-based optimization of parameterized quantum circuits
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call parameter-efficient circuit training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Variance Reduction with Sparse Gradients
Variance reduction methods such as SVRG and SpiderBoost use a mixture of large and small batch gradients.
We introduce a new sparsity operator: The random-top-k operator.
Our algorithm consistently outperforms SpiderBoost on various tasks including image classification, natural language processing, and sparse matrix factorization.
arXiv Detail & Related papers (2020-01-27T08:23:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.