TITAN: A Trajectory-Informed Technique for Adaptive Parameter Freezing in Large-Scale VQE
- URL: http://arxiv.org/abs/2509.15193v1
- Date: Thu, 18 Sep 2025 17:50:02 GMT
- Title: TITAN: A Trajectory-Informed Technique for Adaptive Parameter Freezing in Large-Scale VQE
- Authors: Yifeng Peng, Xinyi Li, Samuel Yen-Chi Chen, Kaining Zhang, Zhiding Liang, Ying Wang, Yuxuan Du,
- Abstract summary: Variational Quantum Eigensolver (VQE) is a leading candidate for harnessing quantum computers to advance quantum chemistry and materials simulations. We propose a deep learning framework, dubbed Titan, which identifies and freezes inactive parameters of a given ansatz. Titan achieves up to 3 times faster convergence and 40% to 60% fewer circuit evaluations than state-of-the-art baselines.
- Score: 32.19056082853506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variational Quantum Eigensolver (VQE) is a leading candidate for harnessing quantum computers to advance quantum chemistry and materials simulations, yet its training efficiency deteriorates rapidly for large Hamiltonians. Two issues underlie this bottleneck: (i) the no-cloning theorem imposes a linear growth in circuit evaluations with the number of parameters per gradient step; and (ii) deeper circuits encounter barren plateaus (BPs), leading to exponentially increasing measurement overheads. To address these challenges, here we propose a deep learning framework, dubbed Titan, which identifies and freezes inactive parameters of a given ansatz at initialization for a specific class of Hamiltonians, reducing the optimization overhead without sacrificing accuracy. The motivation of Titan starts with our empirical findings that a subset of parameters consistently has a negligible influence on training dynamics. Its design combines a theoretically grounded data construction strategy, ensuring each training example is informative and BP-resilient, with an adaptive neural architecture that generalizes across ansatze of varying sizes. Across benchmark transverse-field Ising models, Heisenberg models, and multiple molecule systems up to 30 qubits, Titan achieves up to 3 times faster convergence and 40% to 60% fewer circuit evaluations than state-of-the-art baselines, while matching or surpassing their estimation accuracy. By proactively trimming parameter space, Titan lowers hardware demands and offers a scalable path toward utilizing VQE to advance practical quantum chemistry and materials science.
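The freezing mechanism the abstract describes can be caricatured with a toy classical stand-in. This is a minimal sketch, not TITAN's actual neural predictor: the cost function, the gradient-magnitude threshold, and the freeze-at-initialization rule below are invented for illustration.

```python
import numpy as np

def cost(theta):
    # Toy stand-in for a VQE energy: only the first 3 of 8 parameters
    # influence the value, mimicking "inactive" ansatz parameters.
    return np.sum(np.sin(theta[:3]))

def parameter_shift_grad(f, theta, mask, shift=np.pi / 2):
    # Parameter-shift rule; frozen entries (mask == False) are skipped,
    # saving two circuit evaluations per frozen parameter per step.
    grad = np.zeros_like(theta)
    evals = 0
    for i in np.where(mask)[0]:
        plus, minus = theta.copy(), theta.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = 0.5 * (f(plus) - f(minus))
        evals += 2
    return grad, evals

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, size=8)
mask = np.ones(8, dtype=bool)      # all parameters trainable at first
total_evals = 0

for step in range(200):
    grad, evals = parameter_shift_grad(cost, theta, mask)
    total_evals += evals
    if step == 0:                  # freeze once, right after initialization
        mask &= np.abs(grad) > 1e-6
    theta -= 0.1 * grad

print(int(mask.sum()))             # 3: only the influential parameters survive
print(total_evals)                 # far fewer than the 2 * 8 * 200 = 3200 unfrozen cost
```

On hardware, every frozen parameter saves two circuit executions per gradient step, which is the kind of saving behind the reported 40% to 60% reduction in circuit evaluations.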
Related papers
- Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework. It mitigates the challenges of costly amplitude encoding and forgetting in variational quantum circuits. It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Towards Quantum Enhanced Adversarial Robustness with Rydberg Reservoir Learning [45.92935470813908]
Quantum reservoir computing (QRC) leverages the high-dimensional, nonlinear dynamics inherent in quantum many-body systems. Recent studies indicate that quantum models based on variational circuits remain susceptible to adversarial perturbations. We present the first systematic evaluation of adversarial robustness in a QRC-based learning model.
arXiv Detail & Related papers (2025-10-15T12:17:23Z) - Low Cost Bayesian Experimental Design for Quantum Frequency Estimation with Decoherence [45.74830585715129]
We introduce WES: a Window Expansion Strategy for low-cost adaptive Bayesian experimental design. We employ empirical cost-reduction techniques to keep the optimization overhead low, curb scaling problems, and enable high degrees of parallelism. Numerical simulations show that WES delivers the most reliable performance and fastest learning rate, saturating the Heisenberg limit.
arXiv Detail & Related papers (2025-08-09T23:41:58Z) - TensoMeta-VQC: A Tensor-Train-Guided Meta-Learning Framework for Robust and Scalable Variational Quantum Computing [60.996803677584424]
TensoMeta-VQC is a novel tensor-train (TT)-guided meta-learning framework designed to significantly improve the robustness and scalability of VQC. Our framework fully delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware.
arXiv Detail & Related papers (2025-08-01T23:37:55Z) - Sculpting Quantum Landscapes: Fubini-Study Metric Conditioning for Geometry Aware Learning in Parameterized Quantum Circuits [0.0]
We present a novel meta-learning framework called Sculpture that explicitly conditions the Fubini-Study metric tensor to mitigate barren plateaus in variational quantum algorithms. Our theoretical analysis identifies the logarithmic condition number of the Fubini-Study metric as a critical geometric quantity governing trainability, optimization dynamics, and generalization.
arXiv Detail & Related papers (2025-06-27T06:30:33Z) - Mitigating Barren Plateaus in Quantum Neural Networks via an AI-Driven Submartingale-Based Framework [3.0617189749929348]
We propose AdaInit to mitigate barren plateaus (BPs) in quantum neural networks (QNNs). AdaInit iteratively synthesizes initial parameters for QNNs that yield non-negligible gradient variance, thereby mitigating BPs. We provide rigorous theoretical analyses of the submartingale-based process and empirically validate that AdaInit consistently outperforms existing methods in maintaining higher gradient variance across various QNN scales.
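The resampling idea can be caricatured as follows. This is a sketch only, not AdaInit's submartingale machinery: the product-of-cosines cost, the shrinking sampling range, and the variance threshold below are invented for illustration.

```python
import numpy as np

def global_cost(theta):
    # Product-of-cosines observable: a classic barren-plateau-prone toy cost.
    return np.prod(np.cos(theta))

def grad(theta):
    # Analytic gradient of the product cost above.
    c = np.cos(theta)
    return np.array([-np.sin(theta[k]) * np.prod(np.delete(c, k))
                     for k in range(len(theta))])

def adainit_like(n_params, var_threshold=1e-2, max_tries=200, seed=0):
    # Resample initial parameters, narrowing the sampling range on each
    # failed attempt, until the gradient has non-negligible variance.
    rng = np.random.default_rng(seed)
    scale = np.pi
    for attempt in range(max_tries):
        theta = rng.uniform(-scale, scale, size=n_params)
        # Variance across gradient components: a cheap proxy for the
        # sample-averaged gradient variance used in the paper.
        if np.var(grad(theta)) > var_threshold:
            return theta, attempt
        scale *= 0.9   # shrink the range to escape the flat region
    raise RuntimeError("no suitable initialization found")

theta0, tries = adainit_like(n_params=12)
print(np.var(grad(theta0)) > 1e-2)   # True by construction of the loop
```

At the full range the gradient components are exponentially suppressed in the number of parameters; narrowing the range restores a usable gradient signal, which is the behavior the resampling loop exploits.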
arXiv Detail & Related papers (2025-02-17T05:57:15Z) - Diffusion-Enhanced Optimization of Variational Quantum Eigensolver for General Hamiltonians [17.975555487972166]
Variational quantum algorithms (VQAs) have emerged as a promising approach for achieving quantum advantage on current noisy quantum devices. However, their large-scale applications are significantly hindered by optimization challenges, such as the barren plateau (BP) phenomenon, local minima, and numerous iteration demands. In this work, we leverage denoising diffusion models (DMs) to address these difficulties.
arXiv Detail & Related papers (2025-01-10T02:32:46Z) - Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI).
Our approach generates a layered-structured ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines.
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
arXiv Detail & Related papers (2024-08-05T17:07:08Z) - Neural Quantum State Study of Fracton Models [3.8068573698649826]
Fracton models host unconventional topological orders in three and higher dimensions. We establish neural quantum states (NQS) as new tools to study phase transitions in these models. Our work demonstrates the remarkable potential of NQS in studying complicated three-dimensional problems.
arXiv Detail & Related papers (2024-06-17T15:58:09Z) - Towards Physically Consistent Deep Learning For Climate Model Parameterizations [46.07009109585047]
Parameterizations are a major source of systematic errors and large uncertainties in climate projections.
Deep learning (DL)-based parameterizations, trained on data from computationally expensive short, high-resolution simulations, have shown great promise for improving climate models.
We propose an efficient supervised learning framework for DL-based parameterizations that leads to physically consistent models.
arXiv Detail & Related papers (2024-06-06T10:02:49Z) - Alleviating Barren Plateaus in Parameterized Quantum Machine Learning Circuits: Investigating Advanced Parameter Initialization Strategies [4.169604865257672]
When initialized with random parameter values, parameterized quantum circuits (PQCs) often exhibit barren plateaus (BPs).
BPs, gradients that vanish as the number of qubits grows, hinder optimization in variational quantum algorithms.
In this paper, we analyze the impact of state-of-the-art parameter initialization strategies from classical machine learning in random PQCs.
arXiv Detail & Related papers (2023-11-22T08:07:53Z) - Towards Quantum Graph Neural Networks: An Ego-Graph Learning Approach [47.19265172105025]
We propose a novel hybrid quantum-classical algorithm for graph-structured data, which we refer to as the Ego-graph based Quantum Graph Neural Network (egoQGNN).
egoQGNN implements the GNN theoretical framework using the tensor product and unitary matrix representation, which greatly reduces the number of model parameters required.
The architecture is based on a novel mapping from real-world data to Hilbert space.
arXiv Detail & Related papers (2022-01-13T16:35:45Z) - FLIP: A flexible initializer for arbitrarily-sized parametrized quantum circuits [105.54048699217668]
We propose a FLexible Initializer for arbitrarily-sized Parametrized quantum circuits.
FLIP can be applied to any family of PQCs, and instead of relying on a generic set of initial parameters, it is tailored to learn the structure of successful parameters.
We illustrate the advantage of using FLIP in three scenarios: a family of problems with proven barren plateaus, PQC training to solve max-cut problem instances, and PQC training for finding the ground state energies of 1D Fermi-Hubbard models.
arXiv Detail & Related papers (2021-03-15T17:38:33Z) - GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the norm of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
arXiv Detail & Related papers (2021-02-16T11:45:35Z)
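A stripped-down version of the GradInit heuristic can be sketched as follows. This is illustration only: the two-layer linear network, the synthetic data, and the grid search below are invented (GradInit itself optimizes the per-layer scales by gradient descent, not by a grid).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))   # synthetic regression inputs
y = rng.normal(size=(64, 1))    # synthetic regression targets

def loss_after_one_step(scales, lr=0.1):
    # Scale each layer's (fixed) random init, take one SGD step on the
    # mean-squared error, and report the post-step loss.
    rw = np.random.default_rng(1)             # same init directions every call
    W1 = scales[0] * rw.normal(size=(10, 16)) / np.sqrt(10)
    W2 = scales[1] * rw.normal(size=(16, 1)) / np.sqrt(16)
    pred = X @ W1 @ W2
    g = 2.0 * (pred - y) / pred.size          # dL/dpred for the MSE loss
    gW2 = (X @ W1).T @ g                      # analytic layer gradients
    gW1 = X.T @ g @ W2.T
    W1, W2 = W1 - lr * gW1, W2 - lr * gW2     # one SGD step
    return np.mean((X @ W1 @ W2 - y) ** 2)

# Pick the per-layer scale multipliers that minimize the one-step loss.
grid = [0.25, 0.5, 1.0, 2.0, 4.0]
best = min(((s1, s2) for s1 in grid for s2 in grid),
           key=lambda s: loss_after_one_step(np.array(s)))
# The chosen pair is at least as good as the default (1.0, 1.0),
# since the default is itself in the grid.
print(best)
```

The key design choice is that the search objective is the loss *after* one optimizer step, so the selected scales account for the optimizer's dynamics rather than only the forward pass at initialization.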
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.