A rigorous hybridization of variational quantum eigensolver and classical neural network
- URL: http://arxiv.org/abs/2602.17295v1
- Date: Thu, 19 Feb 2026 11:58:50 GMT
- Title: A rigorous hybridization of variational quantum eigensolver and classical neural network
- Authors: Minwoo Kim, Kyoung Keun Park, Kyungmin Lee, Jeongho Bang, Taehyun Kim
- Abstract summary: Current approaches, such as diagonal non-unitary post-processing (DNP), cannot satisfy these requirements simultaneously. We develop a normalization-free alternative, the unitary variational quantum-neural hybrid eigensolver (U-VQNHE).
- Score: 21.337951716874414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural post-processing has been proposed as a lightweight route to enhance variational quantum eigensolvers by learning how to reweight measurement outcomes. In this work, we identify three general desiderata for such data-driven neural post-processing -- (i) self-contained training without prior knowledge, (ii) polynomial resources, and (iii) variational consistency -- and show that current approaches, such as diagonal non-unitary post-processing (DNP), cannot satisfy these requirements simultaneously. The obstruction is intrinsic: with finite sampling, normalization becomes a statistical bottleneck, and support mismatch between numerator and denominator estimators can render the empirical objective ill-conditioned and even sub-variational. Moreover, to reproduce the ground state with constant-depth ansatzes or with linear-depth circuits forming unitary 2-designs, the required reweighting range (and hence the sampling cost) grows exponentially with the number of qubits. Motivated by this no-go result, we develop a normalization-free alternative, the unitary variational quantum-neural hybrid eigensolver (U-VQNHE). U-VQNHE retains the practical appeal of a learnable diagonal post-processing layer while guaranteeing variational safety, and numerical experiments on transverse-field Ising models demonstrate improved accuracy and robustness over both VQE and DNP-based variants.
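The normalization bottleneck the abstract describes can be seen in a few lines of numerics. Below is a minimal sketch (not the paper's implementation) of a DNP-style diagonal reweighting estimator for a diagonal Hamiltonian: measurement outcomes z are reweighted by f(z)^2, and the energy estimate is a ratio of two empirical sums over the same finite sample. The distribution, the Ising chain, and the fixed `reweight` function are illustrative stand-ins; in DNP the reweighting factor would be a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits, n_shots = 4, 2000

# Toy stand-in for VQE measurement outcomes: sample bitstrings z ~ p(z)
# from an arbitrary distribution, as an ansatz circuit would produce.
probs = rng.dirichlet(np.ones(2 ** n_qubits))
samples = rng.choice(2 ** n_qubits, size=n_shots, p=probs)

def ising_energy(z_int: int) -> float:
    """Diagonal energy H(z) of a 1D classical Ising chain, H = -sum_i Z_i Z_{i+1}."""
    bits = [(z_int >> i) & 1 for i in range(n_qubits)]
    spins = [1 - 2 * b for b in bits]
    return -sum(spins[i] * spins[i + 1] for i in range(n_qubits - 1))

def reweight(z_int: int) -> float:
    """Illustrative diagonal post-processing factor f(z); a trained
    neural network in DNP, a fixed toy function here."""
    return float(np.exp(-0.1 * ising_energy(z_int)))

# Ratio estimator: E_hat = sum_z f(z)^2 H(z) / sum_z f(z)^2 over the shots.
# Numerator and denominator share the same finite sample, which is the
# statistical bottleneck the abstract identifies: if f(z)^2 must span an
# exponentially wide range, the denominator is dominated by rare shots and
# the empirical objective can become ill-conditioned or sub-variational.
f2 = np.array([reweight(z) ** 2 for z in samples])
H = np.array([ising_energy(z) for z in samples])
energy_estimate = (f2 * H).sum() / f2.sum()
print(energy_estimate)
```

Because the weights decrease with energy, the reweighted estimate lies at or below the plain sample mean; U-VQNHE's point is to keep such a learnable diagonal layer while avoiding the empirical normalization in the denominator altogether.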
Related papers
- Equivariant Evidential Deep Learning for Interatomic Potentials [55.6997213490859]
Uncertainty quantification is critical for assessing the reliability of machine learning interatomic potentials in molecular dynamics simulations. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. We propose *Equivariant Evidential Deep Learning for Interatomic Potentials* (e2IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly.
arXiv Detail & Related papers (2026-02-11T02:00:25Z)
- Schrödinger Neural Network and Uncertainty Quantification: Quantum Machine [0.0]
We introduce the Schrödinger Neural Network (SNN), a principled architecture for conditional density estimation and uncertainty quantification. The SNN maps each input to a normalized wave function on the output domain and computes predictive probabilities via the Born rule.
arXiv Detail & Related papers (2025-10-27T15:52:47Z)
- Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with vector quantile regression and applies it to multivariate conformal prediction.
arXiv Detail & Related papers (2025-09-29T19:50:19Z)
- Multivariate unbounded quantum regression via log-ratio probabilities mitigating barren plateaus [1.6317061277457001]
We introduce a novel and simple post-processing method utilizing log-ratio probabilities (LRPs) of quantum states. Our approach exponentially expands the number of regression outputs relative to qubit count, thus significantly improving parameter and qubit efficiency.
arXiv Detail & Related papers (2025-06-25T05:10:24Z)
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression. We use the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning. Our results extend and unify earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Harmonizing SO(3)-Equivariance with Neural Expressiveness: a Hybrid Deep Learning Framework Oriented to the Prediction of Electronic Structure Hamiltonian [36.13416266854978]
HarmoSE is a two-stage cascaded deep-learning regression framework. The first stage predicts Hamiltonians using abundant extracted SO(3)-equivariant features; the second stage refines the first stage's output into a fine-grained Hamiltonian prediction.
arXiv Detail & Related papers (2024-01-01T12:57:15Z)
- DiffHybrid-UQ: Uncertainty Quantification for Differentiable Hybrid Neural Modeling [4.76185521514135]
We introduce a novel method, DiffHybrid-UQ, for effective and efficient uncertainty propagation and estimation in hybrid neural differentiable models.
Specifically, our approach effectively discerns and quantifies both aleatoric uncertainties, arising from data noise, and epistemic uncertainties, resulting from model-form discrepancies and data sparsity.
arXiv Detail & Related papers (2023-12-30T07:40:47Z)
- Hybrid Ground-State Quantum Algorithms based on Neural Schrödinger Forging [0.0]
Entanglement-forging-based variational algorithms leverage the bipartition of quantum systems.
We propose a new method for entanglement forging employing generative neural networks to identify the most pertinent bitstrings.
We show that the proposed algorithm achieves comparable or superior performance compared to the existing standard implementation of entanglement forging.
arXiv Detail & Related papers (2023-07-05T20:06:17Z)
- Identification of quantum entanglement with Siamese convolutional neural networks and semi-supervised learning [0.0]
Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms.
In this study, we use deep convolutional neural networks, a type of supervised machine learning, to identify quantum entanglement for any bipartition of a 3-qubit system.
arXiv Detail & Related papers (2022-10-13T23:17:55Z)
- Toward Physically Realizable Quantum Neural Networks [15.018259942339446]
Current quantum neural network (QNN) designs face scalability challenges: the exponential state space of QNNs makes training procedures difficult to scale.
This paper presents a new model for QNNs that relies on band-limited Fourier expansions of transfer functions of quantum perceptrons.
arXiv Detail & Related papers (2022-03-22T23:03:32Z)
- Nonuniform-to-Uniform Quantization: Towards Accurate Quantization via Generalized Straight-Through Estimation [48.838691414561694]
Nonuniform-to-Uniform Quantization (N2UQ) is a method that can maintain the strong representation ability of nonuniform methods while being hardware-friendly and efficient.
N2UQ outperforms state-of-the-art nonuniform quantization methods by 0.7-1.8% on ImageNet.
arXiv Detail & Related papers (2021-11-29T18:59:55Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z) - Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with a proposed solution for extracting aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.