Universality and kernel-adaptive training for classically trained, quantum-deployed generative models
- URL: http://arxiv.org/abs/2510.08476v1
- Date: Thu, 09 Oct 2025 17:17:34 GMT
- Title: Universality and kernel-adaptive training for classically trained, quantum-deployed generative models
- Authors: Andrii Kurkin, Kevin Shen, Susanne Pielawa, Hao Wang, Vedran Dunjko,
- Abstract summary: The instantaneous quantum polynomial (IQP) quantum circuit Born machine (QCBM) has been proposed as a promising quantum generative model over bitstrings. Recent works have shown that the training of the IQP-QCBM is classically tractable w.r.t. the so-called Gaussian kernel maximum mean discrepancy (MMD) loss function. We show that in the kernel-adaptive method, convergence of the MMD value implies weak convergence in distribution of the generator.
- Score: 7.192684088403013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The instantaneous quantum polynomial (IQP) quantum circuit Born machine (QCBM) has been proposed as a promising quantum generative model over bitstrings. Recent works have shown that the training of the IQP-QCBM is classically tractable w.r.t. the so-called Gaussian kernel maximum mean discrepancy (MMD) loss function, while maintaining the potential of a quantum advantage for sampling itself. Nonetheless, the model has a number of aspects where improvements would be important for more general utility: (1) the basic model is known to be non-universal - i.e., it is not capable of representing arbitrary distributions, and it was not known whether universality can be achieved by adding hidden (ancillary) qubits; (2) a fixed Gaussian kernel used in the MMD loss can cause training issues, e.g., vanishing gradients. In this paper, we resolve the first question and make decisive strides on the second. We prove that for an $n$-qubit IQP generator, adding $n + 1$ hidden qubits makes the model universal. For the latter, we propose a kernel-adaptive training method, where the kernel is adversarially trained. We show that in the kernel-adaptive method, convergence of the MMD value implies weak convergence in distribution of the generator. We also analytically characterize the limitations of MMD-based training. Finally, we verify the performance benefits on a dataset crafted to spotlight the improvements offered by the suggested method. The results show that kernel-adaptive training outperforms a fixed Gaussian kernel in total variation distance, and the gap increases with the dataset dimensionality. These modifications and analyses shed light on the limits and potential of these new quantum generative methods, which could offer the first truly scalable insights into the comparative capacities of classical versus quantum models, even without access to scalable quantum computers.
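The Gaussian-kernel MMD loss referenced in the abstract can be illustrated with a minimal sketch. The estimator below is the standard biased (V-statistic) form of the squared MMD between two sets of bitstring samples; the function names, the fixed bandwidth `sigma`, and the toy distributions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian kernel matrix between rows of x (n, d) and y (m, d).
    For 0/1 bitstrings, the squared distance equals the Hamming distance."""
    sq_dist = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between sample sets x and y."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2.0 * k_xy

rng = np.random.default_rng(0)
# Toy stand-ins: uniform "generator" samples vs. a bit-biased "target".
p = rng.integers(0, 2, size=(500, 8)).astype(float)
q = (rng.random((500, 8)) < 0.8).astype(float)
print(mmd2(p, q))
```

A kernel-adaptive scheme in the spirit of the paper would treat `sigma` (or a richer kernel parameterization) as an adversary, maximizing `mmd2` while the generator minimizes it; with a single fixed bandwidth, poorly matched scales can flatten the loss landscape, which is the vanishing-gradient issue the abstract mentions.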
Related papers
- Characterizing Trainability of Instantaneous Quantum Polynomial Circuit Born Machines [7.716642023459826]
Instantaneous quantum polynomial (IQP) quantum circuit Born machines (IQP-QCBMs) have been proposed as quantum generative models. We show that barren plateaus depend on the generator set and the spectrum of the chosen kernel. We identify regimes in which low-weight-biased kernels avoid exponential suppression in structured topologies.
arXiv Detail & Related papers (2026-02-11T17:12:56Z) - Calibration of Quantum Devices via Robust Statistical Methods [45.464983015777314]
We numerically analyze advanced statistical methods for Bayesian inference against the state-of-the-art in quantum parameter learning. We show advantages of these approaches over existing ones, namely under multi-modality and high dimensionality. Our findings have applications in challenging quantum characterization tasks, namely learning the dynamics of open quantum systems.
arXiv Detail & Related papers (2025-07-09T15:22:17Z) - Benchmarking of quantum fidelity kernels for Gaussian process regression [1.7287035469433212]
Quantum computing algorithms have been shown to produce performant quantum kernels for machine-learning classification problems.
We show that quantum kernels can match, though not exceed, the expressivity of classical kernels for regression problems.
arXiv Detail & Related papers (2024-07-22T18:19:48Z) - Neutron-nucleus dynamics simulations for quantum computers [49.369935809497214]
We develop a novel quantum algorithm for neutron-nucleus simulations with general potentials.
It provides acceptable bound-state energies even in the presence of noise, through the noise-resilient training method.
We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity scheme.
arXiv Detail & Related papers (2024-02-22T16:33:48Z) - Quantum Kernel Machine Learning With Continuous Variables [0.0]
We represent quantum kernels as closed-form functions for continuous variable quantum computing platforms. We show every kernel can be expressed as the product of a Gaussian and an algebraic function of the parameters of the feature map. We prove kernels defined by feature maps of infinite stellar rank, such as GKP-state encodings, can be approximated arbitrarily well by kernels defined by feature maps of finite stellar rank.
arXiv Detail & Related papers (2024-01-11T03:49:40Z) - Randomized semi-quantum matrix processing [0.0]
We present a hybrid quantum-classical framework for simulating generic matrix functions.
The method is based on randomization over the Chebyshev approximation of the target function.
We prove advantages on average depths, including quadratic speed-ups on costly parameters.
arXiv Detail & Related papers (2023-07-21T18:00:28Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Are Quantum Circuits Better than Neural Networks at Learning Multi-dimensional Discrete Data? An Investigation into Practical Quantum Circuit Generative Models [0.0]
We show that multi-layer parameterized quantum circuits (MPQCs) are more expressive than classical neural networks (NNs).
We organize available sources into a systematic proof of why MPQCs can generate probability distributions that cannot be efficiently simulated classically.
We address practical issues such as how to efficiently train a quantum circuit with only limited samples, how to efficiently calculate the quantum gradient, and how to alleviate mode collapse.
arXiv Detail & Related papers (2022-12-13T05:31:31Z) - Automatic and effective discovery of quantum kernels [41.61572387137452]
Quantum computing can empower machine learning models by enabling kernel machines to leverage quantum kernels for representing similarity measures between data. We present an approach to this problem, which employs optimization techniques similar to those used in neural architecture search and AutoML. The results obtained by testing our approach on a high-energy physics problem demonstrate that, in the best-case scenario, we can either match or improve testing accuracy with respect to the manual design approach.
arXiv Detail & Related papers (2022-09-22T16:42:14Z) - Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z) - Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is that based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z) - Quantum Kernel Methods for Solving Differential Equations [21.24186888129542]
We propose several approaches for solving differential equations (DEs) with quantum kernel methods.
We compose quantum models as weighted sums of kernel functions, where variables are encoded using feature maps and model derivatives are represented.
arXiv Detail & Related papers (2022-03-16T18:56:35Z) - Preparation of excited states for nuclear dynamics on a quantum computer [117.44028458220427]
We study two different methods to prepare excited states on a quantum computer.
We benchmark these techniques on emulated and real quantum devices.
These findings show that quantum techniques designed to achieve good scaling on fault tolerant devices might also provide practical benefits on devices with limited connectivity and gate fidelity.
arXiv Detail & Related papers (2020-09-28T17:21:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.