Limits of quantum generative models with classical sampling hardness
- URL: http://arxiv.org/abs/2512.24801v1
- Date: Wed, 31 Dec 2025 11:40:50 GMT
- Authors: Sabrina Herbst, Ivona Brandić, Adrián Pérez-Salinas
- Abstract summary: We study quantum generative models from the perspective of output distributions. We find that models that anticoncentrate are not trainable on average, including those exhibiting quantum advantage. We conclude that quantum advantage can still be found in generative models, although its source must be distinct from anticoncentration.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sampling tasks have been successful in establishing quantum advantages both in theory and experiments. This has fueled the use of quantum computers for generative modeling to create samples following the probability distribution underlying a given dataset. In particular, the potential to build generative models on classically hard distributions would immediately preclude classical simulability, due to theoretical separations. In this work, we study quantum generative models from the perspective of output distributions, showing that models that anticoncentrate are not trainable on average, including those exhibiting quantum advantage. In contrast, models outputting data from sparse distributions can be trained. We consider special cases to enhance trainability, and observe that this opens the path for classical algorithms for surrogate sampling. This observed trade-off is linked to verification of quantum processes. We conclude that quantum advantage can still be found in generative models, although its source must be distinct from anticoncentration.
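The abstract's central quantity, anticoncentration, can be probed through the collision probability $Z = \sum_x p(x)^2$, which scales as $O(1/2^n)$ for anticoncentrated output distributions but remains $O(1)$ for sparse ones. A minimal NumPy sketch of this contrast (illustrative only, not the paper's construction; the Porter–Thomas-style exponential weights stand in for the output distribution of a deep random circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12          # number of qubits (illustrative)
d = 2 ** n      # Hilbert-space dimension

# Anticoncentrated case: Porter–Thomas-like weights, close to uniform
pt = rng.exponential(size=d)
pt /= pt.sum()

# Sparse case: all mass on a handful of bitstrings
sparse = np.zeros(d)
sparse[rng.choice(d, size=8, replace=False)] = rng.random(8)
sparse /= sparse.sum()

def collision_probability(p):
    """Z = sum_x p(x)^2: ~2/d for Porter–Thomas, O(1) for sparse."""
    return float(np.sum(p ** 2))

print(collision_probability(pt) * d)   # near 2, shrinking as 1/d in absolute terms
print(collision_probability(sparse))   # at least 1/8, independent of d
```

The gap between the two values is what separates the hard-to-train anticoncentrated regime from the trainable sparse regime discussed in the abstract.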
Related papers
- Quantum latent distributions in deep generative models [31.751144081456683]
We show that quantum latent distributions can lead to improved generative performance in GANs. This work confirms that near-term quantum processors can expand the capabilities of deep generative models.
arXiv Detail & Related papers (2025-08-27T13:20:01Z) - Overcoming Dimensional Factorization Limits in Discrete Diffusion Models through Quantum Joint Distribution Learning [79.65014491424151]
We propose a quantum Discrete Denoising Diffusion Probabilistic Model (QD3PM). It enables joint probability learning through diffusion and denoising in exponentially large Hilbert spaces. This paper establishes a new theoretical paradigm in generative models by leveraging the quantum advantage in joint distribution learning.
arXiv Detail & Related papers (2025-05-08T11:48:21Z) - Quantum Latent Diffusion Models [65.16624577812436]
We propose a potential version of a quantum diffusion model that leverages the established idea of classical latent diffusion models. This involves using a traditional autoencoder to reduce images, followed by operations with variational circuits in the latent space. The results demonstrate an advantage of the quantum version, as evidenced by better metrics for the images it generates.
arXiv Detail & Related papers (2025-01-19T21:24:02Z) - Quantum-Noise-Driven Generative Diffusion Models [1.6385815610837167]
We propose three quantum-noise-driven generative diffusion models that could be experimentally tested on real quantum systems.
The idea is to harness unique quantum features, in particular the non-trivial interplay among coherence, entanglement and noise.
Our results are expected to pave the way for new quantum-inspired or quantum-based generative diffusion algorithms.
arXiv Detail & Related papers (2023-08-23T09:09:32Z) - A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build over a proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z) - Protocols for classically training quantum generative models on probability distributions [17.857341127079305]
Quantum Generative Modelling (QGM) relies on preparing quantum states and generating samples from hidden or known probability distributions.
We propose protocols for classical training of QGMs based on circuits of the specific type that admit an efficient gradient.
We numerically demonstrate the end-to-end training of IQP circuits using probability distributions for up to 30 qubits on a regular desktop computer.
arXiv Detail & Related papers (2022-10-24T17:57:09Z) - Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Learnability of the output distributions of local quantum circuits [53.17490581210575]
We investigate, within two different oracle models, the learnability of quantum circuit Born machines.
We first show a negative result, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable.
We show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable.
arXiv Detail & Related papers (2021-10-11T18:00:20Z) - Enhancing Generative Models via Quantum Correlations [1.6099403809839032]
Generative modeling using samples drawn from the probability distribution constitutes a powerful approach for unsupervised machine learning.
We show theoretically that such quantum correlations provide a powerful resource for generative modeling.
We numerically test this separation on standard machine learning data sets and show that it holds for practical problems.
arXiv Detail & Related papers (2021-01-20T22:57:22Z)
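Several entries above, notably the QCBM/QGAN learnability results, train Born machines against a maximum mean discrepancy loss. A short classical sketch of the biased MMD² estimator with a Gaussian kernel (illustrative only; not the exact estimator or kernel choice of those papers):

```python
import numpy as np

rng = np.random.default_rng(1)

def mmd2(x, y, sigma=1.0):
    """Biased squared-MMD estimate: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def k(a, b):
        # Pairwise squared distances, then Gaussian kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Toy check: samples from the same distribution give near-zero MMD,
# samples from a shifted distribution give a clearly positive MMD.
x = rng.normal(size=(200, 2))
y = rng.normal(size=(200, 2))
z = rng.normal(loc=3.0, size=(200, 2))
print(mmd2(x, y))   # near zero
print(mmd2(x, z))   # clearly positive
```

In the quantum setting, `x` would be bitstring samples drawn from the Born machine and `y` samples from the target dataset; the loss is minimized over circuit parameters.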
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.