Limitations of Quantum Advantage in Unsupervised Machine Learning
- URL: http://arxiv.org/abs/2511.10709v1
- Date: Thu, 13 Nov 2025 08:50:40 GMT
- Title: Limitations of Quantum Advantage in Unsupervised Machine Learning
- Authors: Apoorva D. Patel
- Abstract summary: Machine learning models are used for pattern recognition analysis of big data. Quantum extensions of these models replace classical probability distributions with quantum density matrices. The problem-dependent extent of quantum advantage has implications for both data analysis and sensing applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models are used for pattern recognition analysis of big data, without direct human intervention. The task of unsupervised learning is to find the probability distribution that would best describe the available data, and then use it to make predictions for observables of interest. Classical models generally fit the data to a Boltzmann distribution of Hamiltonians with a large number of tunable parameters. Quantum extensions of these models replace classical probability distributions with quantum density matrices. An advantage can be obtained only when features of density matrices that are absent in classical probability distributions are exploited. Such situations depend on the input data as well as the targeted observables. Explicit examples are discussed that bring out the constraints limiting possible quantum advantage. The problem-dependent extent of quantum advantage has implications for both data analysis and sensing applications.
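The contrast the abstract draws, classical Boltzmann probabilities versus quantum density matrices, can be made concrete numerically. The following is a minimal single-qubit sketch (the parameters `beta`, `h`, and `g` are hypothetical choices, not taken from the paper): a diagonal Hamiltonian yields an ordinary Boltzmann distribution over basis states, while adding a non-commuting transverse-field term gives the Gibbs density matrix off-diagonal coherences, precisely the feature that classical probability distributions lack.

```python
import numpy as np

# Pauli matrices; sigma_x does not commute with sigma_z.
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

beta = 1.0        # inverse temperature (hypothetical)
h, g = 0.8, 0.5   # tunable field parameters (hypothetical)

# Classical model: diagonal Hamiltonian -> plain Boltzmann weights,
# i.e. an ordinary probability distribution over the basis states.
H_classical = h * sz
p = np.exp(-beta * np.diag(H_classical))
p /= p.sum()

# Quantum extension: a non-commuting term makes exp(-beta H) a
# genuine density matrix. Diagonalize H to compute the Gibbs state.
H_quantum = h * sz + g * sx
evals, evecs = np.linalg.eigh(H_quantum)
rho = evecs @ np.diag(np.exp(-beta * evals)) @ evecs.conj().T
rho /= np.trace(rho)  # normalize so Tr(rho) = 1

print("classical distribution:", p)
print("off-diagonal coherence:", rho[0, 1])
```

Any quantum advantage must come from observables sensitive to the off-diagonal element `rho[0, 1]`; for observables diagonal in the computational basis, the density matrix carries no more information than a classical distribution.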
Related papers
- Limits of quantum generative models with classical sampling hardness [2.321580694317368]
We study quantum generative models from the perspective of output distributions.
We find that models that anticoncentrate are not trainable on average, including those exhibiting quantum advantage.
We conclude that quantum advantage can still be found in generative models, although its source must be distinct from anticoncentration.
arXiv Detail & Related papers (2025-12-31T11:40:50Z)
- Interpretable representation learning of quantum data enabled by probabilistic variational autoencoders [0.5999777817331317]
Variational autoencoders (VAEs) have shown promise in extracting the hidden physical features of some input data.
When dealing with quantum data, VAEs must account for its intrinsic randomness and complex correlations.
Here, we demonstrate that two key modifications enable VAEs to learn physically meaningful latent representations.
arXiv Detail & Related papers (2025-06-13T17:39:41Z)
- Overcoming Dimensional Factorization Limits in Discrete Diffusion Models through Quantum Joint Distribution Learning [79.65014491424151]
We propose a quantum Discrete Denoising Diffusion Probabilistic Model (QD3PM).
It enables joint probability learning through diffusion and denoising in exponentially large Hilbert spaces.
This paper establishes a new theoretical paradigm in generative models by leveraging the quantum advantage in joint distribution learning.
arXiv Detail & Related papers (2025-05-08T11:48:21Z)
- Quantum Latent Diffusion Models [65.16624577812436]
We propose a quantum diffusion model that leverages the established idea of classical latent diffusion models.
This involves using a traditional autoencoder to reduce images, followed by operations with variational circuits in the latent space.
The results demonstrate an advantage of the quantum version, as evidenced by better metrics for the images it generates.
arXiv Detail & Related papers (2025-01-19T21:24:02Z)
- When Quantum and Classical Models Disagree: Learning Beyond Minimum Norm Least Square [1.9223856107206057]
Variational Quantum Circuits (VQCs) are important candidates for useful applications of quantum computing.
We propose a general theory of quantum advantages for regression problems.
We show that it is possible to design quantum models that cannot be classically approximated with good generalization.
arXiv Detail & Related papers (2024-11-07T18:18:38Z)
- Observable Statistical Mechanics [0.0]
We propose a general and computationally easy approach to determine the stationary probability distribution of observables.
We show that the resulting theory accurately predicts stationary probability distributions without detailed microscopic information such as the energy eigenstates.
arXiv Detail & Related papers (2023-09-26T18:18:39Z)
- On the Sample Complexity of Quantum Boltzmann Machine Learning [0.0]
We give an operational definition of QBM learning in terms of the difference in expectation values between the model and the target.
We prove that a solution can be obtained with gradient descent using at most a number of Gibbs states.
In particular, we give pre-training strategies based on mean-field, Gaussian Fermionic, and geometrically local Hamiltonians.
arXiv Detail & Related papers (2023-06-26T18:00:50Z)
- Classical Verification of Quantum Learning [42.362388367152256]
We develop a framework for classical verification of quantum learning.
We propose a new quantum data access model that we call "mixture-of-superpositions" quantum examples.
Our results demonstrate that the potential power of quantum data for learning tasks, while not unlimited, can be utilized by classical agents.
arXiv Detail & Related papers (2023-06-08T00:31:27Z)
- Transition Role of Entangled Data in Quantum Machine Learning [51.6526011493678]
Entanglement serves as the resource to empower quantum computing.
Recent progress has highlighted its positive impact on learning quantum dynamics.
We establish a quantum no-free-lunch (NFL) theorem for learning quantum dynamics using entangled data.
arXiv Detail & Related papers (2023-06-06T08:06:43Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their advantages when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
- Learnability of the output distributions of local quantum circuits [53.17490581210575]
We investigate, within two different oracle models, the learnability of quantum circuit Born machines.
We first show a negative result, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable.
We show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable.
arXiv Detail & Related papers (2021-10-11T18:00:20Z)
- The probabilistic world [0.0]
We show that cellular automata are quantum systems in a formulation with discrete time steps and real wave functions.
The quantum formalism for classical statistics is a powerful tool which allows us to implement for generalized Ising models the momentum observable.
arXiv Detail & Related papers (2020-11-04T14:05:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.