A super-polynomial quantum-classical separation for density modelling
- URL: http://arxiv.org/abs/2210.14936v1
- Date: Wed, 26 Oct 2022 18:00:03 GMT
- Title: A super-polynomial quantum-classical separation for density modelling
- Authors: Niklas Pirnay, Ryan Sweke, Jens Eisert, Jean-Pierre Seifert
- Abstract summary: We show that fault-tolerant quantum computers can offer a super-polynomial advantage over classical learning algorithms.
We also show that any weak pseudo-random function can be used to construct a classically hard density modelling problem.
- Score: 9.980327191634071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Density modelling is the task of learning an unknown probability density
function from samples, and is one of the central problems of unsupervised
machine learning. In this work, we show that there exists a density modelling
problem for which fault-tolerant quantum computers can offer a super-polynomial
advantage over classical learning algorithms, given standard cryptographic
assumptions. Along the way, we provide a variety of additional results and
insights, of potential interest for proving future distribution learning
separations between quantum and classical learning algorithms. Specifically, we
(a) provide an overview of the relationships between hardness results in
supervised learning and distribution learning, and (b) show that any weak
pseudo-random function can be used to construct a classically hard density
modelling problem. The latter result opens up the possibility of proving
quantum-classical separations for density modelling based on weaker assumptions
than those necessary for pseudo-random functions.
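The paper's second result turns any weak pseudo-random function into a classically hard density modelling problem: the target density is the distribution of pairs $(x, f_k(x))$ for a uniform input $x$ and a secret key $k$. A minimal classical sketch of sampling from such a distribution, using HMAC-SHA256 as a stand-in PRF (an assumption for illustration; the paper works with abstract weak PRFs, not this specific primitive):

```python
import hmac
import hashlib
import secrets

def prf(key: bytes, x: bytes, out_bytes: int = 4) -> bytes:
    """Stand-in PRF: HMAC-SHA256 truncated to out_bytes (illustrative only)."""
    return hmac.new(key, x, hashlib.sha256).digest()[:out_bytes]

def sample(key: bytes, n_bytes: int = 4) -> bytes:
    """One draw from the induced distribution: a uniform input x
    concatenated with its PRF value f_k(x)."""
    x = secrets.token_bytes(n_bytes)
    return x + prf(key, x, n_bytes)

key = secrets.token_bytes(16)
samples = [sample(key) for _ in range(5)]
# Each sample is n_bytes of uniform input followed by n_bytes of PRF output;
# without k, the PRF half is indistinguishable from uniform to an efficient
# classical learner, which is what makes modelling this density hard.
```

Any density model that fit these samples well would let a distinguisher tell $f_k$ apart from a random function, contradicting PRF security; this is the intuition the construction formalizes.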
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Quantum sequential scattering model for quantum state learning [6.040584660207655]
We devise the quantum sequential scattering model (QSSM) to overcome the vanishing gradient problem for a large class of high-dimensional sequential target states possessing gradient-scaled Schmidt ranks.
Our work indicates that increasing entanglement in the target states necessitates a larger-scale model, which can reduce the model's learning performance and efficiency.
arXiv Detail & Related papers (2023-10-11T18:31:40Z) - Quantum-Noise-Driven Generative Diffusion Models [1.6385815610837167]
We propose three quantum-noise-driven generative diffusion models that could be experimentally tested on real quantum systems.
The idea is to harness unique quantum features, in particular the non-trivial interplay among coherence, entanglement and noise.
Our results are expected to pave the way for new quantum-inspired or quantum-based generative diffusion algorithms.
arXiv Detail & Related papers (2023-08-23T09:09:32Z) - ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
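ShadowNet builds on classical shadows, which reconstruct properties of a state from randomized measurements. A minimal single-qubit sketch of the underlying shadow protocol (random Pauli-basis measurements, with the standard snapshot inversion $3|s\rangle\langle s| - I$), assumed here as a simplified stand-in for the paper's multi-qubit pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows of each matrix are the measurement-basis bras for that Pauli.
PAULIS = {
    "X": np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2),
    "Y": np.array([[1, -1j], [1, 1j]], dtype=complex) / np.sqrt(2),
    "Z": np.eye(2, dtype=complex),
}

def shadow_snapshot(rho, rng):
    """Measure rho in a uniformly random Pauli basis and return the
    inverted single-qubit snapshot 3|s><s| - I."""
    basis = rng.choice(list(PAULIS))
    U = PAULIS[basis]
    probs = np.real(np.diag(U @ rho @ U.conj().T))
    b = rng.choice(2, p=probs / probs.sum())
    s = U.conj().T[:, b]  # measured basis state, rotated back
    return 3 * np.outer(s, s.conj()) - np.eye(2)

# Estimate <Z> for rho = |0><0| (true value +1) from 2000 snapshots.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
Z = np.diag([1.0, -1.0])
est = np.mean([np.real(np.trace(Z @ shadow_snapshot(rho, rng)))
               for _ in range(2000)])
```

The paper's contribution is to feed such randomized-measurement data into a neural network trained offline, rather than applying the bare inversion formula per state as done here.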
arXiv Detail & Related papers (2023-08-22T09:11:53Z) - A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d=\omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Learnability of the output distributions of local quantum circuits [53.17490581210575]
We investigate, within two different oracle models, the learnability of quantum circuit Born machines.
We first show a negative result, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable.
We show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable.
arXiv Detail & Related papers (2021-10-11T18:00:20Z) - Generative Quantum Learning of Joint Probability Distribution Functions [1.221966660783828]
We design quantum machine learning algorithms to model copulas.
We show that any copula can be naturally mapped to a multipartite maximally entangled state.
A variational ansatz we christen a 'qopula' creates arbitrary correlations between variables.
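For context, a copula couples arbitrary marginal distributions to a chosen dependence structure. A minimal classical sketch of the Gaussian copula (the classical counterpart of what the 'qopula' ansatz encodes in an entangled state; the correlation value and sample count below are illustrative choices, not the paper's):

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_samples(corr, n, seed=0):
    """Draw n points from a 2D Gaussian copula: uniform marginals on
    [0,1], coupled with the given Gaussian correlation."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Pushing each Gaussian marginal through its own CDF makes the
    # marginals uniform while preserving the dependence structure.
    phi = np.vectorize(lambda x: 0.5 * (1 + erf(x / sqrt(2))))
    return phi(z)

u = gaussian_copula_samples(0.8, 5000)
# u[:, 0] and u[:, 1] are each ~uniform on [0,1] but strongly correlated.
```

The quantum construction plays the same game: the maximally entangled state supplies the dependence structure, and local operations shape the marginals.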
arXiv Detail & Related papers (2021-09-13T20:50:15Z) - Outlier-Robust Learning of Ising Models Under Dobrushin's Condition [57.89518300699042]
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
Our main result is to provide the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees.
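The robust learning algorithm itself is involved, but the object being learned is easy to illustrate: an Ising model sampled via single-site Glauber dynamics, which mixes rapidly precisely under Dobrushin-type weak-interaction conditions. A minimal sketch with toy parameters (the coupling strength and step count are illustrative assumptions, not from the paper):

```python
import numpy as np

def glauber_sample(J, h, steps=2000, seed=0):
    """One approximate sample from the Ising model
    p(s) ∝ exp(sum_{i<j} J_ij s_i s_j + sum_i h_i s_i), s in {-1,+1}^n,
    via single-site Glauber updates (fast-mixing when J is weak)."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        # Local field on spin i from its neighbours plus external field.
        field = J[i] @ s - J[i, i] * s[i] + h[i]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
        s[i] = 1 if rng.random() < p_plus else -1
    return s

n = 10
J = 0.05 * np.ones((n, n))   # weak, Dobrushin-style couplings
np.fill_diagonal(J, 0.0)
h = np.zeros(n)
spins = glauber_sample(J, h)
```

The outlier-robust setting studied in the paper then corrupts an adversarially chosen constant fraction of such samples before the learner sees them.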
arXiv Detail & Related papers (2021-02-03T18:00:57Z) - Enhancing Generative Models via Quantum Correlations [1.6099403809839032]
Generative modeling using samples drawn from the probability distribution constitutes a powerful approach for unsupervised machine learning.
We show theoretically that such quantum correlations provide a powerful resource for generative modeling.
We numerically test this separation on standard machine learning data sets and show that it holds for practical problems.
arXiv Detail & Related papers (2021-01-20T22:57:22Z) - On the Quantum versus Classical Learnability of Discrete Distributions [9.980327191634071]
We study the comparative power of classical and quantum learners for generative modelling within the Probably Approximately Correct (PAC) framework.
Our primary result is the explicit construction of a class of discrete probability distributions which, under the decisional Diffie-Hellman assumption, is provably not efficiently PAC learnable by a classical generative modelling algorithm.
This class of distributions provides a concrete example of a generative modelling problem for which quantum learners exhibit a provable advantage over classical learning algorithms.
arXiv Detail & Related papers (2020-07-28T19:43:46Z)
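The decisional Diffie-Hellman assumption underlying that last construction says that tuples $(g^a, g^b, g^{ab})$ are indistinguishable from $(g^a, g^b, g^c)$ with independent $c$. A toy sampler over a small prime-order subgroup (the parameters below are illustrative and far too small to be secure; real constructions use cryptographically sized groups):

```python
import secrets

# Toy safe-prime group: p = 2q + 1 with p, q prime; g generates the
# order-q subgroup of Z_p^*. Illustration only -- not secure parameters.
p = 2579
q = 1289
g = pow(2, (p - 1) // q, p)

def ddh_tuple(real: bool):
    """Return (g^a, g^b, g^ab) mod p if real, else (g^a, g^b, g^c)
    with an independent random exponent c."""
    a = secrets.randbelow(q - 1) + 1
    b = secrets.randbelow(q - 1) + 1
    c = (a * b) % q if real else secrets.randbelow(q - 1) + 1
    return pow(g, a, p), pow(g, b, p), pow(g, c, p)
```

A classical generative model that fit the distribution of real tuples well would yield a DDH distinguisher, which is what makes the distribution class classically hard to learn while a quantum learner (via Shor-style period finding) can recover the structure.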
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.