Do Quantum Circuit Born Machines Generalize?
- URL: http://arxiv.org/abs/2207.13645v4
- Date: Sat, 13 May 2023 18:20:11 GMT
- Title: Do Quantum Circuit Born Machines Generalize?
- Authors: Kaitlin Gili, Mohamed Hibat-Allah, Marta Mauri, Chris Ballance,
Alejandro Perdomo-Ortiz
- Abstract summary: We present the first work in the literature that treats the QCBM's generalization performance as an integral evaluation metric for quantum generative models.
We show that the QCBM is able to effectively learn the reweighted dataset and generate unseen samples with higher quality than those in the training set.
- Score: 58.720142291102135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent proposals of quantum circuit models for generative tasks, the
discussion about their performance has been limited to their ability to
reproduce a known target distribution. For example, expressive model families
such as Quantum Circuit Born Machines (QCBMs) have been almost entirely
evaluated on their capability to learn a given target distribution with high
accuracy. While this aspect may be ideal for some tasks, it limits the scope of
a generative model's assessment to its ability to memorize data rather than
generalize. As a result, there has been little understanding of a model's
generalization performance and the relation between such capability and the
resource requirements, e.g., the circuit depth and the amount of training data.
In this work, we leverage a recently proposed generalization evaluation
framework to begin addressing this knowledge gap. We first investigate how the
QCBM learns a cardinality-constrained distribution and observe an increase in
generalization performance as the circuit depth increases. In
the 12-qubit example presented here, we observe that with as few as 30% of the
valid data in the training set, the QCBM exhibits the best generalization
performance toward generating unseen and valid data. Lastly, we assess the
QCBM's ability to generalize not only to valid samples, but to high-quality
bitstrings distributed according to an adequately re-weighted distribution. We
see that the QCBM is able to effectively learn the reweighted dataset and
generate unseen samples with higher quality than those in the training set. To
the best of our knowledge, this is the first work in the literature that
presents the QCBM's generalization performance as an integral evaluation metric
for quantum generative models, and demonstrates the QCBM's ability to
generalize to high-quality, desired novel samples.
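The generalization evaluation described above hinges on counting how many model samples are valid under the cardinality constraint yet never appeared in the training set. The following is a minimal sketch of such an evaluation in Python/NumPy; it is not code from the paper, and the metric names, the assumed Hamming weight of 6, and the uniform stand-in sampler are illustrative assumptions loosely following the cited generalization framework.

```python
# Minimal illustrative sketch (not code from the paper): given bitstring
# samples drawn from a trained 12-qubit QCBM, estimate how many are valid
# under a cardinality (fixed Hamming weight) constraint and how many are both
# valid and unseen in the training set. The metric names, the weight k = 6,
# and the uniform stand-in sampler are assumptions made for illustration.
from math import comb

import numpy as np

N_QUBITS = 12      # matches the 12-qubit example in the abstract
CARDINALITY = 6    # assumed Hamming-weight constraint (illustrative choice)


def is_valid(bitstring: str) -> bool:
    """A sample is 'valid' if it satisfies the cardinality constraint."""
    return bitstring.count("1") == CARDINALITY


def generalization_metrics(queries, train_set):
    """Summarize valid and valid-and-unseen samples among the model's queries."""
    n_valid_total = comb(N_QUBITS, CARDINALITY)   # size of the full valid space
    unique_valid = {q for q in set(queries) if is_valid(q)}
    unseen_valid = unique_valid - train_set
    return {
        "fraction_valid": sum(is_valid(q) for q in queries) / len(queries),
        "unique_unseen_valid": len(unseen_valid),
        # rough coverage of the part of the valid space never shown in training
        "coverage_estimate": len(unseen_valid) / max(1, n_valid_total - len(train_set)),
    }


# Toy usage with a uniform stand-in sampler; in practice, samples measured from
# the trained QCBM would replace `model_queries`.
rng = np.random.default_rng(0)
valid_space = [f"{i:0{N_QUBITS}b}" for i in range(2**N_QUBITS)
               if f"{i:0{N_QUBITS}b}".count("1") == CARDINALITY]
train_set = set(rng.choice(valid_space, size=int(0.3 * len(valid_space)),
                           replace=False))        # ~30% of the valid data, as in the abstract
model_queries = list(rng.choice(valid_space, size=1000, replace=True))
print(generalization_metrics(model_queries, train_set))
```

In practice, `model_queries` would be bitstrings measured from the trained QCBM, and these quantities would be tracked as the circuit depth and the training-set fraction are varied.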
Related papers
- Quantum Generative Modeling of Sequential Data with Trainable Token
Embedding [0.0]
A quantum-inspired generative model known as the Born machine has shown great advancements in learning classical and quantum data.
We generalize the embedding method to trainable quantum measurement operators that can be trained simultaneously with the MPS.
Our study indicates that, combined with a trainable embedding, Born machines exhibit better performance and learn deeper correlations from the dataset.
arXiv Detail & Related papers (2023-11-08T22:56:37Z)
A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build upon a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
Reassessing Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization [27.437077941786768]
Vision-and-language (V&L) models pretrained on large-scale multimodal data have demonstrated strong performance on various tasks.
We evaluate two pretrained V&L models under different settings by conducting cross-dataset evaluations.
We find that these models tend to learn to solve the benchmark, rather than learning the high-level skills required by the VQA task.
arXiv Detail & Related papers (2022-05-24T16:44:45Z)
Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their advantages when quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
Out-of-distribution generalization for learning quantum dynamics [2.1503874224655997]
We show that one can learn the action of a unitary on entangled states having trained only on product states.
This advances the prospects of learning quantum dynamics on near-term quantum hardware.
arXiv Detail & Related papers (2022-04-21T17:15:23Z)
Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data [66.11139091362078]
We provide the first model selection results on large pretrained Transformers from Huggingface using generalization metrics.
Despite their niche status, we find that metrics derived from the heavy-tail (HT) perspective are particularly useful in NLP tasks.
arXiv Detail & Related papers (2022-02-06T20:07:35Z)
Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)