A New Initial Distribution for Quantum Generative Adversarial Networks
to Load Probability Distributions
- URL: http://arxiv.org/abs/2306.12303v2
- Date: Thu, 10 Aug 2023 01:40:15 GMT
- Title: A New Initial Distribution for Quantum Generative Adversarial Networks
to Load Probability Distributions
- Authors: Yuichi Sano, Ryosuke Koga, Masaya Abe, Kei Nakagawa
- Abstract summary: We propose a novel method for generating an initial distribution that improves the learning efficiency of qGANs.
Our method uses the classical process of label replacement to generate various probability distributions in shallow quantum circuits.
- Score: 4.043200001974071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum computers are gaining attention for their ability to solve certain
problems faster than classical computers, and one example is the quantum
expectation estimation algorithm that accelerates the widely-used Monte Carlo
method in fields such as finance. A previous study has shown that quantum
generative adversarial networks (qGANs), a quantum circuit version of generative
adversarial networks (GANs), can generate the probability distribution necessary
for the quantum expectation estimation algorithm in shallow quantum circuits.
However, a previous study has also suggested that the convergence speed and
accuracy of the generated distribution can vary greatly depending on the
initial distribution of qGANs' generator. In particular, the effectiveness of
using a normal distribution as the initial distribution has been claimed, but
it requires a deep quantum circuit, which may lose the advantage of qGANs.
Therefore, in this study, we propose a novel method for generating an initial
distribution that improves the learning efficiency of qGANs. Our method uses
the classical process of label replacement to generate various probability
distributions in shallow quantum circuits. We demonstrate that our proposed
method can generate the log-normal distribution, which is pivotal in financial
engineering, as well as the triangular distribution and the bimodal
distribution, more efficiently than current methods. Additionally, we show that
the initial distribution proposed in our research is related to the problem of
determining the initial weights for qGANs.
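As a rough illustration of the label-replacement idea described in the abstract, the sketch below classically permutes the computational-basis labels of a (simulated) shallow-circuit output distribution so that its shape follows a discretized log-normal target. The variable names and the exact relabeling rule are hypothetical stand-ins, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

n_qubits = 3
dim = 2 ** n_qubits

# Stand-in for a shallow circuit's measured output distribution.
circuit_probs = rng.dirichlet(np.ones(dim))

# Target: a log-normal shape discretized over the 8 basis labels.
x = np.arange(1, dim + 1, dtype=float)
target = np.exp(-(np.log(x) - 1.0) ** 2 / 2.0) / x
target /= target.sum()

# Relabel: assign the k-th largest circuit probability to the basis
# label that holds the k-th largest target probability.
perm = np.empty(dim, dtype=int)
perm[np.argsort(target)] = np.argsort(circuit_probs)
relabeled = circuit_probs[perm]

# The relabeled distribution has the same values as the circuit's
# output but now follows the target's rank ordering.
assert np.array_equal(np.argsort(relabeled), np.argsort(target))
```

The point of the sketch is that the relabeling is entirely classical: the circuit depth is unchanged, only the interpretation of the measured bitstrings is permuted.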
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Efficient quantum loading of probability distributions through Feynman propagators [2.56711111236449]
We present quantum algorithms for the loading of probability distributions using Hamiltonian simulation for one-dimensional Hamiltonians of the form $\hat{H} = \Delta + V(x)\,\mathbb{I}$.
We consider the potentials $V(x)$ for which the Feynman propagator is known to have an analytically closed form and utilize these Hamiltonians to load probability distributions into quantum states.
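As a numerical aside on what "loading a probability distribution" means here: the goal is a state whose amplitudes are $\sqrt{p_i}$, so a computational-basis measurement returns outcome $i$ with probability $p_i$. The paper reaches such states via Hamiltonian simulation with known Feynman propagators; the sketch below only checks the target state's Born-rule property, with an assumed discretized Gaussian:

```python
import numpy as np

n = 4
grid = np.linspace(-3, 3, 2 ** n)
p = np.exp(-grid ** 2 / 2)   # discretized Gaussian (assumed example target)
p /= p.sum()

state = np.sqrt(p)           # amplitudes of the loaded state
assert np.isclose(np.linalg.norm(state), 1.0)

measured = np.abs(state) ** 2  # Born-rule measurement probabilities
assert np.allclose(measured, p)
```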
arXiv Detail & Related papers (2023-11-22T21:41:58Z)
- Connection between single-layer Quantum Approximate Optimization Algorithm interferometry and thermal distributions sampling [0.0]
We extend the theoretical derivation of the amplitudes of the eigenstates and the Boltzmann distributions generated by single-layer QAOA.
We also review the implications that this behavior has from both a practical and fundamental perspective.
arXiv Detail & Related papers (2023-10-13T15:06:58Z)
- Quantum state preparation for bell-shaped probability distributions using deconvolution methods [0.0]
We present a hybrid classical-quantum approach to load quantum data.
We use the Jensen-Shannon distance as the cost function to quantify the closeness of the outcome from the classical step and the target distribution.
The output from the deconvolution step is used to construct the quantum circuit required to load the given probability distribution.
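The Jensen-Shannon distance mentioned above is a standard quantity: the square root of the Jensen-Shannon divergence between two discrete distributions. A minimal sketch (base-2 logarithm, so the distance lies in [0, 1]); the paper's implementation details may differ:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)  # mixture midpoint

    def kl(a, b):
        # KL divergence in bits; eps guards against log(0)
        return np.sum(a * np.log2((a + eps) / (b + eps)))

    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)  # JS divergence
    return np.sqrt(max(jsd, 0.0))          # distance = sqrt(divergence)

# Identical distributions are at distance 0.
assert np.isclose(js_distance([0.5, 0.5], [0.5, 0.5]), 0.0)

# Disjoint supports give the maximum distance of 1.
assert np.isclose(js_distance([1.0, 0.0], [0.0, 1.0]), 1.0, atol=1e-5)
```

Unlike the KL divergence, this distance is symmetric and bounded, which makes it a well-behaved cost function for comparing a candidate distribution against a target.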
arXiv Detail & Related papers (2023-10-08T06:55:47Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
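For readers unfamiliar with AIS, the following is a generic 1-D toy sketch of the baseline method: anneal from a tractable start, $\mathcal{N}(0, 4)$, to a target, $\mathcal{N}(3, 1)$, along a geometric path, accumulating importance weights and applying one Metropolis transition per temperature. All distributions and parameters here are illustrative; this is not the constant-rate variant the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p0(x):
    # log density of the tractable start, N(0, 4)
    return -x ** 2 / 8.0 - 0.5 * np.log(2 * np.pi * 4)

def log_p1(x):
    # unnormalized log density of the target, N(3, 1)
    return -0.5 * (x - 3.0) ** 2

n_samples, n_steps = 2000, 200
betas = np.linspace(0.0, 1.0, n_steps + 1)

x = rng.normal(0.0, 2.0, n_samples)
log_w = np.zeros(n_samples)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # weight update along the geometric path p_b ∝ p0^(1-b) p1^b
    log_w += (b - b_prev) * (log_p1(x) - log_p0(x))

    # one Metropolis transition targeting the intermediate distribution
    def log_pb(y):
        return (1 - b) * log_p0(y) + b * log_p1(y)

    prop = x + rng.normal(0.0, 0.5, n_samples)
    accept = np.log(rng.uniform(size=n_samples)) < log_pb(prop) - log_pb(x)
    x = np.where(accept, prop, x)

# self-normalized importance-weighted estimate of E[x] under the target
w = np.exp(log_w - log_w.max())
mean_est = np.sum(w * x) / np.sum(w)
```

The weighted mean should land near the target mean of 3, despite the chain starting from a distribution centered at 0.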
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Protocols for classically training quantum generative models on probability distributions [17.857341127079305]
Quantum Generative Modelling (QGM) relies on preparing quantum states and generating samples from hidden or known probability distributions.
We propose protocols for classical training of QGMs based on circuits of the specific type that admit an efficient gradient.
We numerically demonstrate the end-to-end training of IQP circuits using probability distributions for up to 30 qubits on a regular desktop computer.
arXiv Detail & Related papers (2022-10-24T17:57:09Z)
- A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d=\omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z)
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
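The classical Kolmogorov-Smirnov distance that these GAN losses smooth and generalize is simply the largest gap between two empirical CDFs. A minimal sketch, with assumed Gaussian test samples:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance between samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    # empirical CDFs of each sample evaluated on the pooled grid
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
same = ks_distance(rng.normal(size=5000), rng.normal(size=5000))
diff = ks_distance(rng.normal(size=5000), rng.normal(3.0, 1.0, size=5000))
assert same < diff  # shifted distributions are farther apart
```

The hard `max` over the CDF gap is what makes the plain KS distance awkward to optimize with gradients, which is the motivation for the smoothed variants the paper uses as losses.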
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Learnability of the output distributions of local quantum circuits [53.17490581210575]
We investigate, within two different oracle models, the learnability of quantum circuit Born machines.
We first show a negative result, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable.
We show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable.
arXiv Detail & Related papers (2021-10-11T18:00:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.