Quantum mixture-density network for multimodal probabilistic prediction
- URL: http://arxiv.org/abs/2506.09497v1
- Date: Wed, 11 Jun 2025 08:13:29 GMT
- Title: Quantum mixture-density network for multimodal probabilistic prediction
- Authors: Jaemin Seo
- Abstract summary: Multimodal probability distributions are common in both quantum and classical systems. We introduce a Quantum Mixture-Density Network (Q-MDN) that employs parameterized quantum circuits to efficiently model multimodal distributions.
- Score: 1.6317061277457001
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multimodal probability distributions are common in both quantum and classical systems, yet modeling them remains challenging when the number of modes is large or unknown. Classical methods such as mixture-density networks (MDNs) scale poorly, requiring parameter counts that grow quadratically with the number of modes. We introduce a Quantum Mixture-Density Network (Q-MDN) that employs parameterized quantum circuits to efficiently model multimodal distributions. By representing an exponential number of modes with a compact set of qubits and parameters, Q-MDN predicts Gaussian mixture components with high resolution. We evaluate Q-MDN on two benchmark tasks: the quantum double-slit experiment and chaotic logistic bifurcation. In both cases, Q-MDN outperforms classical MDNs in mode separability and prediction sharpness under equal parameter budgets. Our results demonstrate a practical quantum advantage in probabilistic regression and highlight the potential of quantum machine learning in capturing complex stochastic behavior beyond the reach of classical models.
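The abstract does not spell out the architecture, but the core idea can be sketched: the Born probabilities of a k-qubit parameterized circuit supply 2^k mixture weights at once, while small classical heads supply the per-component Gaussian means and widths. Everything below (the AngleEmbedding encoding, the StronglyEntanglingLayers ansatz, the linear heads) is an illustrative assumption, not the authors' published design.

```python
# Hedged sketch of a quantum mixture-density network: a k-qubit circuit's
# Born probabilities act as 2**k mixture weights pi_j(x); assumed, not the
# paper's exact architecture. Library: PennyLane.
import numpy as np
import pennylane as qml

K = 3                                    # qubits; 2**K = 8 mixture components
dev = qml.device("default.qubit", wires=K)

@qml.qnode(dev)
def mixture_weights(x, weights):
    # Encode the scalar input on every wire, then entangle; the Born
    # probabilities over the computational basis serve as pi_j(x).
    qml.AngleEmbedding(np.full(K, x), wires=range(K))
    qml.StronglyEntanglingLayers(weights, wires=range(K))
    return qml.probs(wires=range(K))

def q_mdn_density(y, x, weights, mu_head, sigma_head):
    """p(y|x) = sum_j pi_j(x) * N(y; mu_j(x), sigma_j(x))  (assumed form)."""
    pi = np.asarray(mixture_weights(x, weights))     # (2**K,) mixture weights
    feats = np.array([x, 1.0])
    mu = mu_head @ feats                             # hypothetical linear heads
    sigma = np.exp(sigma_head @ feats)               # exponent keeps widths > 0
    gauss = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(pi @ gauss)

# Exercise the plumbing with random parameters.
rng = np.random.default_rng(0)
weights = rng.normal(size=qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=K))
mu_head = rng.normal(size=(2**K, 2))
sigma_head = 0.1 * rng.normal(size=(2**K, 2))
print(q_mdn_density(0.3, 0.5, weights, mu_head, sigma_head))
```

Note the parameter economy this buys: the circuit's parameter count grows with the number of qubits and layers, while the number of representable mixture components grows as 2^k.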
Related papers
- Quantum and Hybrid Machine-Learning Models for Materials-Science Tasks [0.0]
We design and evaluate quantum machine learning and hybrid quantum-classical models. We predict stacking fault energies and solutes that can ductilize magnesium.
arXiv Detail & Related papers (2025-07-10T20:29:16Z)
- Quantum Latent Diffusion Models [65.16624577812436]
We propose a quantum diffusion model that leverages the established idea of classical latent diffusion models. This involves using a traditional autoencoder to reduce images, followed by operations with variational circuits in the latent space. The results demonstrate an advantage of the quantum version, as evidenced by better metrics for the images it generates.
arXiv Detail & Related papers (2025-01-19T21:24:02Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
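For a sense of why operating on a handful of observable trajectories is cheap compared with the $2^n$-amplitude wavefunction, here is a minimal sketch of the spectral convolution at the heart of an FNO layer, applied to a toy time series of one observable. The mode count and random filter are illustrative choices, not the paper's configuration.

```python
# Core FNO ingredient: filter the lowest Fourier modes of a signal with
# learned complex weights and truncate the rest. Toy sketch, not the
# paper's model.
import numpy as np

def spectral_conv1d(signal, filt):
    """signal: (T,) real time series; filt: (modes,) complex weights."""
    f = np.fft.rfft(signal)
    m = len(filt)
    f[:m] *= filt                        # learned filter on retained modes
    f[m:] = 0.0                          # discard high frequencies
    return np.fft.irfft(f, n=len(signal))

t = np.linspace(0.0, 1.0, 64, endpoint=False)
obs = np.cos(2 * np.pi * 3 * t)          # toy trajectory of one observable
filt = np.random.default_rng(2).normal(size=(8, 2)) @ np.array([1.0, 1j])
print(spectral_conv1d(obs, filt)[:4])
```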
- Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z)
- Learning hard distributions with quantum-enhanced Variational Autoencoders [2.545905720487589]
We introduce a quantum-enhanced VAE (QeVAE) that uses quantum correlations to improve the fidelity over classical VAEs.
We empirically show that the QeVAE outperforms classical models on several classes of quantum states.
Our work paves the way for new applications of quantum generative learning algorithms.
arXiv Detail & Related papers (2023-05-02T16:50:24Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Importance sampling for stochastic quantum simulations [68.8204255655161]
The qDrift protocol builds random product formulas by sampling terms from the Hamiltonian according to their coefficients.
We show that the simulation cost can be reduced at the same accuracy by accounting for each term's individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
arXiv Detail & Related papers (2022-12-12T15:06:32Z)
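As a reference point for the entry above, a minimal sketch of plain qDrift term sampling: term j is drawn with probability |h_j|/lambda, lambda = sum_j |h_j|. The paper's contribution is an importance-sampling refinement of this baseline that also weighs each term's individual simulation cost; that refinement is not modeled here.

```python
# Plain qDrift sampling baseline (Campbell-style random product formula);
# illustrative sketch only.
import numpy as np

def qdrift_schedule(coeffs, time, n_samples, rng=None):
    """Return indices of sampled Hamiltonian terms H_j and the common
    evolution angle of the resulting random product formula."""
    if rng is None:
        rng = np.random.default_rng()
    h = np.abs(np.asarray(coeffs, dtype=float))
    lam = h.sum()                        # lambda = sum_j |h_j|
    terms = rng.choice(len(h), size=n_samples, p=h / lam)
    angle = lam * time / n_samples       # each sampled term evolves this long
    return terms, angle

# Toy Hamiltonian with four terms; heavier terms are sampled more often.
terms, angle = qdrift_schedule([0.9, 0.3, 0.05, 0.05], time=1.0, n_samples=1000)
print(np.bincount(terms) / len(terms), angle)
```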
- On Quantum Circuits for Discrete Graphical Models [1.0965065178451106]
We provide the first method that allows one to provably generate unbiased and independent samples from general discrete factor models.
Our method is compatible with multi-body interactions and its success probability does not depend on the number of variables.
Experiments with quantum simulation as well as actual quantum hardware show that our method can carry out sampling and parameter learning on quantum computers.
arXiv Detail & Related papers (2022-06-01T11:03:51Z)
- Introducing Non-Linearity into Quantum Generative Models [0.0]
We introduce a model that adds non-linear activations via a neural network structure onto the standard Born Machine framework.
We compare our non-linear Quantum Neuron Born Machine (QNBM) to the linear Quantum Circuit Born Machine (QCBM).
We show that while both models can easily learn a trivial uniform probability distribution, the QNBM achieves an error rate almost 3x smaller than the QCBM's.
arXiv Detail & Related papers (2022-05-28T18:59:49Z)
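For context on the linear baseline in the entry above, a minimal QCBM sketch in PennyLane: a parameterized circuit whose Born probabilities over bitstrings form the model distribution. The ansatz and sizes are illustrative assumptions, and the QNBM's classical non-linear activation is not modeled here.

```python
# Minimal Quantum Circuit Born Machine: bitstring x gets probability
# |<x|psi(theta)>|^2. Illustrative ansatz, not the paper's circuit.
import numpy as np
import pennylane as qml

N = 3
dev = qml.device("default.qubit", wires=N)

@qml.qnode(dev)
def born_distribution(theta):
    # The Born rule turns the circuit into a generative model over bitstrings.
    qml.StronglyEntanglingLayers(theta, wires=range(N))
    return qml.probs(wires=range(N))

theta = np.random.default_rng(1).normal(
    size=qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=N))
p = born_distribution(theta)
print(p, float(np.sum(p)))               # a length-8 distribution summing to 1
```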
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their advantages when quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
- Learnability and Complexity of Quantum Samples [26.425493366198207]
Given a quantum circuit, a quantum computer can sample the output distribution exponentially faster in the number of bits than classical computers.
Can we learn the underlying quantum distribution using models whose training parameters scale with n under a fixed training time?
We study four kinds of generative models: Deep Boltzmann Machines (DBM), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM) networks, and Autoregressive GANs, trained on quantum datasets generated by deep random circuits.
arXiv Detail & Related papers (2020-10-22T18:45:25Z)