Quantum Denoising Diffusion Models
- URL: http://arxiv.org/abs/2401.07049v1
- Date: Sat, 13 Jan 2024 11:38:08 GMT
- Title: Quantum Denoising Diffusion Models
- Authors: Michael Kölle, Gerhard Stenzel, Jonas Stein, Sebastian Zielinski,
Björn Ommer, Claudia Linnhoff-Popien
- Abstract summary: We introduce two quantum diffusion models and benchmark their capabilities against their classical counterparts.
Our models surpass classical models with similar parameter counts on the performance metrics FID, SSIM, and PSNR.
- Score: 4.763438526927999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, machine learning models like DALL-E, Craiyon, and Stable
Diffusion have gained significant attention for their ability to generate
high-resolution images from concise descriptions. Concurrently, quantum
computing is showing promising advances, especially with quantum machine
learning which capitalizes on quantum mechanics to meet the increasing
computational requirements of traditional machine learning algorithms. This
paper explores the integration of quantum machine learning and variational
quantum circuits to augment the efficacy of diffusion-based image generation
models. Specifically, we address two challenges of classical diffusion models:
their low sampling speed and the extensive parameter requirements. We introduce
two quantum diffusion models and benchmark their capabilities against their
classical counterparts using MNIST digits, Fashion MNIST, and CIFAR-10. Our
models surpass the classical models with similar parameter counts in terms of
performance metrics FID, SSIM, and PSNR. Moreover, we introduce a consistency
model unitary single sampling architecture that combines the diffusion
procedure into a single step, enabling a fast one-step image generation.
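As background for the multi-step denoising procedure that the abstract contrasts with its one-step sampler, a minimal classical DDPM-style reverse sampling loop might look like the sketch below. This is a generic illustration, not the paper's architecture: the noise schedule is arbitrary, and `predict_noise` is a placeholder standing in for a trained network or variational quantum circuit. A PSNR helper of the kind used in the benchmarks is included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear noise schedule, as in DDPM; the values are illustrative.
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    """Placeholder for a trained denoiser (classical network or quantum
    circuit). Returns zeros so the sketch stays self-contained."""
    return np.zeros_like(x)

# Multi-step ancestral sampling: start from pure Gaussian noise and
# denoise one timestep at a time -- the slow path the paper's one-step
# consistency architecture collapses into a single evaluation.
x = rng.standard_normal((8, 8))  # one 8x8 "image"
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x - coef * eps) / np.sqrt(alphas[t])
    noise = rng.standard_normal(x.shape) if t > 0 else 0.0
    x = mean + np.sqrt(betas[t]) * noise

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, peak]."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Each reverse step subtracts the predicted noise, rescales, and re-injects a small amount of fresh noise, which is why sampling cost grows linearly with the number of timesteps.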
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Hybrid Quantum-Classical Normalizing Flow [5.85475369017678]
We propose a hybrid quantum-classical normalizing flow (HQCNF) model based on parameterized quantum circuits.
We test our model on the image generation problem.
Compared with other quantum generative models, such as quantum generative adversarial networks (QGAN), our model achieves a lower (better) Fréchet inception distance (FID) score.
arXiv Detail & Related papers (2024-05-22T16:37:22Z) - Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z) - Quantum circuit synthesis with diffusion models [0.6554326244334868]
We use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation.
We steer the model to produce desired quantum operations within gate-based quantum circuits.
We envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and theoretical insights into quantum computation.
arXiv Detail & Related papers (2023-11-03T17:17:08Z) - Quantum-Noise-Driven Generative Diffusion Models [1.6385815610837167]
We propose three quantum-noise-driven generative diffusion models that could be experimentally tested on real quantum systems.
The idea is to harness unique quantum features, in particular the non-trivial interplay among coherence, entanglement and noise.
Our results are expected to pave the way for new quantum-inspired or quantum-based generative diffusion algorithms.
arXiv Detail & Related papers (2023-08-23T09:09:32Z) - Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - Learning hard distributions with quantum-enhanced Variational Autoencoders [2.545905720487589]
We introduce a quantum-enhanced VAE (QeVAE) that uses quantum correlations to improve the fidelity over classical VAEs.
We empirically show that the QeVAE outperforms classical models on several classes of quantum states.
Our work paves the way for new applications of quantum generative learning algorithms.
arXiv Detail & Related papers (2023-05-02T16:50:24Z) - Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z) - A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build over a proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z) - Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
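The 4-bit result above rests on mapping full-precision weights onto a small integer grid. As a generic illustration of post-training quantization, not Q-Diffusion's timestep-aware calibration, a symmetric per-tensor round-trip might be sketched as:

```python
import numpy as np

def quantize_ptq(w, n_bits=4):
    """Symmetric uniform post-training quantization of a weight tensor.
    Maps floats onto signed integers in [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (n_bits - 1) - 1           # 7 for 4-bit
    scale = np.max(np.abs(w)) / qmax       # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float64) * scale

w = np.array([0.7, -0.35, 0.1, -0.7])
q, s = quantize_ptq(w)
w_hat = dequantize(q, s)
```

The round-trip error per weight is bounded by half the quantization step `s/2`; methods like the one proposed here additionally calibrate activation ranges across the multi-timestep pipeline, which this sketch does not model.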
arXiv Detail & Related papers (2023-02-08T19:38:59Z) - Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer [55.41644538483948]
We implement a quantum-circuit based generative model to learn and sample the prior distribution of a Generative Adversarial Network.
We train this hybrid algorithm on an ion-trap device based on $^{171}$Yb$^{+}$ ion qubits to generate high-quality images.
arXiv Detail & Related papers (2020-12-07T18:51:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.