VAE-QWGAN: Improving Quantum GANs for High Resolution Image Generation
- URL: http://arxiv.org/abs/2409.10339v1
- Date: Mon, 16 Sep 2024 14:52:22 GMT
- Title: VAE-QWGAN: Improving Quantum GANs for High Resolution Image Generation
- Authors: Aaron Mark Thomas, Sharu Theresa Jose
- Abstract summary: The VAE-QWGAN integrates the VAE decoder and QGAN generator into a single quantum model with shared parameters.
We evaluate the model's performance on the MNIST and Fashion-MNIST datasets and demonstrate improved quality and diversity of the generated images.
- Score: 4.297070083645049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel hybrid quantum generative model, the VAE-QWGAN, which combines the strengths of a classical Variational AutoEncoder (VAE) with a hybrid Quantum Wasserstein Generative Adversarial Network (QWGAN). The VAE-QWGAN integrates the VAE decoder and QGAN generator into a single quantum model with shared parameters, utilizing the VAE's encoder for latent vector sampling during training. To generate new data from the trained model at inference, input latent vectors are sampled from a Gaussian Mixture Model (GMM), learnt on the training latent vectors. This, in turn, enhances the diversity and quality of generated images. We evaluate the model's performance on the MNIST and Fashion-MNIST datasets and demonstrate improved quality and diversity of generated images compared to existing approaches.
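For illustration, the following is a minimal sketch of the inference-time sampling described in the abstract: a Gaussian Mixture Model is fitted on the latent vectors produced by the VAE encoder during training and then sampled to drive the trained generator. The helper names (`fit_latent_gmm`, `generate_images`) and the scikit-learn-based implementation are assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of the inference-time sampling described above:
# fit a Gaussian Mixture Model on the latent vectors produced by the
# (classical) VAE encoder during training, then draw new latents from
# the GMM and pass them to the trained generator/decoder.
# `generate_images` is a placeholder callable standing in for the model.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_latent_gmm(train_latents: np.ndarray, n_components: int = 10) -> GaussianMixture:
    """Fit a GMM on the training latent vectors (shape: [n_samples, latent_dim])."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full", random_state=0)
    gmm.fit(train_latents)
    return gmm

def sample_new_images(gmm: GaussianMixture, generate_images, n_samples: int = 64):
    """Sample latents from the GMM and map them through the trained generator."""
    z, _ = gmm.sample(n_samples)   # latent vectors drawn from the learnt GMM
    return generate_images(z)      # placeholder for the trained generator/decoder
```

Sampling from a GMM fitted to the empirical latent distribution, rather than from a fixed Gaussian prior, is what the abstract credits for the improved diversity and quality of the generated images.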
Related papers
- Quantum Generative Models for Image Generation: Insights from MNIST and MedMNIST [0.0]
We introduce two novel noise strategies: intrinsic quantum-generated noise and a tailored noise scheduling mechanism.
We evaluate our model on MNIST and MedMNIST datasets to examine its feasibility and performance.
arXiv Detail & Related papers (2025-03-30T06:36:22Z)
- Quantum Down Sampling Filter for Variational Auto-encoder [0.504868948270058]
Variational autoencoders (VAEs) are fundamental for generative modeling and image reconstruction.
This study introduces a hybrid model, the quantum variational autoencoder (Q-VAE).
Q-VAE integrates quantum encoding within the encoder while utilizing fully connected layers to extract meaningful representations.
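As a rough illustration of what "quantum encoding within the encoder" can look like in practice, the sketch below angle-embeds classical features into a small PennyLane circuit and returns expectation values that a classical fully connected head could consume. This is a generic hybrid-encoder pattern with assumed layer choices and names, not the Q-VAE circuit from the paper.

```python
# Generic illustration of quantum encoding inside an encoder (not the
# paper's exact Q-VAE circuit): classical features are angle-embedded
# into qubit rotations, processed by a trainable entangling circuit,
# and the resulting expectation values can be fed to classical
# fully connected layers.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_encoder(features, weights):
    qml.AngleEmbedding(features, wires=range(n_qubits))           # data encoding
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # trainable layers
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.random(qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits))
latent_features = quantum_encoder(np.array([0.1, 0.5, 0.9, 0.3]), weights)
```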
arXiv Detail & Related papers (2025-01-09T11:08:55Z)
- Efficient Generative Modeling with Residual Vector Quantization-Based Tokens [5.949779668853557]
ResGen is an efficient RVQ-based discrete diffusion model that generates high-fidelity samples without compromising sampling speed.
We validate the efficacy and generalizability of the proposed method on two challenging tasks: conditional image generation on ImageNet 256x256 and zero-shot text-to-speech synthesis.
As we scale the depth of RVQ, our generative models exhibit enhanced generation fidelity or faster sampling speeds compared to similarly sized baseline models.
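For context, the sketch below shows plain residual vector quantization (RVQ), the tokenization this entry builds on: each stage quantizes the residual left by the previous stage against its own codebook. The codebooks and function names are illustrative placeholders; ResGen's discrete diffusion model itself is not shown.

```python
# Minimal sketch of residual vector quantization (RVQ): each stage
# quantizes the residual left over from the previous stage against its
# own codebook, producing one token index per stage.
import numpy as np

def rvq_encode(x: np.ndarray, codebooks: list[np.ndarray]) -> list[int]:
    """Return one codebook index per RVQ stage for a single vector x."""
    residual = x.copy()
    tokens = []
    for codebook in codebooks:                       # codebook: [codebook_size, dim]
        dists = np.linalg.norm(codebook - residual, axis=1)
        idx = int(np.argmin(dists))                  # nearest code at this stage
        tokens.append(idx)
        residual = residual - codebook[idx]          # pass the residual to the next stage
    return tokens

def rvq_decode(tokens: list[int], codebooks: list[np.ndarray]) -> np.ndarray:
    """Reconstruct the vector as the sum of the selected codes."""
    return sum(cb[t] for t, cb in zip(tokens, codebooks))
```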
arXiv Detail & Related papers (2024-12-13T15:31:17Z)
- A Matrix Product State Model for Simultaneous Classification and Generation [0.8192907805418583]
Quantum machine learning (QML) is a rapidly expanding field that merges the principles of quantum computing with the techniques of machine learning.
Here, we present a novel matrix product state (MPS) model, where the MPS functions as both a classifier and a generator.
Our contributions offer insights into the mechanics of tensor network methods for generation tasks.
arXiv Detail & Related papers (2024-06-25T10:23:36Z)
- Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology for designing quantum hybrid diffusion models.
We introduce two possible hybridization schemes that combine quantum computing's superior generalization with the modularity of classical networks.
arXiv Detail & Related papers (2024-02-25T16:57:51Z)
- Approximately Equivariant Quantum Neural Network for $p4m$ Group Symmetries in Images [30.01160824817612]
This work proposes equivariant Quantum Convolutional Neural Networks (EquivQCNNs) for image classification under planar $p4m$ symmetry.
We present the results tested in different use cases, such as phase detection of the 2D Ising model and classification of the extended MNIST dataset.
arXiv Detail & Related papers (2023-10-03T18:01:02Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that quantum circuit Born machines (QCBMs) are more efficient in the data-limited regime than other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes [23.682509357305406]
Autoencoders and their variants are among the most widely used models in representation learning and generative modeling.
We propose a novel Sparse Gaussian Process Bayesian Autoencoder model in which we impose fully sparse Gaussian Process priors on the latent space of a Bayesian Autoencoder.
arXiv Detail & Related papers (2023-02-09T09:57:51Z)
- Hybrid Quantum-Classical Generative Adversarial Network for High Resolution Image Generation [14.098992977726942]
Quantum machine learning (QML) has received increasing attention due to its potential to outperform classical machine learning methods in various problems.
A subclass of QML methods is quantum generative adversarial networks (QGANs), which have been studied as a quantum counterpart of classical GANs.
Here we integrate classical and quantum techniques to propose a new hybrid quantum-classical GAN framework.
arXiv Detail & Related papers (2022-12-22T11:18:35Z)
- FewGAN: Generating from the Joint Distribution of a Few Images [95.6635227371479]
We introduce FewGAN, a generative model that produces novel, high-quality, and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-07-18T07:11:28Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Diffusion bridges vector quantized Variational AutoEncoders [0.0]
We show that our model is competitive with the autoregressive prior on the mini-ImageNet dataset.
Our framework also extends the standard VQ-VAE and enables end-to-end training.
arXiv Detail & Related papers (2022-02-10T08:38:12Z)
- Controllable and Compositional Generation with Latent-Space Energy-Based Models [60.87740144816278]
Controllable generation is one of the key requirements for successful adoption of deep generative models in real-world applications.
In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes.
By composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
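As a toy illustration of composing energy functions with logical operators, the sketch below follows the usual recipe (conjunction as a sum of energies, disjunction as a soft minimum, negation as a sign flip) and samples the composed energy with Langevin dynamics in a latent space. The attribute energy networks and hyperparameters are placeholders, not the paper's models.

```python
# Toy sketch of composing attribute energies with logical operators in a
# latent space: AND adds energies, OR is a soft minimum (negative
# logsumexp of the negated energies), NOT flips the sign. The attribute
# energy functions passed in are placeholders.
import torch

def e_and(e1, e2):  # z satisfies both attributes
    return lambda z: e1(z) + e2(z)

def e_or(e1, e2):   # z satisfies at least one attribute
    return lambda z: -torch.logsumexp(torch.stack([-e1(z), -e2(z)]), dim=0)

def e_not(e):       # z does not satisfy the attribute
    return lambda z: -e(z)

def langevin_sample(energy, z0, steps=100, step_size=0.01):
    """Draw an approximate sample from p(z) proportional to exp(-energy(z))."""
    z = z0.clone().requires_grad_(True)
    for _ in range(steps):
        grad = torch.autograd.grad(energy(z).sum(), z)[0]
        z = (z - 0.5 * step_size * grad
             + torch.randn_like(z) * step_size ** 0.5).detach().requires_grad_(True)
    return z.detach()
```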
arXiv Detail & Related papers (2021-10-21T03:31:45Z)
- Quantum Machine Learning with SQUID [64.53556573827525]
We present the Scaled QUantum IDentifier (SQUID), an open-source framework for exploring hybrid Quantum-Classical algorithms for classification problems.
We provide examples of using SQUID in a standard binary classification problem from the popular MNIST dataset.
arXiv Detail & Related papers (2021-04-30T21:34:11Z)