Shadow-Frugal Expectation-Value-Sampling Variational Quantum Generative Model
- URL: http://arxiv.org/abs/2412.17039v1
- Date: Sun, 22 Dec 2024 14:35:46 GMT
- Title: Shadow-Frugal Expectation-Value-Sampling Variational Quantum Generative Model
- Authors: Kevin Shen, Andrii Kurkin, Adrián Pérez Salinas, Elvira Shishenina, Vedran Dunjko, Hao Wang
- Abstract summary: We introduce an Observable-Tunable Expectation Value Sampler (OT-EVS). The resulting model provides enhanced expressivity compared to the standard EVS. We propose an adversarial training method adapted to the needs of OT-EVS.
- Score: 4.509315580235968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Expectation Value Samplers (EVSs) are quantum-computer-based generative models that can learn high-dimensional continuous distributions by measuring the expectation values of parameterized quantum circuits with respect to selected observables. However, such models may require unaffordable quantum resources for good performance. This work explores the impact of observable choices on the EVS. We introduce an Observable-Tunable Expectation Value Sampler (OT-EVS). The resulting model provides enhanced expressivity compared to the standard EVS. By restricting our selectable observables, it is possible to use the classical shadow measurement scheme to reduce the sample complexity of our algorithm. We further propose an adversarial training method adapted to the needs of OT-EVS. This training prioritizes cheap classical updates of the observables, minimizing the more costly updates of the quantum circuit parameters. Numerical experiments confirm our model's expressivity and sample-efficiency advantages over previous designs, using an original simulation technique for correlated shot noise. We envision that our proposal will encourage the exploration of continuous generative models running with few quantum resources.
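To make the mechanism concrete, below is a minimal NumPy sketch of expectation-value sampling with a tunable observable, modeled here (our assumption, not the authors' implementation) as a classical affine map (A, b) over a fixed set of Pauli expectations; the 2-qubit circuit and all names are illustrative.

```python
import numpy as np

# Pauli matrices and a single-qubit RY rotation (illustrative toy circuit).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def circuit_state(theta, z):
    """2-qubit parameterized state; the latent variable z enters via the angles."""
    state = np.kron(ry(theta[0] + z), ry(theta[1] - z)) @ np.array([1.0, 0, 0, 0])
    return np.eye(4)[[0, 1, 3, 2]] @ state  # CNOT entangler

def ot_evs_sample(theta, A, b, z):
    """One EVS output: Pauli expectations, then a classically tunable affine map."""
    psi = circuit_state(theta, z)
    paulis = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]  # fixed observable set
    # On hardware these expectations would be *estimated* from measurements,
    # e.g. via classical shadows; here we compute them exactly.
    e = np.array([np.real(psi.conj() @ P @ psi) for P in paulis])
    return A @ e + b  # OT-EVS twist: A, b are cheap classical parameters

rng = np.random.default_rng(0)
theta = rng.normal(size=2)
A, b = rng.normal(size=(2, 3)), rng.normal(size=2)
xs = np.array([ot_evs_sample(theta, A, b, z)
               for z in rng.uniform(-np.pi, np.pi, size=500)])
print(xs.shape)  # (500, 2): a 2-D continuous distribution induced by the circuit
```

Because A and b act classically on the measured expectations, they can be updated without re-running the circuit, which is the cost asymmetry the paper's adversarial training exploits.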
Related papers
- Quantum model reduction for continuous-time quantum filters [0.0]
In many applications only part of the information contained in the filter's state is actually needed to reconstruct the target observable quantities.
We propose a systematic method to find, when possible, reduced-order quantum filters that are capable of exactly reproducing the evolution of expectation values of interest.
arXiv Detail & Related papers (2025-01-23T17:57:52Z)
- Quantum Latent Diffusion Models [65.16624577812436]
We propose a potential version of a quantum diffusion model that leverages the established idea of classical latent diffusion models.
This involves using a traditional autoencoder to reduce images, followed by operations with variational circuits in the latent space.
The results demonstrate an advantage of the quantum version, as evidenced by better metrics for its generated images.
arXiv Detail & Related papers (2025-01-19T21:24:02Z)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce UNCURL, an adaptive task-aware pruning technique that reduces the number of experts per MoE layer offline, after training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
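UNCURL's exact criterion is described in the paper; as a generic illustration of offline, task-aware expert pruning, the sketch below retains the experts most often selected by the router on a task's data (the usage-count criterion and all names are assumptions of ours, not UNCURL itself).

```python
import numpy as np

def prune_experts(router_logits, keep):
    """Keep the `keep` experts most frequently selected on a task's data.

    router_logits: (num_tokens, num_experts) gate scores collected offline.
    Returns sorted indices of retained experts (generic usage-based rule).
    """
    top1 = router_logits.argmax(axis=1)                  # expert chosen per token
    counts = np.bincount(top1, minlength=router_logits.shape[1])
    return np.sort(np.argsort(counts)[::-1][:keep])      # most-used experts

rng = np.random.default_rng(1)
logits = rng.normal(size=(10_000, 8))                    # 8 experts in this layer
print(prune_experts(logits, keep=4))                     # indices of 4 kept experts
```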
- Parameterized quantum circuits as universal generative models for continuous multivariate distributions [1.118478900782898]
Parameterized quantum circuits have been extensively used as the basis for machine learning models in regression, classification, and generative tasks.
In this work, we elucidate expectation value sampling-based models and prove the universality of such variational quantum algorithms.
Our results may help guide the design of future quantum circuits in generative modelling tasks.
arXiv Detail & Related papers (2024-02-15T10:08:31Z)
- Adaptive Conditional Quantile Neural Processes [9.066817971329899]
Conditional Quantile Neural Processes (CQNPs) are a new member of the neural processes family.
We introduce an extension of quantile regression where the model learns to focus on estimating informative quantiles.
Experiments with real and synthetic datasets demonstrate substantial improvements in predictive performance.
arXiv Detail & Related papers (2023-05-30T06:19:19Z)
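Quantile regression, which CQNPs build on, fits the τ-th conditional quantile by minimizing the pinball loss; here is a standard NumPy rendering of that loss (an illustration of the underlying idea, not the CQNP model itself):

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Pinball (quantile) loss: minimized in expectation by the tau-th quantile."""
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Sanity check: the constant minimizing the loss approximates the sample quantile.
rng = np.random.default_rng(2)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
best = grid[np.argmin([pinball_loss(y, c, 0.9) for c in grid])]
print(best, np.quantile(y, 0.9))  # both near the N(0,1) 0.9-quantile, ~1.28
```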
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that quantum circuit Born machines (QCBMs) are more efficient in the data-limited regime than other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
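A QCBM draws bitstrings with probability given by the Born rule, |⟨x|ψ(θ)⟩|²; the toy statevector simulation below shows only that core sampling mechanism (the 2-qubit circuit is an illustrative assumption, not the paper's model):

```python
import numpy as np

def qcbm_sample(theta, n_samples, rng):
    """Toy QCBM on 2 qubits: RY layer + CNOT, then Born-rule sampling."""
    ry = lambda t: np.array([[np.cos(t/2), -np.sin(t/2)],
                             [np.sin(t/2),  np.cos(t/2)]])
    psi = np.kron(ry(theta[0]), ry(theta[1])) @ np.array([1.0, 0, 0, 0])
    psi = np.eye(4)[[0, 1, 3, 2]] @ psi                  # CNOT entangler
    p = np.abs(psi) ** 2                                 # Born probabilities
    return rng.choice(4, size=n_samples, p=p)            # bitstrings 00..11 as 0..3

rng = np.random.default_rng(3)
print(np.bincount(qcbm_sample([0.7, 1.9], 1000, rng), minlength=4))
```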
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
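The generalization probe this line of work uses can be summarized as counting generated samples that are valid, unique, and absent from the training set; a schematic rendering (the validity rule below is hypothetical, not the paper's exact metric):

```python
def generalization_rate(generated, train_set, is_valid):
    """Fraction of queries that are unseen, unique, and valid samples."""
    unseen_unique = {g for g in generated if g not in train_set and is_valid(g)}
    return len(unseen_unique) / len(generated)

train = {"0011", "0101"}
gen = ["0011", "0110", "0110", "1001", "1111"]
# Toy validity rule (hypothetical): exactly two 1-bits.
print(generalization_rate(gen, train, lambda s: s.count("1") == 2))  # 0.4
```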
- Zero-shot Adversarial Quantization [11.722728148523366]
We propose a zero-shot adversarial quantization (ZAQ) framework, facilitating effective discrepancy estimation and knowledge transfer.
This is achieved by a novel two-level discrepancy modeling to drive a generator to synthesize informative and diverse data examples.
We conduct extensive experiments on three fundamental vision tasks, demonstrating the superiority of ZAQ over the strong zero-shot baselines.
arXiv Detail & Related papers (2021-03-29T01:33:34Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this gradient-informed sampler outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
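The sampler in this entry uses the gradient of the log-probability to propose which bit to flip, with a Metropolis-Hastings correction; a compact sketch of that proposal step as we understand it, on an illustrative Ising-style energy (all names are ours):

```python
import numpy as np

def gwg_step(x, W, b, rng):
    """One gradient-informed flip for f(x) = 0.5 s^T W s + b^T s, s = 2x - 1."""
    f = lambda s: 0.5 * s @ W @ s + b @ s
    s = 2.0 * x - 1.0
    grad_x = 2.0 * (W @ s + b)                 # d f / d x
    d = -(2.0 * x - 1.0) * grad_x              # 1st-order estimate of f(flip_i) - f(x)
    q = np.exp(d / 2.0); q /= q.sum()          # proposal over which bit to flip
    i = rng.choice(len(x), p=q)
    x_new = x.copy(); x_new[i] = 1.0 - x_new[i]
    s_new = 2.0 * x_new - 1.0
    d_new = -(2.0 * x_new - 1.0) * (2.0 * (W @ s_new + b))
    q_new = np.exp(d_new / 2.0); q_new /= q_new.sum()
    accept = np.exp(f(s_new) - f(s)) * q_new[i] / q[i]   # MH correction
    return x_new if rng.random() < min(1.0, accept) else x

rng = np.random.default_rng(4)
n = 16
W = rng.normal(scale=0.1, size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(scale=0.1, size=n)
x = (rng.random(n) < 0.5).astype(float)
for _ in range(1000):
    x = gwg_step(x, W, b, rng)
print(x)
```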
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
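Directly optimizing an expected reward over discrete structures typically relies on the score-function (REINFORCE) estimator, ∇θ E[R] = E[R(x) ∇θ log pθ(x)]; a minimal categorical example of that estimator (the molecule/Python-expression tasks themselves are in the paper, and the reward here is a stand-in):

```python
import numpy as np

def reinforce_step(logits, reward_fn, rng, lr=0.1, n=256):
    """One score-function update for a categorical generator p(x) = softmax(logits)."""
    p = np.exp(logits - logits.max()); p /= p.sum()
    xs = rng.choice(len(p), size=n, p=p)
    r = np.array([reward_fn(x) for x in xs])
    baseline = r.mean()                        # simple variance reduction
    grad = np.zeros_like(logits)
    for x, rx in zip(xs, r):
        g = -p.copy(); g[x] += 1.0             # grad of log softmax at sample x
        grad += (rx - baseline) * g
    return logits + lr * grad / n              # ascend the expected reward

rng = np.random.default_rng(5)
logits = np.zeros(5)
target = 3                                     # hypothetical "goal" structure
for _ in range(200):
    logits = reinforce_step(logits, lambda x: float(x == target), rng)
print(np.argmax(logits))  # 3: probability mass concentrates on the rewarded structure
```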
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.