$\zeta$-QVAE: A Quantum Variational Autoencoder utilizing Regularized Mixed-state Latent Representations
- URL: http://arxiv.org/abs/2402.17749v2
- Date: Fri, 2 Aug 2024 19:13:17 GMT
- Title: $\zeta$-QVAE: A Quantum Variational Autoencoder utilizing Regularized Mixed-state Latent Representations
- Authors: Gaoyuan Wang, Jonathan Warrell, Prashant S. Emani, Mark Gerstein
- Abstract summary: A major challenge in near-term quantum computing is its application to large real-world datasets due to scarce quantum hardware resources.
We present a fully quantum framework, $\zeta$-QVAE, which encompasses all the capabilities of classical VAEs.
Our results consistently indicate that $\zeta$-QVAE exhibits similar or better performance compared to matched classical models.
- Score: 1.0687104237121408
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A major challenge in near-term quantum computing is its application to large real-world datasets due to scarce quantum hardware resources. One approach to enabling tractable quantum models for such datasets involves compressing the original data to manageable dimensions while still representing essential information for downstream analysis. In classical machine learning, variational autoencoders (VAEs) facilitate efficient data compression, representation learning for subsequent tasks, and novel data generation. However, no model has been proposed that exactly captures all of these features for direct application to quantum data on quantum computers. Some existing quantum models for data compression lack regularization of latent representations, thus preventing direct use for generation and control of generalization. Others are hybrid models with only some internal quantum components, impeding direct training on quantum data. To bridge this gap, we present a fully quantum framework, $\zeta$-QVAE, which encompasses all the capabilities of classical VAEs and can be directly applied for both classical and quantum data compression. Our model utilizes regularized mixed states to attain optimal latent representations. It accommodates various divergences for reconstruction and regularization. Furthermore, by accommodating mixed states at every stage, it can utilize the full-data density matrix and allow for a "global" training objective. Doing so, in turn, makes efficient optimization possible and has potential implications for private and federated learning. In addition to exploring the theoretical properties of $\zeta$-QVAE, we demonstrate its performance on representative genomics and synthetic data. Our results consistently indicate that $\zeta$-QVAE exhibits similar or better performance compared to matched classical models.
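To make the kind of objective described above concrete, the following is a minimal numerical sketch (not the paper's implementation) of a VAE-style loss on density matrices: a fidelity-based reconstruction term plus a quantum relative-entropy regularizer that pulls the mixed latent state toward a maximally mixed prior. The particular divergence choices, the random stand-ins for the encoder and decoder channels, and the weight `beta` are illustrative assumptions; the paper accommodates several divergences rather than fixing these.

```python
# Minimal sketch of a mixed-state VAE-style objective, assuming a fidelity
# reconstruction term and a quantum relative-entropy regularizer against a
# maximally mixed latent prior. Encoder/decoder outputs are random stand-ins.
import numpy as np
from scipy.linalg import sqrtm, logm

def random_density_matrix(dim, rng):
    """Sample a full-rank mixed state via a Wishart-like construction A A^dagger / Tr."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def relative_entropy(rho, sigma):
    """Quantum relative entropy S(rho || sigma) = Tr[rho (log rho - log sigma)]."""
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

rng = np.random.default_rng(0)
data_dim, latent_dim = 8, 4

rho_in = random_density_matrix(data_dim, rng)        # input data density matrix
rho_latent = random_density_matrix(latent_dim, rng)  # stand-in for the encoder's mixed latent state
rho_rec = random_density_matrix(data_dim, rng)       # stand-in for the decoder's reconstruction
prior = np.eye(latent_dim) / latent_dim              # maximally mixed latent prior

beta = 0.1  # regularization weight, analogous to beta in a beta-VAE
loss = (1.0 - fidelity(rho_in, rho_rec)) + beta * relative_entropy(rho_latent, prior)
print(f"reconstruction + regularization loss: {loss:.4f}")
```

Regularizing the latent density matrix toward a fixed prior plays the role of the KL term in a classical VAE, which is what allows the latent state to be sampled for generation and gives a handle on generalization.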
Related papers
- Training quantum machine learning models on cloud without uploading the data [0.0]
We propose a method that runs the parameterized quantum circuits before encoding the input data.
This enables a dataset owner to train machine learning models on quantum cloud platforms.
It is also capable of encoding a vast amount of data effectively at a later time using classical computations.
arXiv Detail & Related papers (2024-09-06T20:14:52Z)
- Quantum Transfer Learning for MNIST Classification Using a Hybrid Quantum-Classical Approach [0.0]
This research explores the integration of quantum computing with classical machine learning for image classification tasks.
We propose a hybrid quantum-classical approach that leverages the strengths of both paradigms.
The experimental results indicate that while the hybrid model demonstrates the feasibility of integrating quantum computing with classical techniques, the accuracy of the final model, trained on quantum outcomes, is currently lower than the classical model trained on compressed features.
arXiv Detail & Related papers (2024-08-05T22:16:27Z)
- Disentangling Quantum and Classical Contributions in Hybrid Quantum Machine Learning Architectures [4.646930308096446]
Hybrid transfer learning solutions have been developed, merging pre-trained classical models with quantum circuits.
It remains unclear how much each component -- classical and quantum -- contributes to the model's results.
We propose a novel hybrid architecture: instead of utilizing a pre-trained network for compression, we employ an autoencoder to derive a compressed version of the input data.
arXiv Detail & Related papers (2023-11-09T18:13:50Z)
- QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm that combines the strengths of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- Variational quantum regression algorithm with encoded data structure [0.21756081703276003]
We construct a quantum regression algorithm wherein the quantum state directly encodes the classical data table.
We show explicitly, for the first time, how the linkage of the classical data structure can be exploited directly through quantum subroutines.
arXiv Detail & Related papers (2023-07-07T00:30:16Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that quantum circuit Born machines (QCBMs) are more efficient in the data-limited regime than other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Entangled Datasets for Quantum Machine Learning [0.0]
We argue that one should instead employ quantum datasets composed of quantum states.
We show how a quantum neural network can be trained to generate the states in the NTangled dataset.
We also consider an alternative entanglement-based dataset, which is scalable and is composed of states prepared by quantum circuits.
arXiv Detail & Related papers (2021-09-08T02:20:13Z)
- Nearest Centroid Classification on a Trapped Ion Quantum Computer [57.5195654107363]
We design a quantum Nearest Centroid classifier, using techniques for efficiently loading classical data into quantum states and performing distance estimations.
We experimentally demonstrate it on an 11-qubit trapped-ion quantum machine, matching the accuracy of classical nearest centroid classifiers on the MNIST handwritten digits dataset and achieving up to 100% accuracy for 8-dimensional synthetic data.
arXiv Detail & Related papers (2020-12-08T01:10:30Z)