Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders
- URL: http://arxiv.org/abs/2409.03350v1
- Date: Thu, 5 Sep 2024 08:51:20 GMT
- Title: Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders
- Authors: G. Maragkopoulos, A. Mandilara, A. Tsili, D. Syvridis
- Abstract summary: We propose an alternative method which reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that comes after.
This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variational Quantum Circuits (VQC) lie at the forefront of quantum machine learning research. Still, the use of quantum networks for real data processing remains challenging as the number of available qubits cannot accommodate a large dimensionality of data --if the usual angle encoding scenario is used. To achieve dimensionality reduction, Principal Component Analysis is routinely applied as a pre-processing method before the embedding of the classical features on qubits. In this work, we propose an alternative method which reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that comes after. This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension. At a second step, we propose a quantum inspired classical autoencoder model which can be used to encode information in low latent spaces. The power of our proposed models is exhibited via numerical tests. We show that our targeted dimensionality reduction method considerably boosts VQC's performance and we also identify cases for which the second model outperforms classical linear autoencoders in terms of reconstruction loss.
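The angle-encoding bottleneck the abstract describes can be made concrete with a minimal, library-free sketch (function names are illustrative; PCA or the paper's targeted reduction would run before this step): each classical feature becomes one RY rotation angle, so n features require n qubits.

```python
import math

def ry_state(theta):
    """Single-qubit state RY(theta)|0> = [cos(theta/2), sin(theta/2)]."""
    return [math.cos(theta / 2), math.sin(theta / 2)]

def kron(a, b):
    """Kronecker product of two state vectors."""
    return [x * y for x in a for y in b]

def angle_encode(features):
    """Angle encoding: one classical feature per qubit as an RY angle.

    The product state over n qubits has 2**n amplitudes but only
    n independent parameters -- hence n features need n qubits,
    which is why dimensionality reduction is applied first.
    """
    state = [1.0]
    for x in features:
        state = kron(state, ry_state(x))
    return state

state = angle_encode([0.3, 1.2, 2.0])  # 3 features -> 3 qubits
print(len(state))                      # 8 amplitudes (2**3)
print(sum(a * a for a in state))       # ~1.0 (normalized)
```

With the usual angle encoding, a 64-dimensional input would already need 64 qubits, which motivates reducing the data to a handful of features before this step.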
Related papers
- Integrated Encoding and Quantization to Enhance Quanvolutional Neural Networks [2.789685107745028]
We propose two ways to enhance the efficiency of quanvolutional models.
First, we propose a flexible data quantization approach with memoization, applicable to any encoding method.
Second, we introduce a new integrated encoding strategy, which combines the encoding and processing steps in a single circuit.
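The memoization idea behind the first proposal can be illustrated with a small sketch (the circuit below is a stand-in placeholder, not the paper's quanvolutional circuit, and the quantization grid is an assumption): snapping inputs onto a coarse grid lets near-duplicate patches reuse a cached circuit evaluation.

```python
from functools import lru_cache

LEVELS = 4  # quantization levels per feature (illustrative choice)

def quantize(x, levels=LEVELS):
    """Snap a feature in [0, 1] onto a grid of `levels` discrete values."""
    return min(int(x * levels), levels - 1) / (levels - 1)

@lru_cache(maxsize=None)
def run_circuit(patch):
    """Stand-in for an expensive quanvolutional circuit evaluation;
    memoization means each distinct quantized patch runs only once."""
    return sum(patch) / len(patch)  # placeholder expectation value

patches = [(0.10, 0.90), (0.12, 0.88), (0.70, 0.20)]
outputs = [run_circuit(tuple(quantize(x) for x in p)) for p in patches]
print(run_circuit.cache_info().misses)  # 2: first two patches share a grid point
```

Coarser grids save more circuit evaluations at the cost of input fidelity, which is the trade-off the quantization approach navigates.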
arXiv Detail & Related papers (2024-10-08T07:57:13Z)
- Scalable quantum dynamics compilation via quantum machine learning [7.31922231703204]
Variational quantum compilation (VQC) methods employ variational optimization to reduce gate costs while maintaining high accuracy.
We show that our approach exceeds state-of-the-art compilation results in both system size and accuracy in one dimension ($1$D).
For the first time, we extend VQC to systems on two-dimensional (2D) strips with a quasi-1D treatment, demonstrating a significant resource advantage over standard Trotterization methods.
arXiv Detail & Related papers (2024-09-24T18:00:00Z)
- Patch-Based End-to-End Quantum Learning Network for Reduction and Classification of Classical Data [0.22099217573031676]
In the noisy intermediate scale quantum (NISQ) era, the control over the qubits is limited due to errors caused by quantum decoherence, crosstalk, and imperfect calibration.
It is necessary to reduce the size of the large-scale classical data, such as images, when they are to be processed by quantum networks.
In this paper, a dynamic patch-based quantum-domain data reduction network with a classical attention mechanism is proposed, so that such classical pre-reduction can be avoided.
arXiv Detail & Related papers (2024-09-23T16:58:02Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Quantum Transfer Learning for MNIST Classification Using a Hybrid Quantum-Classical Approach [0.0]
This research explores the integration of quantum computing with classical machine learning for image classification tasks.
We propose a hybrid quantum-classical approach that leverages the strengths of both paradigms.
The experimental results indicate that while the hybrid model demonstrates the feasibility of integrating quantum computing with classical techniques, the accuracy of the final model, trained on quantum outcomes, is currently lower than the classical model trained on compressed features.
arXiv Detail & Related papers (2024-08-05T22:16:27Z)
- Guided Quantum Compression for Higgs Identification [0.0]
Quantum machine learning provides a fundamentally novel and promising approach to analyzing data.
We show that using a classical auto-encoder as an independent preprocessing step can significantly decrease the classification performance of a quantum machine learning algorithm.
We design an architecture that unifies the preprocessing and quantum classification algorithms into a single trainable model: the guided quantum compression model.
arXiv Detail & Related papers (2024-02-14T19:01:51Z)
- Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC.
arXiv Detail & Related papers (2023-06-09T09:42:21Z)
- Classical-to-Quantum Transfer Learning Facilitates Machine Learning with Variational Quantum Circuit [62.55763504085508]
We prove that a classical-to-quantum transfer learning architecture using a Variational Quantum Circuit (VQC) improves the representation and generalization (estimation error) capabilities of the VQC model.
We show that the architecture of classical-to-quantum transfer learning leverages pre-trained classical generative AI models, making it easier to find the optimal parameters for the VQC in the training stage.
arXiv Detail & Related papers (2023-05-18T03:08:18Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
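The single-qubit re-uploading structure can be sketched without any SDK (RY-only layers are a deliberate simplification here; in practice general single-qubit rotations are used so that successive layers do not collapse into one rotation): trainable rotations alternate with rotations that re-inject the datum.

```python
import math

def ry(theta):
    """2x2 (real) RY rotation matrix."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def reupload_classifier(x, thetas):
    """Single-qubit data re-uploading: alternate data rotations RY(x)
    with trainable rotations RY(theta_l), then measure P(|0>)."""
    state = [1.0, 0.0]  # start in |0>
    for theta in thetas:
        state = apply(ry(x), state)      # re-upload the datum
        state = apply(ry(theta), state)  # trainable layer
    return state[0] ** 2  # probability of measuring |0>

p = reupload_classifier(x=0.5, thetas=[0.1, -0.3, 0.7])
print(round(p, 4))  # 0.2919, since RY angles compose additively here
```

Training tunes the thetas so that P(|0>) separates the classes; with general rotations instead of RY only, depth adds genuine expressivity rather than a single accumulated angle.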
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Learning Representations for CSI Adaptive Quantization and Feedback [51.14360605938647]
We propose an efficient method for adaptive quantization and feedback in frequency division duplexing systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We recommend two different methods: one based on a post training quantization and the second one in which the codebook is found during the training of the AE.
arXiv Detail & Related papers (2022-07-13T08:52:13Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain an 81.29% top-1 accuracy using DeiT-B model on ImageNet dataset with about 8-bit quantization.
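The core of post-training quantization is simple to sketch (this is a generic symmetric int8 scheme for illustration, not the paper's transformer-specific algorithm): weights are scaled into the signed 8-bit range after training, with no retraining required.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization to signed 8-bit integers."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.3, -1.2, 0.055, 0.87]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(weights, recovered))
print(q)              # integers in [-127, 127]
print(err <= scale / 2 + 1e-12)  # True: rounding error bounded by scale/2
```

Storing 8-bit integers plus one scale per tensor cuts memory roughly 4x versus float32, which is the storage saving such methods target.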
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.