Evaluating the Performance of Some Local Optimizers for Variational
Quantum Classifiers
- URL: http://arxiv.org/abs/2102.08949v1
- Date: Wed, 17 Feb 2021 16:31:42 GMT
- Title: Evaluating the Performance of Some Local Optimizers for Variational
Quantum Classifiers
- Authors: Nisheeth Joshi, Pragya Katyayan, Syed Afroz Ahmed
- Abstract summary: We studied the performance and role of local optimizers in quantum variational circuits.
Results show that machine learning on noisy intermediate-scale quantum machines can produce results comparable to those on classical machines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we have studied the performance and role of local optimizers
in quantum variational circuits. We studied the performance of the two most
popular optimizers and compared their results with some popular classical
machine learning algorithms. The classical algorithms we used in our study are
support vector machine (SVM), gradient boosting (GB), and random forest (RF).
These were compared with a variational quantum classifier (VQC) using two
local optimizers, viz. AQGD and COBYLA. For experimenting with the VQC, IBM
Quantum Experience and IBM Qiskit were used, while for the classical machine
learning models, scikit-learn was used. The results show that machine learning
on noisy intermediate-scale quantum machines can produce results comparable to
those on classical machines. For our experiments, we used a popular restaurant
sentiment analysis dataset. We extracted features from this dataset and then
applied PCA to reduce the feature set to 5 features. The quantum ML models were
trained for 100 and 150 epochs using the EfficientSU2 variational circuit.
Overall, four quantum ML models and three classical ML models were trained. The
performance of the trained models was evaluated using standard evaluation
measures, viz. accuracy, precision, recall, and F-score. In all cases, the
AQGD-based model trained for 100 epochs performed better than all other models.
It produced an accuracy of 77% and an F-score of 0.785, the highest across all
the trained models.
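As a rough illustration of the workflow described above, the following Python sketch wires the same components together: PCA down to 5 features, the three scikit-learn baselines, and a Qiskit VQC with an EfficientSU2 ansatz trained by AQGD or COBYLA. Random data stands in for the restaurant sentiment features, the ZZFeatureMap choice and all hyperparameters are assumptions, and the import paths follow recent qiskit-machine-learning releases rather than the older Qiskit Aqua interface the paper would have used.

```python
# Minimal sketch of the paper's experimental pipeline, not its exact code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

from qiskit.circuit.library import EfficientSU2, ZZFeatureMap
from qiskit.algorithms.optimizers import AQGD, COBYLA  # module path varies across Qiskit versions
from qiskit_machine_learning.algorithms import VQC

rng = np.random.default_rng(seed=0)
X = rng.random((100, 20))             # placeholder for extracted text features
y = rng.integers(0, 2, size=100)      # placeholder sentiment labels (0/1)

# PCA reduces the extracted features to 5 dimensions, as in the paper.
X5 = PCA(n_components=5).fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X5, y, test_size=0.3, random_state=0)

# Classical baselines: SVM, gradient boosting, random forest (scikit-learn).
models = [SVC(), GradientBoostingClassifier(), RandomForestClassifier()]

# VQC: 5-qubit feature map + EfficientSU2 ansatz; swap in COBYLA(maxiter=100)
# for the second optimizer studied in the paper.
models.append(VQC(
    feature_map=ZZFeatureMap(feature_dimension=5),
    ansatz=EfficientSU2(num_qubits=5),
    optimizer=AQGD(maxiter=100),
))

# Evaluate every model with accuracy, precision, recall, and F-score.
for model in models:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    p, r, f, _ = precision_recall_fscore_support(y_test, pred, average="binary")
    print(type(model).__name__, accuracy_score(y_test, pred), p, r, f)
```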
Related papers
- Quantum Active Learning [3.3202982522589934]
Training a quantum neural network typically demands a substantial labeled training set for supervised learning.
QAL effectively trains the model, achieving performance comparable to that on fully labeled datasets.
Through various numerical experiments, we elucidate the negative result that QAL can be overtaken by a random-sampling baseline.
arXiv Detail & Related papers (2024-05-28T14:39:54Z)
- Quantum Machine Learning for Credit Scoring [0.0]
We explore the use of quantum machine learning (QML) applied to credit scoring for small and medium-sized enterprises (SMEs).
A quantum/classical hybrid approach has been used with several models, activation functions, epochs and other parameters.
We observe significantly more efficient training for the quantum models than for the classical models: the quantum model reaches comparable prediction performance after 350 epochs versus 3500 epochs.
arXiv Detail & Related papers (2023-08-07T13:27:30Z)
- A Framework for Demonstrating Practical Quantum Advantage: Racing Quantum against Classical Generative Models [62.997667081978825]
We build on a previously proposed framework for evaluating the generalization performance of generative models.
We establish the first comparative race towards practical quantum advantage (PQA) between classical and quantum generative models.
Our results suggest that QCBMs are more efficient in the data-limited regime than the other state-of-the-art classical generative models.
arXiv Detail & Related papers (2023-03-27T22:48:28Z)
- Improving Convergence for Quantum Variational Classifiers using Weight Re-Mapping [60.086820254217336]
In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
We introduce weight re-mapping for VQCs to unambiguously map the weights to an interval of length $2\pi$.
We demonstrate that weight re-mapping increased test accuracy for the Wine dataset by $10\%$ over using unmodified weights.
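The re-mapping idea fits in a few lines; the tanh-based function below is only one plausible choice, not a claim about the specific mappings that paper evaluates.

```python
import numpy as np

def remap_weights(w: np.ndarray) -> np.ndarray:
    """Map unconstrained VQC weights into an interval of length 2*pi.

    pi * tanh(w) lies in (-pi, pi), an interval of length 2*pi; the
    cited paper compares several such mapping functions, which this
    sketch does not reproduce.
    """
    return np.pi * np.tanh(w)
```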
arXiv Detail & Related papers (2022-12-22T13:23:19Z)
- Towards a learning-based performance modeling for accelerating Deep Neural Networks [1.1549572298362785]
We begin an investigation of predictive models based on machine learning techniques to optimize Convolutional Neural Networks (CNNs).
Preliminary experiments on a Midgard-based ARM Mali GPU show that our predictive model outperforms the convolution operators manually selected by the library.
arXiv Detail & Related papers (2022-12-09T18:28:07Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Binary classifiers for noisy datasets: a comparative study of existing quantum machine learning frameworks and some new approaches [0.0]
We apply Quantum Machine Learning frameworks to improve binary classification.
Noisy datasets are prevalent in financial applications.
The new models exhibit better learning characteristics under asymmetrical noise in the dataset.
arXiv Detail & Related papers (2021-11-05T10:29:05Z)
- When Liebig's Barrel Meets Facial Landmark Detection: A Practical Model [87.25037167380522]
We propose a model that is accurate, robust, efficient, generalizable, and end-to-end trainable.
In order to achieve a better accuracy, we propose two lightweight modules.
DQInit dynamically initializes the queries of the decoder from the inputs, enabling the model to achieve accuracy as good as models with multiple decoder layers.
QAMem is designed to enhance the discriminative ability of queries on low-resolution feature maps by assigning separate memory values to each query rather than a shared one.
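A toy illustration of that per-query memory idea follows; the shapes and lookup are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

Q, H, W, D = 8, 16, 16, 32               # queries, feature-map height/width, channels
shared_mem = np.zeros((H, W, D))         # one memory value per location, shared by all queries
per_query_mem = np.zeros((Q, H, W, D))   # QAMem-style: a separate memory value per query

def read_memory(q: int, y: int, x: int) -> np.ndarray:
    # Each query reads its own slot instead of the shared one,
    # letting queries specialize on low-resolution feature maps.
    return per_query_mem[q, y, x]
```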
arXiv Detail & Related papers (2021-05-27T13:51:42Z)
- GEO: Enhancing Combinatorial Optimization with Classical and Quantum Generative Models [62.997667081978825]
We introduce a new framework that leverages machine learning models known as generative models to solve optimization problems.
We focus on a quantum-inspired version of GEO relying on tensor-network Born machines.
We show its superior performance when the goal is to find the best minimum given a fixed budget for the number of function calls.
arXiv Detail & Related papers (2021-01-15T18:18:38Z)
- Adiabatic Quantum Linear Regression [0.0]
We present an adiabatic quantum computing approach for training a linear regression model.
Our analysis shows that the quantum approach attains up to 2.8x speedup over the classical approach on larger datasets.
arXiv Detail & Related papers (2020-08-05T20:40:41Z)
- APQ: Joint Search for Network Architecture, Pruning and Quantization Policy [49.3037538647714]
We present APQ for efficient deep learning inference on resource-constrained hardware.
Unlike previous methods that separately search the neural architecture, pruning policy, and quantization policy, we optimize them in a joint manner.
With the same accuracy, APQ reduces the latency/energy by 2x/1.3x over MobileNetV2+HAQ.
arXiv Detail & Related papers (2020-06-15T16:09:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.