Application of Quantum Pre-Processing Filter for Binary Image
Classification with Small Samples
- URL: http://arxiv.org/abs/2308.14930v1
- Date: Mon, 28 Aug 2023 23:08:32 GMT
- Title: Application of Quantum Pre-Processing Filter for Binary Image
Classification with Small Samples
- Authors: Farina Riaz and Shahab Abdulla and Hajime Suzuki and Srinjoy Ganguly
and Ravinesh C. Deo and Susan Hopkins
- Abstract summary: We investigated the application of our proposed quantum pre-processing filter (QPF) to binary image classification.
We evaluated the QPF on four datasets: MNIST (handwritten digits), EMNIST (handwritten digits and letters), CIFAR-10 (photographic images) and GTSRB (real-life traffic sign images).
- Score: 1.2965700352825555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past few years, there has been significant interest in Quantum
Machine Learning (QML) among researchers, as it has the potential to transform
the field of machine learning. Several models that exploit the properties of
quantum mechanics have been developed for practical applications. In this
study, we investigated the application of our previously proposed quantum
pre-processing filter (QPF) to binary image classification. We evaluated the
QPF on four datasets: MNIST (handwritten digits), EMNIST (handwritten digits
and letters), CIFAR-10 (photographic images) and GTSRB (real-life traffic
sign images). Consistent with our previous multi-class classification results,
applying the QPF improved the binary image classification accuracy of a neural
network on MNIST, EMNIST, and CIFAR-10 from 98.9% to 99.2%, 97.8% to 98.3%,
and 71.2% to 76.1%, respectively, but degraded it on GTSRB from 93.5% to
92.0%. We then applied the QPF with a smaller number of training and testing
samples, i.e. 80 and 20 samples per class, respectively. To obtain
statistically stable results, we ran 100 trials, each with randomly chosen
training and testing samples, and averaged the results. The results showed
that applying the QPF did not improve the image classification accuracy on
MNIST and EMNIST but improved it on CIFAR-10 and GTSRB from 65.8% to 67.2%
and 90.5% to 91.8%, respectively. Future work will investigate the potential
of the QPF and assess the scalability of the proposed approach to larger and
more complex datasets.
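The small-sample evaluation protocol above (80 training and 20 testing samples per class, 100 trials with random resampling, averaged accuracy) can be sketched as follows. This is a minimal sketch, not the authors' code: `apply_qpf` is a hypothetical placeholder for the quantum pre-processing filter, the random arrays stand in for flattened image data, and the scikit-learn MLP stands in for the paper's neural network classifier.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder data standing in for two classes of flattened 28x28 images
# (e.g. a binary subset of MNIST); not the datasets used in the paper.
N_PER_CLASS, N_FEATURES = 200, 28 * 28
data = {c: rng.normal(loc=c, size=(N_PER_CLASS, N_FEATURES)) for c in (0, 1)}

def apply_qpf(x):
    """Hypothetical stand-in for the quantum pre-processing filter (QPF);
    the actual filter is defined in the authors' earlier paper."""
    return x  # identity here, purely for illustration

def one_trial(n_train=80, n_test=20):
    """Randomly draw 80 training / 20 testing samples per class, train a
    small neural network on QPF-filtered data, and return test accuracy."""
    X_tr, y_tr, X_te, y_te = [], [], [], []
    for label, samples in data.items():
        idx = rng.permutation(N_PER_CLASS)[: n_train + n_test]
        X_tr.append(samples[idx[:n_train]]); y_tr += [label] * n_train
        X_te.append(samples[idx[n_train:]]); y_te += [label] * n_test
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(apply_qpf(np.vstack(X_tr)), y_tr)
    return accuracy_score(y_te, clf.predict(apply_qpf(np.vstack(X_te))))

# 100 trials with random resampling, averaged for statistical stability.
print("mean accuracy over 100 trials:", np.mean([one_trial() for _ in range(100)]))
```

Swapping the placeholder arrays for an actual binary subset of MNIST, EMNIST, CIFAR-10 or GTSRB, and implementing the real QPF, would reproduce the setup described in the abstract.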
Related papers
- Stacking-Enhanced Bagging Ensemble Learning for Breast Cancer Classification with CNN [0.24578723416255752]
This paper proposes a CNN classification network based on bagging and stacking ensemble learning methods for breast cancer classification.
The model is capable of fast and accurate classification of input images.
For binary classification (presence or absence of breast cancer), the accuracy reached 98.84%, and for five-class classification, the accuracy reached 98.34%.
arXiv Detail & Related papers (2024-07-15T09:44:43Z) - Development of a Novel Quantum Pre-processing Filter to Improve Image
Classification Accuracy of Neural Network Models [1.2965700352825555]
This paper proposes a novel quantum pre-processing filter (QPF) to improve the image classification accuracy of neural network (NN) models.
The results show that image classification accuracy on the MNIST (10 handwritten digit classes) and EMNIST (47 classes of handwritten digits and letters) datasets can be improved.
However, tests of the developed QPF approach on the more complex GTSRB dataset, which contains 43 distinct classes of real-life traffic sign images, showed a degradation in classification accuracy.
arXiv Detail & Related papers (2023-08-22T01:27:04Z) - Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
The second model introduces a hybrid quantum neural network with a Quanvolutional layer, which reduces image resolution via a convolution process (a generic sketch of this idea appears after this related-papers list).
arXiv Detail & Related papers (2023-04-18T18:23:20Z) - Uncertainty-inspired Open Set Learning for Retinal Anomaly
Identification [71.06194656633447]
We establish an uncertainty-inspired open-set (UIOS) model, which was trained with fundus images of 9 retinal conditions.
Our UIOS model with a thresholding strategy achieved F1 scores of 99.55%, 97.01% and 91.91% for the internal testing set.
UIOS correctly predicted high uncertainty scores, prompting the need for a manual check, on datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images.
arXiv Detail & Related papers (2023-04-08T10:47:41Z) - CoV-TI-Net: Transferred Initialization with Modified End Layer for
COVID-19 Diagnosis [5.546855806629448]
Transfer learning is a relatively new learning method that has been employed in many sectors to achieve good performance with less computation.
In this research, the PyTorch pre-trained models (VGG19_bn and WideResNet-101) are applied to the MNIST dataset.
The proposed model was developed and verified in a Kaggle notebook and reached an accuracy of 99.77% without requiring extensive computation time.
arXiv Detail & Related papers (2022-09-20T08:52:52Z) - Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z) - Core Risk Minimization using Salient ImageNet [53.616101711801484]
We introduce the Salient Imagenet dataset with more than 1 million soft masks localizing core and spurious features for all 1000 Imagenet classes.
Using this dataset, we first evaluate the reliance of several Imagenet pretrained models (42 total) on spurious features.
Next, we introduce a new learning paradigm called Core Risk Minimization (CoRM) whose objective ensures that the model predicts a class using its core features.
arXiv Detail & Related papers (2022-03-28T01:53:34Z) - The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk
Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
Performance was measured using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score.
arXiv Detail & Related papers (2021-09-18T02:28:01Z) - Hybrid quantum convolutional neural networks model for COVID-19
prediction using chest X-Ray images [13.094997642327371]
A model that accurately predicts COVID-19 from chest X-ray (CXR) images is needed to aid early diagnosis.
In this paper, a hybrid quantum-classical convolutional neural network (HQCNN) model uses random quantum circuits (RQCs) as a base to detect COVID-19 patients.
The proposed HQCNN model achieved higher performance with an accuracy of 98.4% and a sensitivity of 99.3% on the first dataset cases.
arXiv Detail & Related papers (2021-02-08T18:22:53Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z) - Assessing Graph-based Deep Learning Models for Predicting Flash Point [52.931492216239995]
Graph-based deep learning (GBDL) models were applied to flash point prediction for the first time.
The average R2 and Mean Absolute Error (MAE) scores of MPNN are, respectively, 2.3% lower and 2.0 K higher than those of previous comparable studies.
arXiv Detail & Related papers (2020-02-26T06:10:12Z)