Explainable Contrastive and Cost-Sensitive Learning for Cervical Cancer
Classification
- URL: http://arxiv.org/abs/2402.15905v1
- Date: Sat, 24 Feb 2024 21:03:30 GMT
- Authors: Ashfiqun Mustari, Rushmia Ahmed, Afsara Tasnim, Jakia Sultana Juthi
and G M Shahariar
- Abstract summary: We first fine-tune five pre-trained CNNs and minimize the overall cost of misclassification.
Supervised contrastive learning is then incorporated to make the models more adept at capturing important features and patterns.
The experimental results demonstrate the effectiveness of the developed system, achieving an accuracy of 97.29%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes an efficient system for classifying cervical cancer cells
using pre-trained convolutional neural networks (CNNs). We first fine-tune five
pre-trained CNNs and minimize the overall cost of misclassification by
prioritizing accuracy for certain classes that have higher associated costs or
importance. To further enhance the performance of the models, supervised
contrastive learning is included to make the models more adept at capturing
important features and patterns. Extensive experiments are conducted to
evaluate the proposed system on the SIPaKMeD dataset. The experimental results
demonstrate the effectiveness of the developed system, achieving an accuracy of
97.29%. To make our system more trustworthy, we have employed several
explainable AI techniques to interpret how the models reached a specific
decision. The implementation of the system can be found at
https://github.com/isha-67/CervicalCancerStudy.
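The two objectives described in the abstract — cost-sensitive classification and supervised contrastive learning — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the cost weights, temperature, and function names are assumptions.

```python
import numpy as np

def cost_sensitive_ce(probs, labels, class_costs):
    # Weighted cross-entropy: misclassifying samples of costly classes
    # contributes more to the loss, steering accuracy toward those classes.
    eps = 1e-12
    n = len(labels)
    per_sample = -np.log(probs[np.arange(n), labels] + eps)
    return float(np.mean(class_costs[labels] * per_sample))

def supcon_loss(features, labels, temperature=0.1):
    # Supervised contrastive loss (Khosla et al. formulation): pull together
    # embeddings sharing a label, push apart embeddings of other labels.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    not_self = ~np.eye(n, dtype=bool)
    log_prob = logits - np.log((np.exp(logits) * not_self).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & not_self
    # average log-probability over each anchor's positives
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / np.maximum(pos_mask.sum(1), 1)
    return float(-mean_log_prob_pos.mean())
```

In practice the two terms would be combined (e.g. summed with a weighting factor) on top of the fine-tuned CNN backbones.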
Related papers
- Comparative Analysis and Ensemble Enhancement of Leading CNN Architectures for Breast Cancer Classification [0.0]
This study introduces a novel and accurate approach to breast cancer classification using histopathology images.
It systematically compares leading Convolutional Neural Network (CNN) models across varying image datasets.
Our findings establish the settings required to achieve exceptional classification accuracy for standalone CNN models.
arXiv Detail & Related papers (2024-10-04T11:31:43Z) - Enhancing Eye Disease Diagnosis with Deep Learning and Synthetic Data Augmentation [0.0]
In this paper, an ensemble learning technique is proposed for early detection and management of diabetic retinopathy.
The proposed model is tested on the APTOS dataset and achieves superior validation accuracy (99%) compared to previous models.
arXiv Detail & Related papers (2024-07-25T04:09:17Z) - Which Augmentation Should I Use? An Empirical Investigation of Augmentations for Self-Supervised Phonocardiogram Representation Learning [5.438725298163702]
Contrastive Self-Supervised Learning (SSL) offers a potential solution to labeled data scarcity.
We propose uncovering the optimal augmentations for applying contrastive learning in 1D phonocardiogram (PCG) classification.
We demonstrate that depending on its training distribution, the effectiveness of a fully-supervised model can degrade up to 32%, while SSL models only lose up to 10% or even improve in some cases.
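For intuition, contrastive SSL pipelines on 1D signals typically build positive pairs from waveform augmentations such as the ones below; which specific augmentations the paper ranks as optimal is not stated in this summary, so these are generic examples.

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(signal, sigma=0.01):
    # Additive Gaussian noise.
    return signal + rng.normal(0.0, sigma, size=signal.shape)

def time_shift(signal, max_frac=0.1):
    # Circular shift by up to max_frac of the signal length.
    limit = int(len(signal) * max_frac)
    return np.roll(signal, rng.integers(-limit, limit + 1))

def random_crop(signal, crop_len):
    # A random contiguous window, used as a "view" of the same recording.
    start = rng.integers(0, len(signal) - crop_len + 1)
    return signal[start:start + crop_len]
```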
arXiv Detail & Related papers (2023-12-01T11:06:00Z) - SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for
Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limitation of computing resources on portable devices.
This study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin disease classification.
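A generic knowledge-distillation objective of the kind such frameworks build on is the temperature-softened KL divergence between teacher and student outputs; this is a textbook sketch, not SSD-KD's exact loss, which additionally incorporates self-supervised and diverse-knowledge terms.

```python
import numpy as np

def softmax(x, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = x / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher and student distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
```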
arXiv Detail & Related papers (2022-03-22T06:54:29Z) - Efficient training of lightweight neural networks using Online
Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
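The k-NN density estimate referenced above can be sketched as follows; this is the standard textbook estimator, and how exactly OSAKD applies it in the feature space is not detailed in this summary.

```python
import numpy as np
from math import gamma, pi

def knn_density(query, samples, k=5):
    # p(x) ~= k / (n * V), where V is the volume of the d-ball whose radius
    # is the distance from the query to its k-th nearest sample.
    n, d = samples.shape
    dists = np.linalg.norm(samples - query, axis=1)
    r_k = np.sort(dists)[k - 1]
    volume = (pi ** (d / 2) / gamma(d / 2 + 1)) * r_k ** d
    return k / (n * volume)
```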
arXiv Detail & Related papers (2021-08-26T14:01:04Z) - On the Robustness of Pretraining and Self-Supervision for a Deep
Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models pretrained on ImageNet show a significant increase in performance, generalization and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Efficacy of Bayesian Neural Networks in Active Learning [11.609770399591516]
We show that Bayesian neural networks are more efficient than ensemble based techniques in capturing uncertainty.
Our findings also reveal some key drawbacks of the ensemble techniques, which were recently shown to be more effective than Monte Carlo dropouts.
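Uncertainty from an ensemble of the kind compared here is commonly summarized by the predictive entropy of the averaged softmax outputs; the paper's exact metric is not given in this summary, so the following is a generic sketch.

```python
import numpy as np

def predictive_entropy(member_probs):
    # member_probs: (M, C) softmax outputs from M ensemble members.
    # Entropy of the mean prediction reflects both data and model uncertainty:
    # members that disagree produce a flatter mean and higher entropy.
    mean_p = member_probs.mean(axis=0)
    return float(-(mean_p * np.log(mean_p + 1e-12)).sum())
```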
arXiv Detail & Related papers (2021-04-02T06:02:11Z) - Deep Epidemiological Modeling by Black-box Knowledge Distillation: An
Accurate Deep Learning Model for COVID-19 [16.442483223157975]
We propose a novel deep learning approach using black-box knowledge distillation for both accurate and efficient transmission dynamics prediction.
We use simulated observation sequences to query the simulation system to retrieve simulated projection sequences as knowledge.
Finally, we train a student deep neural network with the retrieved and mixed observation-projection sequences for practical use.
arXiv Detail & Related papers (2021-01-20T19:49:00Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via
Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
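A "slow start, fast decay" schedule can be sketched as a linear warmup followed by an exponential drop; the parameterization below (warmup fraction, peak and floor rates) is an assumption for illustration, not the paper's published schedule.

```python
def slow_start_fast_decay_lr(step, total_steps, warmup_frac=0.1,
                             peak_lr=0.01, floor_lr=1e-4):
    # Slow start: linear warmup to peak_lr over the first warmup_frac of steps.
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    # Fast decay: exponential drop from peak_lr to floor_lr over the remainder.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * (floor_lr / peak_lr) ** progress
```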
arXiv Detail & Related papers (2020-12-25T20:50:15Z) - Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP with its limitations being presented.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.