Hybrid Model using Feature Extraction and Non-linear SVM for Brain Tumor
Classification
- URL: http://arxiv.org/abs/2212.02794v1
- Date: Tue, 6 Dec 2022 07:15:37 GMT
- Title: Hybrid Model using Feature Extraction and Non-linear SVM for Brain Tumor
Classification
- Authors: Lalita Mishra, Shekhar Verma, Shirshu Varma
- Abstract summary: We propose a hybrid model, using VGG along with a non-linear SVM (soft and hard), to classify brain tumors.
The VGG models are trained via the PyTorch Python library to obtain the highest testing accuracy of tumor classification.
Results indicate that the hybrid VGG-SVM model, especially VGG19 with SVM, outperforms existing techniques and achieves high accuracy.
- Score: 3.222802562733787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is essential to classify brain tumors from magnetic resonance imaging
(MRI) accurately for better and timely treatment of patients. In this paper, we
propose a hybrid model that combines VGG with a non-linear SVM (soft and hard)
to classify brain tumors: glioma versus pituitary, and tumorous versus
non-tumorous. The VGG-SVM model is trained on two different two-class datasets;
thus, we perform binary classification. The VGG models are trained via the
PyTorch Python library to obtain the highest testing accuracy of tumor
classification. The method is threefold: in the first step, we normalize and
resize the images; the second step extracts features through variants of the
VGG model; and the third step classifies the brain tumors using a non-linear
SVM (soft and hard). Using VGG19, we obtained 98.18% accuracy on the first
dataset (D1) and 99.78% on the second (D2). The non-linear SVM classification
accuracies are 95.50% and 97.98% with linear and RBF kernels, and 97.95% for
soft SVM with RBF kernel, on D1; and 96.75% and 98.60% with linear and RBF
kernels, and 98.38% for soft SVM with RBF kernel, on D2. Results indicate that
the hybrid VGG-SVM model, especially VGG19 with SVM, outperforms existing
techniques and achieves high accuracy.
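The third step of the pipeline can be illustrated with a minimal sketch of the RBF-kernel SVM decision function. This is not the authors' code: the support vectors, multipliers, bias, and gamma below are illustrative stand-ins, and in the paper the inputs would be VGG feature vectors rather than 2-D points.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    # RBF kernel: k(x, z) = exp(-gamma * ||x - z||^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def svm_decision(x, support_vectors, alphas, labels, bias, gamma=0.5):
    # Kernel SVM decision value: f(x) = sum_i alpha_i * y_i * k(x_i, x) + b
    # The predicted class is the sign of f(x).
    score = sum(a * y * rbf_kernel(sv, x, gamma)
                for sv, a, y in zip(support_vectors, alphas, labels))
    return score + bias

# Toy example: one support vector per class (values are illustrative only).
svs = [[0.0, 0.0], [2.0, 2.0]]
alphas = [1.0, 1.0]
labels = [-1, +1]   # e.g. -1 = non-tumorous, +1 = tumorous
f = svm_decision([1.9, 2.1], svs, alphas, labels, bias=0.0)
print("tumorous" if f > 0 else "non-tumorous")
```

A soft-margin ("soft SVM") variant changes only how the multipliers alpha_i are found during training (they are bounded by the penalty parameter C); the decision function at inference time is the same.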
Related papers
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask
CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Hybrid Window Attention Based Transformer Architecture for Brain Tumor
Segmentation [28.650980942429726]
We propose a volumetric vision transformer that follows two windowing strategies in attention for extracting fine features.
We trained and evaluated network architecture on the FeTS Challenge 2022 dataset.
Our performance on the online validation dataset is as follows: Dice Similarity Score of 81.71%, 91.38% and 85.40%.
arXiv Detail & Related papers (2022-09-16T03:55:48Z) - COVID-19 Severity Classification on Chest X-ray Images [0.0]
In this work, we classify COVID-19 chest X-ray images based on the severity of the infection.
The ResNet-50 model produced remarkable classification results in terms of accuracy (95%), recall (0.94), F1-score (0.92), and precision (0.91).
arXiv Detail & Related papers (2022-05-25T12:01:03Z) - Federated Learning Enables Big Data for Rare Cancer Boundary Detection [98.5549882883963]
We present findings from the largest federated ML study to date, involving data from 71 healthcare institutions across 6 continents.
We generate an automatic tumor boundary detector for the rare disease of glioblastoma.
We demonstrate a 33% improvement over a publicly trained model to delineate the surgically targetable tumor, and 23% improvement over the tumor's entire extent.
arXiv Detail & Related papers (2022-04-22T17:27:00Z) - StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact
Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out-of-distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642±0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859±0.112 while detecting artificially induced anomalies.
arXiv Detail & Related papers (2022-01-31T14:27:35Z) - A Machine Learning model of the combination of normalized SD1 and SD2
indexes from 24h-Heart Rate Variability as a predictor of myocardial
infarction [0.0]
We used the most common ML algorithms for accuracy comparison with 10-fold cross-validation.
The main findings of this study show that the combination of SD1nu + SD2nu has greater predictive power for MI in comparison to other HRV indexes.
arXiv Detail & Related papers (2021-02-18T14:57:49Z) - Twin Augmented Architectures for Robust Classification of COVID-19 Chest
X-Ray Images [6.127080932156285]
The gold standard test for COVID-19 is RT-PCR, but testing facilities are limited and not always optimally distributed.
We show that popular choices of dataset selection suffer from data homogeneity, leading to misleading results.
We introduce a state-of-the-art technique, termed as Twin Augmentation, for modifying popular pre-trained deep learning models.
arXiv Detail & Related papers (2021-02-16T06:50:17Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on
Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GAN-based model can generate a tumor image from a normal image and, in turn, a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and classification models using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Classification of COVID-19 in Chest CT Images using Convolutional
Support Vector Machines [15.50817570408951]
This study presents a deep learning model that detects COVID-19 cases with high performance.
The proposed method is defined as Convolutional Support Vector Machine (CSVM) and can automatically classify Computed Tomography (CT) images.
When the performance of pre-trained CNN networks and CSVM models is assessed, CSVM (7x7, 3x3, 1x1) model shows the highest performance with 94.03% ACC, 96.09% SEN, 92.01% SPE, 92.19% PRE, 94.10% F1-Score, 88.15% MCC and 88.07% Kappa metric values.
arXiv Detail & Related papers (2020-11-11T13:04:38Z)
- Modelling the Distribution of 3D Brain MRI using a 2D Slice VAE [66.63629641650572]
We propose a method to model 3D MR brain volumes distribution by combining a 2D slice VAE with a Gaussian model that captures the relationships between slices.
We also introduce a novel evaluation method for generated volumes that quantifies how well their segmentations match those of true brain anatomy.
arXiv Detail & Related papers (2020-07-09T13:23:15Z)
- Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale
Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.