CoV-TI-Net: Transferred Initialization with Modified End Layer for
COVID-19 Diagnosis
- URL: http://arxiv.org/abs/2209.09556v1
- Date: Tue, 20 Sep 2022 08:52:52 GMT
- Title: CoV-TI-Net: Transferred Initialization with Modified End Layer for
COVID-19 Diagnosis
- Authors: Sadia Khanam, Mohammad Reza Chalak Qazani, Subrota Kumar Mondal, H M
Dipu Kabir, Abadhan S. Sabyasachi, Houshyar Asadi, Keshav Kumar, Farzin
Tabarsinezhad, Shady Mohamed, Abbas Khosravi, Saeid Nahavandi
- Abstract summary: Transfer learning is a relatively new learning method that has been employed in many sectors to achieve good performance with fewer computations.
In this research, the PyTorch pre-trained models (VGG19_bn and WideResNet-101) are applied to the MNIST dataset.
The proposed model was developed and verified in a Kaggle notebook, and it reached an outstanding accuracy of 99.77% without requiring excessive computational time.
- Score: 5.546855806629448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes transferred initialization with modified fully
connected layers for COVID-19 diagnosis. Convolutional neural networks (CNNs)
have achieved remarkable results in image classification. However, training a
high-performing model is a complicated and time-consuming process because of
the complexity of image recognition applications. Transfer learning, on the
other hand, is a relatively new learning method that has been employed in
many sectors to achieve good performance with fewer computations. In this
research, the PyTorch pre-trained models (VGG19_bn and WideResNet-101) are
applied to the MNIST dataset for the first time as initialization, with
modified fully connected layers. The employed PyTorch pre-trained models were
previously trained on ImageNet. The proposed model was developed and verified
in a Kaggle notebook, and it reached an outstanding accuracy of 99.77%
without requiring excessive computational time during training. We also
applied the same methodology to the SIIM-FISABIO-RSNA COVID-19 Detection
dataset and achieved 80.01% accuracy. In contrast, previous methods require
substantial computational time during training to reach a high-performing
model. Code is available at: github.com/dipuk0506/SpinalNet
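
The recipe the abstract describes (take an ImageNet pre-trained backbone and
swap its final fully connected layers for the new task) can be sketched in a
few lines of PyTorch. The following is a minimal illustration, assuming the
torchvision >= 0.13 weights API and a 10-class head as for MNIST; it is not
the authors' exact implementation, which is in the repository above.

    # Transferred initialization with a modified end layer (sketch).
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_transfer_model(num_classes: int = 10) -> nn.Module:
        # ImageNet-pretrained weights provide the transferred initialization.
        model = models.vgg19_bn(weights=models.VGG19_BN_Weights.IMAGENET1K_V1)
        # Replace the last fully connected layer to match the new task.
        in_features = model.classifier[-1].in_features
        model.classifier[-1] = nn.Linear(in_features, num_classes)
        return model

    model = build_transfer_model(num_classes=10)
    # MNIST images are 1x28x28; ImageNet backbones expect 3x224x224, so
    # inputs must be resized and channel-replicated before the forward pass.
    x = torch.randn(1, 3, 224, 224)
    print(model(x).shape)  # torch.Size([1, 10])

The same pattern applies to the second backbone: for models.wide_resnet101_2
the head to replace is the model.fc attribute rather than classifier[-1].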
Related papers
- Enhancing pretraining efficiency for medical image segmentation via transferability metrics [0.0]
In medical image segmentation tasks, the scarcity of labeled training data poses a significant challenge.
We introduce a novel transferability metric, based on contrastive learning, that measures how robustly a pretrained model is able to represent the target data.
arXiv Detail & Related papers (2024-10-24T12:11:52Z)
- Effective pruning of web-scale datasets based on complexity of concept clusters [48.125618324485195]
We present a method for pruning large-scale multimodal datasets for training CLIP-style models on ImageNet.
We find that training on a smaller set of high-quality data can lead to higher performance with significantly lower training costs.
We achieve a new state-of-the-art ImageNet zero-shot accuracy and a competitive average zero-shot accuracy on 38 evaluation tasks.
arXiv Detail & Related papers (2024-01-09T14:32:24Z)
- Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z)
- EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy; a generic illustration of such a curriculum follows below.
arXiv Detail & Related papers (2022-11-17T17:38:55Z)
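
Since EfficientTrain's exact schedule is defined in the paper itself, the
following is only a generic sketch of the curriculum idea: spend early epochs
on cheap, down-sampled inputs and reserve full resolution for the end of
training. The resolutions and the linear schedule are assumptions for
illustration, not the paper's method.

    # Generic resolution curriculum (illustrative; not EfficientTrain itself).
    import torch
    import torch.nn.functional as F

    def scheduled_resolution(epoch: int, total_epochs: int,
                             min_res: int = 96, max_res: int = 224) -> int:
        # Linearly grow the input size, rounded to a multiple of 32.
        frac = epoch / max(total_epochs - 1, 1)
        return int(round((min_res + frac * (max_res - min_res)) / 32) * 32)

    def train_step(model, images, labels, epoch, total_epochs):
        res = scheduled_resolution(epoch, total_epochs)
        # Down-sample the batch to the scheduled resolution; early epochs
        # therefore cost far fewer FLOPs than full-resolution training.
        images = F.interpolate(images, size=(res, res), mode='bilinear',
                               align_corners=False)
        return F.cross_entropy(model(images), labels)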
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Unlearning Graph Classifiers with Limited Data Resources [39.29148804411811]
Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs).
Our main contribution is the first known nonlinear approximate graph unlearning method based on graph scattering transforms (GSTs).
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution is extensive simulation results, which show that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up.
arXiv Detail & Related papers (2022-11-06T20:46:50Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Targeted Gradient Descent: A Novel Method for Convolutional Neural Networks Fine-tuning and Online-learning [9.011106198253053]
A convolutional neural network (ConvNet) is usually trained and then tested using images drawn from the same distribution.
To generalize a ConvNet to various tasks often requires a complete training dataset that consists of images drawn from different tasks.
We present Targeted Gradient Descent (TGD), a novel fine-tuning method that can extend a pre-trained network to a new task without revisiting data from the previous task.
arXiv Detail & Related papers (2021-09-29T21:22:09Z)
- Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks; a simplified prune-and-grow sketch follows below.
arXiv Detail & Related papers (2021-06-18T01:03:13Z)
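
The scheduled GaP method itself partitions the network and rotates which
parts stay dense, so the snippet below shows only a simplified view of the
two primitives such methods build on: magnitude pruning and regrowing
previously pruned weights. All names and the random-regrowth choice are
illustrative assumptions, not the paper's algorithm.

    # Prune/grow primitives for a single weight tensor (sketch only).
    import torch

    def prune_by_magnitude(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
        # Zero the smallest-magnitude entries and return the binary mask.
        k = int(weight.numel() * sparsity)
        if k == 0:
            return torch.ones_like(weight)
        threshold = weight.abs().flatten().kthvalue(k).values
        mask = (weight.abs() > threshold).float()
        weight.data.mul_(mask)
        return mask

    def grow_random(mask: torch.Tensor, grow_frac: float) -> torch.Tensor:
        # Re-activate a random fraction of pruned positions so that training
        # can "grow" them back before the next pruning step.
        pruned = (mask == 0).nonzero(as_tuple=False)
        n_grow = int(len(pruned) * grow_frac)
        if n_grow > 0:
            chosen = pruned[torch.randperm(len(pruned))[:n_grow]]
            mask[tuple(chosen.t())] = 1.0
        return mask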
- Question Type Classification Methods Comparison [0.0]
The paper presents a comparative study of state-of-the-art approaches for the question classification task: Logistic Regression, Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and Quasi-Recurrent Neural Networks (QRNN).
All models use pre-trained GloVe word embeddings and are trained on human-labeled data.
The best accuracy is achieved using a CNN model with five parallel convolutional layers of various kernel sizes, followed by one fully connected layer; a minimal sketch of this architecture follows below.
arXiv Detail & Related papers (2020-01-03T00:16:46Z)
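
The winning model in that comparison matches the classic TextCNN pattern:
several convolutional branches with different kernel sizes run in parallel
over the embedded sequence, are max-pooled over time, and feed one fully
connected layer. In the sketch below, the kernel sizes, dimensions, and
six-class output are illustrative assumptions rather than the paper's exact
hyperparameters.

    # CNN with five parallel convolutional branches of different kernel
    # sizes over word embeddings, followed by one fully connected layer.
    import torch
    import torch.nn as nn

    class ParallelKernelTextCNN(nn.Module):
        def __init__(self, vocab_size=20000, embed_dim=300, num_classes=6,
                     kernel_sizes=(1, 2, 3, 4, 5), channels=100):
            super().__init__()
            # In practice the embedding is initialized from pre-trained GloVe.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.convs = nn.ModuleList(
                nn.Conv1d(embed_dim, channels, k, padding=k // 2)
                for k in kernel_sizes
            )
            self.fc = nn.Linear(channels * len(kernel_sizes), num_classes)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            x = self.embed(token_ids).transpose(1, 2)  # (B, embed_dim, seq)
            # Max-pool each branch over time, then concatenate all branches.
            pooled = [conv(x).relu().amax(dim=2) for conv in self.convs]
            return self.fc(torch.cat(pooled, dim=1))

    model = ParallelKernelTextCNN()
    logits = model(torch.randint(0, 20000, (4, 32)))  # 4 sequences, 32 tokens
    print(logits.shape)  # torch.Size([4, 6])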