Self-Learning AI Framework for Skin Lesion Image Segmentation and
Classification
- URL: http://arxiv.org/abs/2001.05838v1
- Date: Sat, 4 Jan 2020 09:31:11 GMT
- Title: Self-Learning AI Framework for Skin Lesion Image Segmentation and
Classification
- Authors: Anandhanarayanan Kamalakannan, Shiva Shankar Ganesan and Govindaraj
Rajamanickam
- Abstract summary: Performing medical image segmentation with deep learning models requires training on large annotated image datasets.
To overcome this issue, a self-learning annotation scheme is proposed within a two-stage deep learning algorithm.
The classification results of the proposed AI framework achieved a training accuracy of 93.8% and a testing accuracy of 82.42%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image segmentation and classification are the two fundamental steps in
pattern recognition. Performing medical image segmentation or classification
with deep learning models requires training on large annotated image datasets.
The dermoscopy images (ISIC archive) considered for this work do not have
ground-truth information for lesion segmentation, and manually labelling this
dataset is time-consuming. To overcome this issue, a self-learning annotation
scheme is proposed within a two-stage deep learning algorithm. The two-stage
algorithm consists of a U-Net segmentation model coupled with the annotation
scheme and a CNN classifier model. The annotation scheme uses a K-means
clustering algorithm along with merging conditions to obtain initial labelling
information for training the U-Net model. The classifier models, namely
ResNet-50 and LeNet-5, were trained and tested both on the image dataset
without segmentation (for comparison) and with the U-Net segmentation, the
latter implementing the proposed self-learning Artificial Intelligence (AI)
framework. The proposed AI framework achieved a training accuracy of 93.8% and
a testing accuracy of 82.42%, compared with the two classifier models trained
directly on the input images.
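The initial-labelling step of the annotation scheme can be sketched as follows. This is a simplified, pure-Python illustration of the K-means idea applied to grayscale pixel intensities only; the paper works on RGB dermoscopy images and additionally applies merging conditions, which are omitted here. The function names (`kmeans_1d`, `initial_lesion_mask`) are illustrative, not from the paper.

```python
def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar values into k groups; return (centroids, labels)."""
    lo, hi = min(values), max(values)
    # Initialize centroids evenly across the value range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: move each centroid to the mean of its members
        # (keep the old centroid if the cluster is empty).
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

def initial_lesion_mask(gray_image):
    """Binary mask: 1 where the pixel falls in the darker cluster,
    assuming lesions are darker than the surrounding skin."""
    h, w = len(gray_image), len(gray_image[0])
    flat = [px for row in gray_image for px in row]
    centroids, labels = kmeans_1d(flat, k=2)
    lesion_cluster = min(range(2), key=lambda c: centroids[c])
    return [[1 if labels[r * w + c] == lesion_cluster else 0
             for c in range(w)] for r in range(h)]

# Toy 4x4 "image": a dark lesion-like blob on a bright background.
img = [[200, 200, 210, 205],
       [200,  40,  50, 210],
       [205,  45,  55, 200],
       [210, 200, 205, 200]]
mask = initial_lesion_mask(img)
```

In the framework, masks of this kind would serve only as the initial pseudo-labels for training the U-Net; the trained U-Net then provides the segmentations fed to the classifiers.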
Related papers
- Self-Supervised Learning in Deep Networks: A Pathway to Robust Few-Shot Classification [0.0]
We first pre-train the model with self-supervision so that it learns common feature representations from a large amount of unlabeled data.
We then fine-tune it on the few-shot dataset Mini-ImageNet to improve the model's accuracy and generalization ability under limited data.
arXiv Detail & Related papers (2024-11-19T01:01:56Z)
- Annotation Cost-Efficient Active Learning for Deep Metric Learning Driven Remote Sensing Image Retrieval [3.2109665109975696]
ANNEAL aims to create a small but informative training set made up of similar and dissimilar image pairs.
The informativeness of image pairs is evaluated by combining uncertainty and diversity criteria.
This way of annotating images significantly reduces the annotation cost compared to annotating images with land-use land-cover class labels.
arXiv Detail & Related papers (2024-06-14T15:08:04Z)
- Image-free Classifier Injection for Zero-Shot Classification [72.66409483088995]
Zero-shot learning models achieve remarkable results on image classification for samples from classes that were not seen during training.
We aim to equip pre-trained models with zero-shot classification capabilities without the use of image data.
We achieve this with our proposed Image-free Classifier Injection with Semantics (ICIS).
arXiv Detail & Related papers (2023-08-21T09:56:48Z)
- An Explainable Model-Agnostic Algorithm for CNN-based Biometrics Verification [55.28171619580959]
This paper describes an adaptation of the Local Interpretable Model-Agnostic Explanations (LIME) AI method to operate under a biometric verification setting.
arXiv Detail & Related papers (2023-07-25T11:51:14Z)
- Prompt Tuning for Parameter-efficient Medical Image Segmentation [79.09285179181225]
We propose and investigate several contributions to achieve a parameter-efficient but effective adaptation for semantic segmentation on two medical imaging datasets.
We pre-train this architecture with a dedicated dense self-supervision scheme based on assignments to online generated prototypes.
We demonstrate that the resulting neural network model is able to attenuate the gap between fully fine-tuned and parameter-efficiently adapted models.
arXiv Detail & Related papers (2022-11-16T21:55:05Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z)
- Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.