LesionAid: Vision Transformers-based Skin Lesion Generation and
Classification
- URL: http://arxiv.org/abs/2302.01104v1
- Date: Thu, 2 Feb 2023 13:52:54 GMT
- Title: LesionAid: Vision Transformers-based Skin Lesion Generation and
Classification
- Authors: Ghanta Sai Krishna, Kundrapu Supriya, Mallikharjuna Rao K, Meetiksha
Sorgile
- Abstract summary: This research proposes a novel multi-class prediction framework that classifies skin lesions based on ViT and ViTGAN.
The framework consists of four main phases: ViTGAN-based image generation, image processing, ViT-based classification, and explainable AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Skin cancer is one of the most prevalent forms of human cancer. It is
recognized mainly visually, beginning with clinical screening and continuing
with the dermoscopic examination, histological assessment, and specimen
collection. Deep convolutional neural networks (CNNs) have shown potential for
general, highly variable tasks across many fine-grained object categories. This
research proposes a novel multi-class prediction framework that classifies skin
lesions based on ViT and ViTGAN. Vision transformers-based GANs (Generative
Adversarial Networks) are utilized to tackle the class imbalance. The framework
consists of four main phases: ViTGAN-based image generation, image processing,
ViT-based classification, and explainable AI.
Phase 1 consists of generating synthetic images to balance all the classes in
the dataset. Phase 2 consists of applying different data augmentation
techniques and morphological operations to increase the size of the data.
Phases 3 & 4 involve developing a ViT model for edge computing systems that can
identify patterns and categorize skin lesions from the user's skin visible in
the image. After the lesions have been classified with the ViT, explainable AI
(XAI) techniques such as activation maps are used to make the results more
interpretable while ensuring high predictive accuracy. Real-time images of skin
diseases can be captured by a doctor or a patient using the camera of a mobile
application to perform an early examination and
determine the cause of the skin lesion. The whole framework is compared with
the existing frameworks for skin lesion detection.
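As a rough illustration of phases 2-4, the sketch below assumes a recent PyTorch/torchvision setup with an ImageNet-pretrained ViT-B/16. The class count, transforms, and the occlusion-based saliency map are illustrative assumptions standing in for the paper's exact augmentation set, edge-deployed ViT, and activation-map XAI; phase 1 (ViTGAN synthesis) is omitted.

```python
# Minimal sketch of phases 2-4 of a LesionAid-style pipeline. The transforms,
# model choice, class count, and the occlusion-based saliency map are
# illustrative assumptions; phase 1 (ViTGAN image synthesis) is omitted.
import torch
import torch.nn.functional as F
from torchvision import transforms, models

NUM_CLASSES = 7  # e.g. the HAM10000 lesion classes (assumption)

# Phase 2: data augmentation (an illustrative subset; the paper also applies
# morphological operations, which are not reproduced here).
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

# Phase 3: ViT classifier -- an ImageNet-pretrained ViT-B/16 with its
# classification head replaced for the lesion classes.
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
vit.heads = torch.nn.Linear(vit.hidden_dim, NUM_CLASSES)

def classify(img: torch.Tensor) -> torch.Tensor:
    """Return class probabilities for a single 3x224x224 image tensor."""
    vit.eval()
    with torch.no_grad():
        logits = vit(img.unsqueeze(0))
    return F.softmax(logits, dim=1).squeeze(0)

# Phase 4 (XAI): occlusion sensitivity, a simple model-agnostic stand-in for
# the paper's activation maps -- grey out patches and record the drop in the
# predicted probability of the target class.
def occlusion_map(img: torch.Tensor, target_class: int, patch: int = 32):
    base = classify(img)[target_class].item()
    _, H, W = img.shape
    heat = torch.zeros(H // patch, W // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = img.clone()
            occluded[:, i:i + patch, j:j + patch] = 0.5
            heat[i // patch, j // patch] = base - classify(occluded)[target_class].item()
    return heat  # higher values = regions the prediction relies on
```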
Related papers
- FairSkin: Fair Diffusion for Skin Disease Image Generation [54.29840149709033]
Diffusion Model (DM) has become a leading method in generating synthetic medical images, but it suffers from a critical twofold bias.
We propose FairSkin, a novel DM framework that mitigates these biases through a three-level resampling mechanism.
Our approach significantly improves the diversity and quality of generated images, contributing to more equitable skin disease detection in clinical settings.
arXiv Detail & Related papers (2024-10-29T21:37:03Z)
- S-SYNTH: Knowledge-Based, Synthetic Generation of Skin Images [2.79604239303318]
We propose S-SYNTH, the first knowledge-based, adaptable open-source skin simulation framework.
We generate synthetic skin, 3D models and digitally rendered images using an anatomically inspired multi-layer, multi-representation skin and growing lesion model.
We show that results obtained using synthetic data follow similar comparative trends as real dermatologic images.
arXiv Detail & Related papers (2024-07-31T23:16:29Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- HierAttn: Effectively Learn Representations from Stage Attention and Branch Attention for Skin Lesions Diagnosis [18.026088450803258]
An accurate and unbiased examination of skin lesions is critical for the early diagnosis and treatment of skin cancers.
Recent studies have developed ensembled convolutional neural networks (CNNs) to classify the images for early diagnosis.
We introduce HierAttn, a lightweight and effective neural network with hierarchical attention and self-attention.
arXiv Detail & Related papers (2022-05-09T14:30:34Z)
- Classification of Skin Cancer Images using Convolutional Neural Networks [0.0]
Skin cancer is the most common human malignancy.
Deep neural networks show considerable potential for image classification.
The highest model accuracy achieved was over 86.65%.
arXiv Detail & Related papers (2022-02-01T17:11:41Z)
- Dermoscopic Image Classification with Neural Style Transfer [5.314466196448187]
We propose an adaptation of the Neural Style Transfer (NST) as a novel image pre-processing step for skin lesion classification problems.
We represent each dermoscopic image as the style image and transfer the style of the lesion onto a homogeneous content image.
This transfers the main variability of each lesion onto the same localized region, which allows us to integrate the generated images together and extract latent, low-rank style features.
arXiv Detail & Related papers (2021-05-17T03:50:51Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The system first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates the likelihood of malignancy of each lesion, and through aggregation also generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Analysis of skin lesion images with deep learning [0.0]
We evaluate the current state of the art in the classification of dermoscopic images.
Various deep neural network architectures pre-trained on the ImageNet data set are adapted to a combined training data set.
The performance and applicability of these models for the detection of eight classes of skin lesions are examined.
arXiv Detail & Related papers (2021-01-11T10:58:36Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNN).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
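The "Retinopathy of Prematurity Stage Diagnosis" entry above feeds the predicted segmentation mask to the classifier as an additional "color" channel. Below is a minimal sketch of that mask-as-channel idea; the network, shapes, and class count are illustrative assumptions, not the original paper's architecture.

```python
# Minimal sketch of the "mask as an extra colour channel" idea from the ROP
# entry above. The segmentation model, CNN layers, and shapes are illustrative
# assumptions, not the original architecture.
import torch
import torch.nn as nn

class MaskChannelClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # First conv accepts 4 channels: RGB + predicted demarcation-line mask.
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, rgb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W); mask: (B, 1, H, W) from any segmentation model.
        x = torch.cat([rgb, mask], dim=1)  # (B, 4, H, W)
        return self.classifier(self.features(x).flatten(1))

# Usage with dummy tensors:
model = MaskChannelClassifier()
rgb = torch.rand(2, 3, 224, 224)
mask = torch.rand(2, 1, 224, 224)  # stand-in for a predicted pixel-level mask
logits = model(rgb, mask)          # (2, 3) stage logits
```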
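For the "Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models" entry above, the sketch below illustrates the concept-bottleneck idea: images are scored against textual clinical concepts with a vision-language model, and a linear layer over the concept scores produces the prediction. The concept list, the image path, and the use of OpenAI's CLIP are assumptions for illustration, not that paper's exact pipeline (which obtains its concepts by querying GPT-4).

```python
# Minimal concept-bottleneck sketch: score an image against clinical concept
# prompts with a vision-language model, then classify from the concept scores.
import torch
import torch.nn as nn
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vlm, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical concepts; the paper obtains such concepts by querying GPT-4.
CONCEPTS = ["irregular border", "asymmetric shape", "blue-white veil",
            "multiple colors", "regular pigment network"]
concept_tokens = clip.tokenize(CONCEPTS).to(device)

class ConceptBottleneck(nn.Module):
    def __init__(self, num_concepts: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(num_concepts, num_classes)  # interpretable weights

    def forward(self, image_features: torch.Tensor, text_features: torch.Tensor):
        img = image_features / image_features.norm(dim=-1, keepdim=True)
        txt = text_features / text_features.norm(dim=-1, keepdim=True)
        concept_scores = img @ txt.T        # cosine similarity per concept
        return self.head(concept_scores), concept_scores

model = ConceptBottleneck(len(CONCEPTS), num_classes=2).to(device)
image = preprocess(Image.open("lesion.jpg")).unsqueeze(0).to(device)  # placeholder path
with torch.no_grad():
    img_feat = vlm.encode_image(image).float()
    txt_feat = vlm.encode_text(concept_tokens).float()
logits, scores = model(img_feat, txt_feat)  # scores explain the prediction
```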