Multi-modal Wound Classification using Wound Image and Location by Deep
Neural Network
- URL: http://arxiv.org/abs/2109.06969v1
- Date: Tue, 14 Sep 2021 21:00:30 GMT
- Authors: D. M. Anisuzzaman, Yash Patel, Behrouz Rostami, Jeffrey Niezgoda,
Sandeep Gopalakrishnan, and Zeyun Yu
- Abstract summary: This study developed a deep neural network-based multi-modal classifier using wound images and their corresponding locations.
A body map is also developed to prepare the location data, which can help wound specialists tag wound locations more efficiently.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wound classification is an essential step of wound diagnosis. An efficient
classifier can assist wound specialists in classifying wound types at lower
financial and time cost and help them decide on an optimal treatment procedure.
This study developed a deep neural network-based multi-modal classifier using
wound images and their corresponding locations to categorize wound images into
multiple classes, including diabetic, pressure, surgical, and venous ulcers. A
body map is also developed to prepare the location data, which can help wound
specialists tag wound locations more efficiently. Three datasets containing
images and their corresponding location information are designed with the help
of wound specialists. The multi-modal network is developed by concatenating the
outputs of the image-based and location-based classifiers, with some further
modifications. The maximum accuracy on mixed-class classifications (containing
background and normal skin) varies from 77.33% to 100% on different
experiments. The maximum accuracy on wound-class classifications (containing
only diabetic, pressure, surgical, and venous) varies from 72.95% to 98.08% on
different experiments. The proposed multi-modal network also shows a
significant improvement over the results of previous works in the literature.
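The fusion step described in the abstract, concatenating the outputs of the two unimodal classifiers and passing them through a further layer, can be sketched as follows. This is a minimal illustration only: the variable names, the four-class setup, and the random placeholder values and weights are assumptions, not the paper's actual architecture or trained parameters.

```python
import math
import random

random.seed(0)

def softmax(z):
    """Convert a list of logits into a probability distribution."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical outputs of the two unimodal classifiers over the four wound
# classes (diabetic, pressure, surgical, venous). In the paper these would
# come from an image-based network and a body-map location classifier;
# random values stand in here.
image_out = [random.random() for _ in range(4)]
location_out = [random.random() for _ in range(4)]

# Fusion by concatenation, then one small dense layer with placeholder weights.
fused = image_out + location_out  # concatenated feature vector, length 8
W = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(8)]
logits = [sum(fused[i] * W[i][j] for i in range(8)) for j in range(4)]
probs = softmax(logits)

print(len(probs))          # 4 class probabilities
print(round(sum(probs), 6))
```

The key design choice sketched here is late fusion: each modality is classified separately, and only their outputs are concatenated before the final decision layer.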
Related papers
- Disease Classification and Impact of Pretrained Deep Convolution Neural Networks on Diverse Medical Imaging Datasets across Imaging Modalities [0.0]
This paper investigates the intricacies of using pretrained deep convolutional neural networks with transfer learning across diverse medical imaging datasets.
It shows that the use of pretrained models as fixed feature extractors yields poor performance irrespective of the datasets.
It is also found that deeper and more complex architectures did not necessarily result in the best performance.
arXiv Detail & Related papers (2024-08-30T04:51:19Z) - Multi-task Explainable Skin Lesion Classification [54.76511683427566]
We propose a few-shot-based approach for skin lesions that generalizes well with few labelled data.
The proposed approach comprises a fusion of a segmentation network that acts as an attention module and classification network.
arXiv Detail & Related papers (2023-10-11T05:49:47Z) - Integrated Image and Location Analysis for Wound Classification: A Deep
Learning Approach [3.5427949413406563]
The global burden of acute and chronic wounds presents a compelling case for enhancing wound classification methods.
We introduce an innovative multi-modal network based on a deep convolutional neural network for categorizing wounds into four categories: diabetic, pressure, surgical, and venous ulcers.
A unique aspect of our methodology is incorporating a body map system that facilitates accurate wound location tagging.
arXiv Detail & Related papers (2023-08-23T02:49:22Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical
Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$2$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - Data-Efficient Vision Transformers for Multi-Label Disease
Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z) - Robust Medical Image Classification from Noisy Labeled Data with Global
and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z) - Medical Knowledge-Guided Deep Learning for Imbalanced Medical Image
Classification [3.9745217005532183]
We propose a medical-knowledge-guided one-class classification approach to boost the model's performance.
We design a deep learning-based one-class classification pipeline for imbalanced image classification.
We show superior model performance when compared to six state-of-the-art methods.
arXiv Detail & Related papers (2021-11-20T16:14:19Z) - Multiclass Burn Wound Image Classification Using Deep Convolutional
Neural Networks [0.0]
Continuous wound monitoring is important for wound specialists to allow more accurate diagnosis and optimization of management protocols.
In this study, we use a deep learning-based method to classify burn wound images into two or three different categories based on the wound conditions.
arXiv Detail & Related papers (2021-03-01T23:54:18Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Multiclass Wound Image Classification using an Ensemble Deep CNN-based
Classifier [2.07811670193148]
We have developed an ensemble Deep Convolutional Neural Network-based classifier to classify wound images into multi-classes.
We obtained maximum and average classification accuracy values of 96.4% and 94.28% for binary and 91.9% and 87.7% for 3-class classification problems.
arXiv Detail & Related papers (2020-10-19T15:20:12Z) - Domain Generalization for Medical Imaging Classification with
Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.