A deep learning model for burn depth classification using ultrasound
imaging
- URL: http://arxiv.org/abs/2203.15879v1
- Date: Tue, 29 Mar 2022 20:01:22 GMT
- Title: A deep learning model for burn depth classification using ultrasound
imaging
- Authors: Sangrock Lee, Rahul, James Lukan, Tatiana Boyko, Kateryna Zelenova,
Basiel Makled, Conner Parsey, Jack Norfleet, and Suvranu De
- Abstract summary: This paper presents a deep convolutional neural network to classify burn depth based on altered tissue morphology of burned skin.
The network learns a low-dimensional manifold of the unburned skin images using an encoder-decoder architecture.
The performance metrics obtained from 20-fold cross-validation show that the model can identify deep-partial thickness burns.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identification of burn depth with sufficient accuracy is a challenging
problem. This paper presents a deep convolutional neural network to classify
burn depth based on altered tissue morphology of burned skin manifested as
texture patterns in the ultrasound images. The network first learns a
low-dimensional manifold of the unburned skin images using an encoder-decoder
architecture that reconstructs it from ultrasound images of burned skin. The
encoder is then re-trained to classify burn depths. The encoder-decoder network
is trained using a dataset comprised of B-mode ultrasound images of unburned
and burned ex vivo porcine skin samples. The classifier is developed using
B-mode images of burned in situ skin samples obtained from freshly euthanized
postmortem pigs. The performance metrics obtained from 20-fold cross-validation
show that the model can identify deep partial-thickness burns, which are the
most difficult to diagnose clinically, with 99% accuracy, 98% sensitivity, and
100% specificity. The diagnostic accuracy of the classifier is further
illustrated by the high area under the curve values of 0.99 and 0.95,
respectively, for the receiver operating characteristic and precision-recall
curves. A post hoc explanation indicates that the classifier activates the
discriminative textural features in the B-mode images for burn classification.
The proposed model has the potential for clinical utility in assisting the
clinical assessment of burn depths using a widely available clinical imaging
device.
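The two-stage training scheme described in the abstract can be sketched with a deliberately tiny toy model (pure Python, hypothetical data; not the authors' network or dataset): first an encoder-decoder is fit to reconstruct inputs, then the learned encoder is reused under a classification head.

```python
import math
import random

random.seed(0)

# Toy 1-D "images": class 0 clusters near 1.0, class 1 near 3.0, standing in
# for texture statistics of unburned vs. burned tissue (hypothetical data).
data = ([(random.gauss(1.0, 0.1), 0) for _ in range(50)] +
        [(random.gauss(3.0, 0.1), 1) for _ in range(50)])

# Stage 1: linear autoencoder x -> h = w_e*x -> x_hat = w_d*h, trained by
# SGD on the squared reconstruction error (x_hat - x)^2.
w_e, w_d = 0.5, 0.5
lr = 0.01
for _ in range(200):
    for x, _y in data:
        h = w_e * x
        err = w_d * h - x
        w_d -= lr * 2 * err * h          # d(err^2)/d(w_d)
        w_e -= lr * 2 * err * w_d * x    # d(err^2)/d(w_e)

# Stage 2: keep the learned encoder w_e and train a logistic classification
# head on the latent code h (only the head is trained in this sketch).
a, b = 0.0, 0.0
for _ in range(500):
    for x, y in data:
        h = w_e * x
        p = 1.0 / (1.0 + math.exp(-(a * h + b)))
        g = p - y                        # log-loss gradient w.r.t. the logit
        a -= lr * g * h
        b -= lr * g

def classify(x):
    h = w_e * x
    return 1 if 1.0 / (1.0 + math.exp(-(a * h + b))) > 0.5 else 0

accuracy = sum(classify(x) == y for x, y in data) / len(data)
```

On this separable toy the reconstruction product w_e*w_d approaches 1 and the head separates the latents cleanly; the paper's actual model is a deep convolutional encoder-decoder trained on B-mode ultrasound images.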
Related papers
- A Multi-Scale Framework for Out-of-Distribution Detection in Dermoscopic
Images [10.20384144853726]
We propose a multi-scale detection framework to detect out-of-distribution skin disease image data.
Our framework extracts features from different layers of the neural network.
Experiments show that the proposed framework achieves superior performance when compared with other state-of-the-art methods.
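As a rough illustration of the multi-scale idea (hypothetical toy, not the paper's framework): collect one statistic per scale of a signal, fit per-scale in-distribution statistics, and score new samples by their summed deviation.

```python
import random
import statistics

random.seed(0)

def pool(xs):
    # 2x average pooling: each application yields a coarser "layer"/scale.
    return [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs) - 1, 2)]

def multiscale_features(xs, n_scales=3):
    # One summary statistic (spread of activations) per scale.
    feats = []
    for _ in range(n_scales):
        feats.append(statistics.pstdev(xs))
        xs = pool(xs)
    return feats

# Fit per-scale in-distribution statistics on "normal" signals.
in_dist = [[random.gauss(0, 1) for _ in range(64)] for _ in range(200)]
per_scale = list(zip(*[multiscale_features(x) for x in in_dist]))
stats = [(statistics.mean(v), statistics.pstdev(v)) for v in per_scale]

def ood_score(xs):
    # Sum of per-scale z-scores; large values suggest out-of-distribution.
    return sum(abs(f - m) / (sd + 1e-8)
               for f, (m, sd) in zip(multiscale_features(xs), stats))

id_sample = [random.gauss(0, 1) for _ in range(64)]
ood_sample = [random.gauss(0, 5) for _ in range(64)]  # much wider spread
```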
arXiv Detail & Related papers (2023-01-18T13:49:35Z)
- Human-centered XAI for Burn Depth Characterization [8.967153054343775]
Burn injury classification is an important aspect of the medical AI field.
We propose an explainable human-in-the-loop framework for improving burn ultrasound classification models.
We show improvements in burn depth classification accuracy -- from 88% to 94% -- once the model is modified according to our framework.
arXiv Detail & Related papers (2022-10-24T18:37:52Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
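The DWT splits a signal into low- and high-frequency sub-bands; a 1-D Haar version (the simplest wavelet, shown as a hedged sketch rather than the paper's 2-D radiograph pipeline) makes the idea concrete: the detail band carries the sharp transitions, and the original is exactly recoverable from both bands.

```python
def haar_dwt(xs):
    # One-level Haar transform: low-pass (approximation) and
    # high-pass (detail) coefficients over adjacent pairs.
    approx = [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs), 2)]
    detail = [(xs[i] - xs[i + 1]) / 2 for i in range(0, len(xs), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    # Perfect reconstruction from both sub-bands.
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

signal = [1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0, 1.0]  # sharp edge = high frequency
approx, detail = haar_dwt(signal)
recon = haar_idwt(approx, detail)
```

The large detail coefficient marks the edge; discarding the detail band (as plain downsampling effectively does) would lose exactly this high-frequency content.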
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
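A minimal sketch of the marker-to-kernel idea (hypothetical toy, pure Python): normalize a user-marked patch into a zero-mean, unit-norm kernel and cross-correlate it with the image, so responses peak where the local pattern resembles the marked region.

```python
def make_kernel(patch):
    # Zero-mean, unit-norm kernel derived from a marked patch.
    flat = [v for row in patch for v in row]
    mean = sum(flat) / len(flat)
    centered = [[v - mean for v in row] for row in patch]
    norm = sum(v * v for row in centered for v in row) ** 0.5 or 1.0
    return [[v / norm for v in row] for row in centered]

def correlate(image, kernel):
    # Valid-mode 2-D cross-correlation.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            out[i][j] = sum(kernel[a][b] * image[i + a][j + b]
                            for a in range(kh) for b in range(kw))
    return out

# A bright diagonal "marked" by the user...
patch = [[9, 0, 0],
         [0, 9, 0],
         [0, 0, 9]]
# ...and an image containing the same motif in its bottom-right corner.
image = [[0] * 6 for _ in range(6)]
for k in range(3):
    image[3 + k][3 + k] = 9

kernel = make_kernel(patch)
response = correlate(image, kernel)
best = max((response[i][j], (i, j))
           for i in range(len(response))
           for j in range(len(response[0])))
```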
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect because it remains asymptomatic until the disease is advanced.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Multiclass Burn Wound Image Classification Using Deep Convolutional Neural Networks [0.0]
Continuous wound monitoring is important for wound specialists to allow more accurate diagnosis and optimization of management protocols.
In this study, we use a deep learning-based method to classify burn wound images into two or three different categories based on the wound conditions.
arXiv Detail & Related papers (2021-03-01T23:54:18Z)
- Automatic Recognition of the Supraspinatus Tendinopathy from Ultrasound Images using Convolutional Neural Networks [1.021325814813899]
An automatic tendinopathy recognition framework based on convolutional neural networks has been proposed.
Tendon segmentation is done through a novel network, NASUNet.
A general classification pipeline has been proposed for tendinopathy recognition.
arXiv Detail & Related papers (2020-11-23T22:41:41Z)
- Leveraging Adaptive Color Augmentation in Convolutional Neural Networks for Deep Skin Lesion Segmentation [0.0]
We propose an adaptive color augmentation technique to amplify data expression and model performance.
We qualitatively identify and verify the semantic structural features learned by the network for discriminating skin lesions against normal skin tissue.
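A bare-bones sketch of color augmentation (hypothetical simplification: the paper adapts the jitter range during training, whereas here it is a fixed hyperparameter): each RGB channel is scaled by an independent random gain and clamped to the valid range.

```python
import random

random.seed(0)

def adaptive_color_augment(pixels, max_shift=0.2):
    # Scale each RGB channel by an independent random gain, then clamp
    # the result to the valid 8-bit range [0, 255].
    gains = [1 + random.uniform(-max_shift, max_shift) for _ in range(3)]
    return [tuple(min(255, max(0, round(c * g))) for c, g in zip(px, gains))
            for px in pixels]

lesion_pixels = [(120, 80, 60), (200, 150, 100), (90, 60, 40)]
augmented = adaptive_color_augment(lesion_pixels)
```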
arXiv Detail & Related papers (2020-10-31T00:16:23Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
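The latent-dissimilarity idea can be sketched with a stand-in encoder (hypothetical toy; the paper uses a trained Variational AutoEncoder): map each slice to a latent vector, then score a slice by its distance in latent space to the closest known-healthy latent.

```python
import random

random.seed(0)

def encode(slice_pixels):
    # Stand-in for a VAE encoder: a 2-D "latent" of simple intensity stats.
    mean = sum(slice_pixels) / len(slice_pixels)
    var = sum((p - mean) ** 2 for p in slice_pixels) / len(slice_pixels)
    return (mean, var ** 0.5)

def dissimilarity(z1, z2):
    # Euclidean distance in the latent space.
    return sum((u - v) ** 2 for u, v in zip(z1, z2)) ** 0.5

# Latents of known-healthy reference slices (hypothetical data).
healthy = [[random.gauss(100, 5) for _ in range(256)] for _ in range(20)]
reference = [encode(s) for s in healthy]

def anomaly_score(slice_pixels):
    # Distance to the nearest healthy latent; high values flag anomalies.
    z = encode(slice_pixels)
    return min(dissimilarity(z, zr) for zr in reference)

normal_slice = [random.gauss(100, 5) for _ in range(256)]
tumour_slice = ([random.gauss(100, 5) for _ in range(200)] +
                [random.gauss(180, 5) for _ in range(56)])  # bright "lesion"
```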
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.