Human-centered XAI for Burn Depth Characterization
- URL: http://arxiv.org/abs/2210.13535v1
- Date: Mon, 24 Oct 2022 18:37:52 GMT
- Title: Human-centered XAI for Burn Depth Characterization
- Authors: Maxwell J. Jacobson, Daniela Chanci Arrubla, Maria Romeo Tricas, Gayle
Gordillo, Yexiang Xue, Chandan Sen, Juan Wachs
- Abstract summary: Burn injury classification is an important aspect of the medical AI field.
We propose an explainable human-in-the-loop framework for improving burn ultrasound classification models.
We show improvements in the accuracy of burn depth classification -- from 88% to 94% -- once modified according to our framework.
- Score: 8.967153054343775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Approximately 1.25 million people in the United States are treated each year
for burn injuries. Precise burn injury classification is an important aspect of
the medical AI field. In this work, we propose an explainable human-in-the-loop
framework for improving burn ultrasound classification models. Our framework
leverages an explanation system based on the LIME classification explainer to
corroborate and integrate a burn expert's knowledge -- suggesting new features
and ensuring the validity of the model. Using this framework, we discover that
B-mode ultrasound classifiers can be enhanced by supplying textural features.
More specifically, we confirm that texture features based on the Gray Level
Co-occurrence Matrix (GLCM) of ultrasound frames can increase the accuracy of
transfer learned burn depth classifiers. We test our hypothesis on real data
from porcine subjects. We show improvements in the accuracy of burn depth
classification -- from ~88% to ~94% -- once modified according to our
framework.
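The GLCM texture features mentioned in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the offset, gray-level count, and the choice of the contrast statistic below are placeholder assumptions, and contrast is only one of several Haralick features that can be derived from a GLCM.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalized co-occurrence matrix for pixel pairs offset by (dx, dy)."""
    h, w = len(image), len(image[0])
    P = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[p / total for p in row] for row in P]

def contrast(P):
    """Haralick contrast: sum over i, j of (i - j)^2 * P[i][j]."""
    return sum((i - j) ** 2 * P[i][j]
               for i, row in enumerate(P) for j, _ in enumerate(row))

# Toy 4-level "ultrasound frame" standing in for a real B-mode image.
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
print(round(contrast(glcm(img)), 4))  # → 0.5833
```

In practice such scalar texture statistics would be concatenated with (or fed alongside) the transfer-learned features of the classifier.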
Related papers
- Boundary Attention Mapping (BAM): Fine-grained saliency maps for segmentation of Burn Injuries [1.4424150304888417]
Burn injuries can result from mechanisms such as thermal, chemical, and electrical insults.
Currently, the primary approach for burn assessments, via visual and tactile observations, is approximately 60%-80% accurate.
We introduce a machine learning pipeline for assessing burn severities and segmenting the regions of skin that are affected by burn.
arXiv Detail & Related papers (2023-05-24T17:15:19Z)
- Venn Diagram Multi-label Class Interpretation of Diabetic Foot Ulcer with Color and Sharpness Enhancement [8.16095457838169]
DFU is a severe complication of diabetes that can lead to amputation of the lower limb if not treated properly.
We propose a Venn Diagram interpretation of a multi-label CNN-based method, utilizing different image enhancement strategies, to improve multi-class DFU classification.
Our proposed approach outperforms existing approaches and achieves Macro-Average F1, Recall and Precision scores of 0.6592, 0.6593, and 0.6652, respectively.
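The Macro-Average F1 reported above is the unweighted mean of per-class F1 scores, so every class counts equally regardless of size. A minimal sketch with hypothetical per-class counts (not the paper's data):

```python
def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for three classes.
per_class = [f1(8, 2, 2), f1(5, 5, 1), f1(3, 1, 3)]
macro_f1 = sum(per_class) / len(per_class)
print(round(macro_f1, 4))  # → 0.675
```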
arXiv Detail & Related papers (2023-05-01T19:06:28Z)
- Semantic Latent Space Regression of Diffusion Autoencoders for Vertebral Fracture Grading [72.45699658852304]
This paper proposes a novel approach to train a generative Diffusion Autoencoder model as an unsupervised feature extractor.
We model fracture grading as a continuous regression, which is more reflective of the smooth progression of fractures.
Importantly, the generative nature of our method allows us to visualize different grades of a given vertebra, providing interpretability and insight into the features that contribute to automated grading.
arXiv Detail & Related papers (2023-03-21T17:16:01Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
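The core of a DWT is splitting a signal into low- and high-frequency coefficients. A minimal one-level Haar transform sketch (pure Python) illustrates the idea; the paper's actual wavelet family, decomposition depth, and encoding scheme are not stated in the summary above.

```python
import math

def haar_dwt(signal):
    """One-level Haar transform: (approximation, detail) coefficient lists."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

approx, detail = haar_dwt([4.0, 6.0, 10.0, 12.0])
# approx ≈ [7.0711, 15.5563]  -> low-frequency (smooth) content
# detail ≈ [-1.4142, -1.4142] -> high-frequency (edge) content
```

Preserving the detail (high-frequency) band is what keeps fine structures such as edges available to a downstream classifier.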
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- A deep learning model for burn depth classification using ultrasound imaging [0.0]
This paper presents a deep convolutional neural network to classify burn depth based on altered tissue morphology of burned skin.
The network learns a low-dimensional manifold of the unburned skin images using an encoder-decoder architecture.
The performance metrics obtained from 20-fold cross-validation show that the model can identify deep-partial thickness burns.
arXiv Detail & Related papers (2022-03-29T20:01:22Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until an advanced stage.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification, using the largest and richest dataset of its kind to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Multiclass Burn Wound Image Classification Using Deep Convolutional Neural Networks [0.0]
Continuous wound monitoring is important for wound specialists to allow more accurate diagnosis and optimization of management protocols.
In this study, we use a deep learning-based method to classify burn wound images into two or three different categories based on the wound conditions.
arXiv Detail & Related papers (2021-03-01T23:54:18Z) - Classification of Breast Cancer Lesions in Ultrasound Images by using
Attention Layer and loss Ensembles in Deep Convolutional Neural Networks [0.0]
We propose a new framework for classification of breast cancer lesions by use of an attention module in modified VGG16 architecture.
We also propose a new ensembled loss function, combining binary cross-entropy with the logarithm of the hyperbolic cosine loss, to reduce the discrepancy between classified lesions and their labels.
The proposed model outperformed other modified VGG16 architectures with an accuracy of 93%, and the results are competitive with other state-of-the-art frameworks for classification of breast cancer lesions.
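A minimal sketch of such a combined loss for a single prediction, assuming equal weighting of the two terms (the summary does not give the combination coefficients, so the weighting here is a placeholder):

```python
import math

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy for one sample; predictions clipped to avoid log(0)."""
    p = min(max(y_pred, eps), 1 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

def log_cosh(y_true, y_pred):
    """Logarithm of the hyperbolic cosine of the prediction error."""
    return math.log(math.cosh(y_pred - y_true))

def ensembled_loss(y_true, y_pred):
    # Equal-weight combination; real weights would be tuned or learned.
    return bce(y_true, y_pred) + log_cosh(y_true, y_pred)

print(round(ensembled_loss(1.0, 0.9), 4))
```

The log-cosh term behaves like a smoothed squared error near zero, penalizing confident misclassifications less harshly than cross-entropy alone.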
arXiv Detail & Related papers (2021-02-23T06:49:12Z) - Grading Loss: A Fracture Grade-based Metric Loss for Vertebral Fracture
Detection [58.984536305767996]
We propose a representation learning-inspired approach for automated vertebral fracture detection.
We present a novel Grading Loss for learning representations that respect Genant's fracture grading scheme.
On a publicly available spine dataset, the proposed loss function achieves a fracture detection F1 score of 81.5%.
arXiv Detail & Related papers (2020-08-18T10:03:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.