HGT: A Hierarchical GCN-Based Transformer for Multimodal Periprosthetic
Joint Infection Diagnosis Using CT Images and Text
- URL: http://arxiv.org/abs/2305.18022v2
- Date: Sat, 15 Jul 2023 14:55:28 GMT
- Title: HGT: A Hierarchical GCN-Based Transformer for Multimodal Periprosthetic
Joint Infection Diagnosis Using CT Images and Text
- Authors: Ruiyang Li, Fujun Yang, Xianjie Liu and Hongwei Shi
- Abstract summary: Prosthetic Joint Infection (PJI) is a prevalent and severe complication.
Currently, a unified diagnostic standard incorporating both computed tomography (CT) images and numerical text data for PJI remains unestablished.
This study introduces a diagnostic method, HGT, based on deep learning and multimodal techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prosthetic Joint Infection (PJI) is a prevalent and severe complication
characterized by high diagnostic challenges. Currently, a unified diagnostic
standard incorporating both computed tomography (CT) images and numerical text
data for PJI remains unestablished, owing to the substantial noise in CT images
and the disparity in data volume between CT images and text data. This study
introduces a diagnostic method, HGT, based on deep learning and multimodal
techniques. It effectively merges features from CT scan images and patients'
numerical text data via a Unidirectional Selective Attention (USA) mechanism
and a graph convolutional network (GCN)-based feature fusion network. We
evaluated the proposed method on a custom-built multimodal PJI dataset,
assessing its performance through ablation experiments and interpretability
evaluations. Our method achieved an accuracy (ACC) of 91.4\% and an area under
the curve (AUC) of 95.9\%, outperforming recent multimodal approaches by 2.9\%
in ACC and 2.2\% in AUC, with a parameter count of only 68M. Notably, the
interpretability results highlighted our model's strong focus and localization
capabilities at lesion sites. This proposed method could provide clinicians
with additional diagnostic tools to enhance accuracy and efficiency in clinical
practice.
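The abstract names two fusion components: a Unidirectional Selective Attention (USA) mechanism that merges CT-image features with numerical text features, and a GCN-based fusion network. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch under the assumption that "unidirectional" means the (small) text modality queries the (large) set of CT-patch features one way only, with the fused tokens then passed through one symmetrically normalized graph-convolution layer. All shapes, the residual connection, and the fully connected toy graph are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unidirectional_selective_attention(text_feats, img_feats):
    """Hypothetical USA sketch: text tokens attend to CT-patch features,
    but not the reverse, so the low-volume text modality selects
    information from the noisy, high-volume image modality."""
    d = text_feats.shape[-1]
    scores = text_feats @ img_feats.T / np.sqrt(d)   # (T, P) attention scores
    weights = softmax(scores, axis=-1)
    return text_feats + weights @ img_feats          # residual fusion, shape (T, d)

def gcn_layer(node_feats, adj, weight):
    """One graph-convolution step with symmetric degree normalization."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt
    return np.maximum(a_norm @ node_feats @ weight, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
text = rng.standard_normal((4, 32))    # 4 numerical-text feature tokens
imgs = rng.standard_normal((16, 32))   # 16 CT-patch feature vectors
fused = unidirectional_selective_attention(text, imgs)
adj = np.ones((4, 4))                  # toy fully connected graph (self-loops included)
out = gcn_layer(fused, adj, rng.standard_normal((32, 32)))
print(fused.shape, out.shape)          # (4, 32) (4, 32)
```

The one-way attention keeps the parameter count low (the paper reports only 68M total), since the large image token set never builds queries of its own.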
Related papers
- Bridging the Diagnostic Divide: Classical Computer Vision and Advanced AI methods for distinguishing ITB and CD through CTE Scans [2.900410045439515]
A consensus among radiologists has recognized the visceral-to-subcutaneous fat ratio as a surrogate biomarker for differentiating between ITB and CD.
We propose a novel 2D image computer vision algorithm for auto-segmenting subcutaneous fat to automate this ratio calculation.
We trained a ResNet10 model on a dataset of CTE scans with samples from ITB, CD, and normal patients, achieving an accuracy of 75%.
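Once both fat compartments are segmented, the surrogate biomarker reduces to a ratio of mask volumes. A minimal sketch, assuming binary masks and isotropic voxels (the function name and tie to any particular scanner geometry are illustrative):

```python
import numpy as np

def fat_ratio(visceral_mask, subcutaneous_mask, voxel_volume_mm3=1.0):
    """Visceral-to-subcutaneous fat ratio from binary segmentation masks.

    The voxel volume cancels in the ratio, but it is kept as a parameter
    so absolute fat volumes could also be reported."""
    visceral = visceral_mask.sum() * voxel_volume_mm3
    subcutaneous = subcutaneous_mask.sum() * voxel_volume_mm3
    if subcutaneous == 0:
        raise ValueError("empty subcutaneous mask")
    return visceral / subcutaneous

# Toy masks: 30 visceral voxels vs. 60 subcutaneous voxels
v = np.zeros((10, 10), dtype=bool); v[:3, :] = True
s = np.zeros((10, 10), dtype=bool); s[3:9, :] = True
print(fat_ratio(v, s))  # 0.5
```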
arXiv Detail & Related papers (2024-10-23T17:05:27Z) - Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z) - CC-DCNet: Dynamic Convolutional Neural Network with Contrastive Constraints for Identifying Lung Cancer Subtypes on Multi-modality Images [13.655407979403945]
We propose a novel deep learning network designed to accurately classify lung cancer subtype with multi-dimensional and multi-modality images.
The strength of the proposed model lies in its ability to dynamically process both paired CT-pathological image sets and independent CT image sets.
We also develop a contrastive constraint module, which quantitatively maps the cross-modality associations through network training.
arXiv Detail & Related papers (2024-07-18T01:42:00Z) - Radiology Report Generation Using Transformers Conditioned with
Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
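One common way to realize this kind of conditioning is to embed the demographic fields into the same space as the CNN's visual features and concatenate the two token sequences before the transformer encoder, so self-attention can mix image content with patient context. The sketch below illustrates only this input construction; all shapes and the choice of demographic fields are assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Assumed setup: a CNN backbone yields a 7x7 grid of feature vectors from
# the CXR, and each demographic field (e.g. age, sex) is embedded into the
# same d_model-dimensional space.
visual_tokens = rng.standard_normal((49, d_model))   # flattened 7x7 feature map
demo_tokens = rng.standard_normal((2, d_model))      # age + sex embeddings (illustrative)

# Concatenate along the sequence axis to form the encoder input.
encoder_input = np.concatenate([demo_tokens, visual_tokens], axis=0)
print(encoder_input.shape)  # (51, 64)
```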
arXiv Detail & Related papers (2023-11-18T14:52:26Z) - A Transformer-based representation-learning model with unified
processing of multimodal input for clinical diagnostics [63.106382317917344]
We report a Transformer-based representation-learning model as a clinical diagnostic aid that processes multimodal input in a unified manner.
The unified model outperformed an image-only model and non-unified multimodal diagnosis models in the identification of pulmonary diseases.
arXiv Detail & Related papers (2023-06-01T16:23:47Z) - Enhancing COVID-19 Severity Analysis through Ensemble Methods [13.792760290422185]
This paper presents a domain knowledge-based pipeline for extracting regions of infection in COVID-19 patients.
The severity of the infection is then classified into different categories using an ensemble of three machine-learning models.
The proposed system was evaluated on a validation dataset in the AI-Enabled Medical Image Analysis Workshop and COVID-19 Diagnosis Competition.
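The simplest way to combine three classifiers' severity labels is a majority vote. The paper does not state its combination rule, so the sketch below is a generic illustration; the tie-breaking choice (fall back to the first model) is an assumption.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model severity labels by majority vote; ties are
    broken by the first model's prediction (an assumed rule)."""
    counts = Counter(predictions).most_common()
    best_label, best_n = counts[0]
    if sum(1 for _, n in counts if n == best_n) > 1:
        return predictions[0]   # tie: defer to the first model
    return best_label

# Three hypothetical model outputs for one patient
print(majority_vote(["moderate", "severe", "moderate"]))  # moderate
```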
arXiv Detail & Related papers (2023-03-13T13:59:47Z) - Mediastinal Lymph Node Detection and Segmentation Using Deep Learning [1.7188280334580195]
In clinical practice, computed tomography (CT) and positron emission tomography (PET) imaging are used to detect abnormal lymph nodes (LNs).
Deep convolutional neural networks are frequently used to segment structures in medical images.
The well-established UNet architecture was modified with a bilinear and total generalized variation (TGV)-based upsampling strategy to segment and detect mediastinal lymph nodes.
The modified UNet preserves texture discontinuities, handles noisy regions, finds an appropriate balance point through backpropagation, and restores image resolution.
arXiv Detail & Related papers (2022-11-24T02:55:20Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Incremental Cross-view Mutual Distillation for Self-supervised Medical
CT Synthesis [88.39466012709205]
This paper synthesizes intermediate medical slices to increase the between-slice resolution.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.