DeepLOC: Deep Learning-based Bone Pathology Localization and
Classification in Wrist X-ray Images
- URL: http://arxiv.org/abs/2308.12727v1
- Date: Thu, 24 Aug 2023 12:06:10 GMT
- Title: DeepLOC: Deep Learning-based Bone Pathology Localization and
Classification in Wrist X-ray Images
- Authors: Razan Dibo and Andrey Galichin and Pavel Astashev and Dmitry V. Dylov
and Oleg Y. Rogov
- Abstract summary: This paper presents a novel approach for bone pathology localization and classification in wrist X-ray images.
The proposed methodology addresses two critical challenges in wrist X-ray analysis: accurate localization of bone pathologies and precise classification of abnormalities.
- Score: 1.45543311565555
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, computer-aided diagnosis systems have shown great potential
in assisting radiologists with accurate and efficient medical image analysis.
This paper presents a novel approach for bone pathology localization and
classification in wrist X-ray images using a combination of YOLO (You Only Look
Once) and the Shifted Window Transformer (Swin) with a newly proposed block.
The proposed methodology addresses two critical challenges in wrist X-ray
analysis: accurate localization of bone pathologies and precise classification
of abnormalities. The YOLO framework is employed to detect and localize bone
pathologies, leveraging its real-time object detection capabilities.
Additionally, the Swin, a transformer-based module, is utilized to extract
contextual information from the localized regions of interest (ROIs) for
accurate classification.
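The two-stage structure described above (a YOLO-style detector proposing pathology ROIs, followed by a Swin-style classifier on each cropped region) can be sketched in plain Python. This is a minimal illustrative sketch only: `detect_rois` and `classify_roi` are hypothetical stand-ins for the actual trained YOLO and Swin models, and the box format and confidence threshold are assumptions, not details from the paper.

```python
# Hypothetical sketch of the two-stage DeepLOC pipeline: stage 1 localizes
# pathology ROIs (YOLO-style detector), stage 2 classifies each cropped ROI
# (Swin-style classifier). Both model functions below are illustrative stubs,
# not the authors' networks.

def detect_rois(image):
    """Stand-in for a YOLO detector: returns (x, y, w, h, confidence) boxes."""
    h, w = len(image), len(image[0])
    # Pretend the detector found one box covering the central quarter.
    return [(w // 4, h // 4, w // 2, h // 2, 0.9)]

def crop(image, box):
    """Cut the ROI out of the full image so the classifier sees local context."""
    x, y, w, h, _ = box
    return [row[x:x + w] for row in image[y:y + h]]

def classify_roi(roi):
    """Stand-in for a Swin classifier: maps an ROI to a pathology label."""
    mean = sum(sum(row) for row in roi) / (len(roi) * len(roi[0]))
    return "fracture" if mean > 0.5 else "normal"

def deeploc_pipeline(image, conf_threshold=0.5):
    """Detect ROIs, drop low-confidence boxes, classify each remaining crop."""
    results = []
    for box in detect_rois(image):
        if box[4] < conf_threshold:
            continue  # discard low-confidence detections
        results.append((box[:4], classify_roi(crop(image, box))))
    return results

# Usage with a toy 8x8 "image" of bright pixels:
image = [[1.0] * 8 for _ in range(8)]
predictions = deeploc_pipeline(image)  # list of (box, label) pairs
```

The design point the abstract makes is the division of labor: the detector handles spatial localization in real time, while the transformer extracts richer context from each small ROI than it could from the full radiograph.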
Related papers
- A novel approach towards the classification of Bone Fracture from Musculoskeletal Radiography images using Attention Based Transfer Learning [0.0]
We deploy an attention-based transfer learning model to detect bone fractures in X-ray scans.
Our model achieves a state-of-the-art accuracy of more than 90% in fracture classification.
arXiv Detail & Related papers (2024-10-18T19:07:24Z)
- Hierarchical Salient Patch Identification for Interpretable Fundus Disease Localization [4.714335699701277]
We propose a weakly supervised interpretable fundus disease localization method called hierarchical salient patch identification (HSPI).
HSPI can achieve interpretable disease localization using only image-level labels and a neural network classifier (NNC).
We conduct disease localization experiments on fundus image datasets and achieve the best performance on multiple evaluation metrics compared to previous interpretable attribution methods.
arXiv Detail & Related papers (2024-05-23T09:07:21Z)
- Local Contrastive Learning for Medical Image Recognition [0.0]
Local Region Contrastive Learning (LRCLR) is a flexible fine-tuning framework that adds layers for significant image region selection and cross-modality interaction.
Our results on an external validation set of chest x-rays suggest that LRCLR identifies significant local image regions and provides meaningful interpretation against radiology text.
arXiv Detail & Related papers (2023-03-24T17:04:26Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z) - Radiomics-Guided Global-Local Transformer for Weakly Supervised
Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses textitglobal image information with textitlocal knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z) - Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
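The KNNS idea above can be read generically as averaging each sample's predicted probability with those of its nearest neighbors in feature space. The sketch below is a hedged, generic interpretation, not the authors' implementation; the function name, distance metric, and inclusion of the sample itself among its neighbors are assumptions.

```python
# Generic sketch of K-nearest-neighbor smoothing of classifier outputs:
# each sample's probability is replaced by the mean over its k nearest
# neighbors in feature space (plus itself). Not the paper's actual method.

def knn_smooth(features, probs, k=2):
    """Smooth probs by averaging over each sample's k nearest features."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    smoothed = []
    for i, f in enumerate(features):
        # Rank all samples by distance to sample i (i itself has distance 0).
        order = sorted(range(len(features)), key=lambda j: dist(f, features[j]))
        neighbors = order[: k + 1]  # the sample plus its k nearest neighbors
        smoothed.append(sum(probs[j] for j in neighbors) / len(neighbors))
    return smoothed
```

Smoothing of this kind trades a little per-sample sharpness for robustness to noisy labels, which matches the entry's motivation that large-scale chest X-ray annotation is expensive and error-prone.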
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on the same dataset it was trained on starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.