A Practical Framework for ROI Detection in Medical Images -- a case
study for hip detection in anteroposterior pelvic radiographs
- URL: http://arxiv.org/abs/2103.01584v1
- Date: Tue, 2 Mar 2021 09:21:08 GMT
- Title: A Practical Framework for ROI Detection in Medical Images -- a case
study for hip detection in anteroposterior pelvic radiographs
- Authors: Feng-Yu Liu, Chih-Chi Chen, Shann-Ching Chen, Chien-Hung Liao
- Abstract summary: We proposed a practical framework for ROI detection in medical images, with a case study of hip detection in anteroposterior (AP) pelvic radiographs.
We conducted a retrospective study that analyzed hip joints seen on 7,399 AP pelvic radiographs from three diverse sources.
Our method achieved average intersection over union (IoU)=0.8115, average confidence=0.9812, and average precision at IoU threshold 0.5 (AP50)=0.9901 on the independent test set.
- Score: 2.007676195550049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: Automated detection of regions of interest (ROIs) is a critical step
in many medical image applications, such as heart ROI detection in perfusion MRI,
lung boundary detection in chest X-rays, and femoral head detection in pelvic
radiographs. Thus, we proposed a practical framework for ROI detection in medical
images, with a case study of hip detection in anteroposterior (AP) pelvic radiographs.
Materials and Methods: We conducted a retrospective study that analyzed hip
joints seen on 7,399 AP pelvic radiographs from three diverse sources:
4,290 high-resolution radiographs from the Chang Gung Memorial Hospital
Osteoarthritis dataset, 3,008 low- to medium-resolution radiographs from the
Osteoarthritis Initiative, and 101 heterogeneous radiographs retrieved via the
Google image search engine. We presented a deep learning-based ROI detection
framework built on a single-shot multibox detector (SSD) with a ResNet-101
backbone and a head structure customized to the characteristics of the obtained
datasets; ground truths were labeled by non-medical annotators in a simple
graphical interface.
Results: Our method achieved average intersection over union (IoU)=0.8115,
average confidence=0.9812, and average precision at IoU threshold 0.5
(AP50)=0.9901 on the independent test set, suggesting that the detected hip
regions appropriately cover the main features of the hip joints.
Conclusion: The proposed approach features low-cost labeling, data-driven
model design, and heterogeneous data testing. We have demonstrated the
feasibility of training a robust hip region detector for AP pelvic radiographs.
This practical framework has promising potential for a wide range of medical
image applications.
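
The Methods above describe an SSD detector with a ResNet-101 backbone and a customized head. The snippet below is a minimal sketch of that kind of architecture using torchvision's generic SSD class; the backbone truncation point, anchor settings, input size, and the single "hip" foreground class are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the authors' exact model): an SSD-style detector with a
# ResNet-101 backbone assembled from torchvision's generic SSD class.
import torch
from torch import nn
from torchvision.models import resnet101
from torchvision.models.detection.ssd import SSD
from torchvision.models.detection.anchor_utils import DefaultBoxGenerator


class ResNet101Features(nn.Module):
    """ResNet-101 truncated after layer3, returning a single feature map."""

    def __init__(self):
        super().__init__()
        trunk = resnet101(weights=None)
        self.body = nn.Sequential(
            trunk.conv1, trunk.bn1, trunk.relu, trunk.maxpool,
            trunk.layer1, trunk.layer2, trunk.layer3,
        )
        self.out_channels = [1024]  # channel count of the layer3 output

    def forward(self, x):
        return self.body(x)


# One set of anchor aspect ratios per feature map (here: a single map).
anchor_generator = DefaultBoxGenerator(aspect_ratios=[[2]])

model = SSD(
    backbone=ResNet101Features(),
    anchor_generator=anchor_generator,
    size=(512, 512),   # resize target used by the internal transform (assumed)
    num_classes=2,     # background + hip joint (assumed labeling scheme)
)

model.eval()
with torch.no_grad():
    # One dummy radiograph replicated to 3 channels.
    detections = model([torch.rand(3, 512, 512)])
print(detections[0]["boxes"].shape, detections[0]["scores"].shape)
```

During training, the same model is called with images and target boxes/labels and returns a loss dictionary; the paper's customized head and training schedule are not reproduced here.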
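
The Results report average IoU and AP50 (average precision at an IoU threshold of 0.5). As a quick reference, the sketch below computes IoU for a pair of hypothetical hip boxes and applies the 0.5 threshold used by AP50; the coordinates are made up for illustration.

```python
# Hedged illustration of the evaluation criterion: a predicted box counts as a
# true positive for AP50 when its IoU with a ground-truth box is >= 0.5.
# The box coordinates below are hypothetical, not from the paper's data.
import torch
from torchvision.ops import box_iou

pred_boxes = torch.tensor([[120.0, 200.0, 380.0, 460.0]])  # (x1, y1, x2, y2)
gt_boxes = torch.tensor([[130.0, 210.0, 390.0, 470.0]])

iou = box_iou(pred_boxes, gt_boxes)   # pairwise IoU matrix, shape (1, 1)
print(f"IoU = {iou[0, 0]:.4f}")
print("counts toward AP50:", bool(iou[0, 0] >= 0.5))
```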
Related papers
- Preoperative Rotator Cuff Tear Prediction from Shoulder Radiographs using a Convolutional Block Attention Module-Integrated Neural Network [0.04590531202809992] (2024-08-19)
  We test whether a plain shoulder radiograph can be used together with deep learning methods to identify patients with rotator cuff tears.
  By integrating convolutional block attention modules into a deep neural network, our model demonstrates high accuracy in detecting patients with rotator cuff tears.
- Leveraging Foundation Models for Content-Based Medical Image Retrieval in Radiology [0.14631663747888957] (2024-03-11)
  Content-based image retrieval (CBIR) has the potential to significantly improve diagnostic aid and medical research in radiology.
  Current CBIR systems face limitations due to their specialization to certain pathologies, limiting their utility.
  We propose using vision foundation models as powerful and versatile off-the-shelf feature extractors for content-based medical image retrieval.
- Large-scale Long-tailed Disease Diagnosis on Radiology Images [51.453990034460304] (2023-12-26)
  RadDiag is a foundational model supporting 2D and 3D inputs across various modalities and anatomies.
  Our dataset, RP3D-DiagDS, contains 40,936 cases with 195,010 scans covering 5,568 disorders.
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545] (2023-11-18)
  Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
  Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
  We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
- TRUSTED: The Paired 3D Transabdominal Ultrasound and CT Human Data for Kidney Segmentation and Registration Research [42.90853857929316] (2023-10-19)
  Inter-modal image registration (IMIR) and image segmentation with abdominal ultrasound (US) data have many important clinical applications.
  We propose TRUSTED (the Tridimensional Ultra Sound TomodEnsitometrie dataset), comprising paired transabdominal 3DUS and CT kidney images from 48 human patients.
- Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification [42.75911994044675] (2023-07-02)
  We present a novel approach for unpaired image-to-image translation of prostate MRIs and an uncertainty-aware training approach for classifying clinically significant PCa.
  Our approach involves a novel pipeline for translating unpaired 3.0T multi-parametric prostate MRIs to 1.5T, thereby augmenting the available training data.
  Our experiments demonstrate that the proposed method significantly improves the Area Under the ROC Curve (AUC) by over 20% compared to previous work.
- BMD-GAN: Bone mineral density estimation using x-ray image decomposition into projections of bone-segmented quantitative computed tomography using hierarchical learning [1.8762753243053634] (2022-07-07)
  We propose an approach that uses QCT to train a generative adversarial network (GAN) and decompose an x-ray image into a projection of bone-segmented QCT.
  The evaluation of 200 patients with osteoarthritis using the proposed method demonstrated a Pearson correlation coefficient of 0.888 between the predicted and ground-truth values.
- Generative Residual Attention Network for Disease Detection [51.60842580044539] (2021-10-25)
  We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
  We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
  We then use the generated X-ray image in the target domain to augment our training and improve detection performance.
- Development of the algorithm for differentiating bone metastases and trauma of the ribs in bone scintigraphy and demonstration of visual evidence of the algorithm -- Using only anterior bone scan view of thorax [0.0] (2021-09-30)
  There is no report of an AI model that determines whether RI accumulation in the ribs is due to bone metastasis or trauma using only the anterior thorax view of bone scintigraphy.
  We developed an algorithm to classify and diagnose whether RI accumulation on the ribs is bone metastasis or trauma using only the anterior bone scan view of the thorax.
- Sensitivity and Specificity Evaluation of Deep Learning Models for Detection of Pneumoperitoneum on Chest Radiographs [0.8437813529429724] (2020-10-17)
  State-of-the-art deep learning models (ResNet101, InceptionV3, DenseNet161, and ResNeXt101) were trained on a subset of this dataset.
  The DenseNet161 model was able to accurately classify radiographs from different imaging systems.
- A Convolutional Approach to Vertebrae Detection and Labelling in Whole Spine MRI [70.04389979779195] (2020-07-06)
  We propose a novel convolutional method for the detection and identification of vertebrae in whole spine MRIs.
  This involves using a learnt vector field to group detected vertebrae corners together into individual vertebral bodies.
  We demonstrate the clinical applicability of this method, using it for automated scoliosis detection in both lumbar and whole spine MR scans.