A Dataset for Deep Learning-based Bone Structure Analyses in Total Hip
Arthroplasty
- URL: http://arxiv.org/abs/2306.04579v1
- Date: Wed, 7 Jun 2023 16:28:53 GMT
- Title: A Dataset for Deep Learning-based Bone Structure Analyses in Total Hip
Arthroplasty
- Authors: Kaidong Zhang, Ziyang Gan, Dong Liu, Xifu Shang
- Abstract summary: Total hip arthroplasty (THA) is a widely used surgical procedure in orthopedics.
Deep learning technologies are promising but require high-quality labeled data for learning.
We propose an efficient data annotation pipeline for producing a deep learning-oriented dataset.
- Score: 8.604089365903029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Total hip arthroplasty (THA) is a widely used surgical procedure in
orthopedics. For THA, it is of clinical significance to analyze the bone
structure from the CT images, especially to observe the structure of the
acetabulum and femoral head, before the surgical procedure. For such bone
structure analyses, deep learning technologies are promising but require
high-quality labeled data for learning, while data labeling is costly.
We address this issue and propose an efficient data annotation pipeline for
producing a deep learning-oriented dataset. Our pipeline consists of
non-learning-based bone extraction (BE) and acetabulum and femoral head
segmentation (AFS), and active-learning-based annotation refinement (AAR). For
BE we use the classic graph-cut algorithm. For AFS we propose an improved
algorithm, including femoral head boundary localization using first-order and
second-order gradient regularization, line-based non-maximum suppression, and
anatomy prior-based femoral head extraction. For AAR, we refine the
algorithm-produced pseudo labels with the help of trained deep models: we
measure the uncertainty based on the disagreement between the original pseudo
labels and the deep model predictions, and then select the samples with the
largest uncertainty to ask for manual labeling. Using the proposed pipeline, we
construct a large-scale bone structure analysis dataset from more than 300
clinical and diverse CT scans. We perform careful manual labeling for the test
set of our data. We then benchmark multiple state-of-the-art deep
learning-based methods of medical image segmentation using the training and
test sets of our data. The extensive experimental results validate the efficacy
of the proposed data annotation pipeline. The dataset, related codes and models
will be publicly available at https://github.com/hitachinsk/THA.
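As a rough illustration of the active-learning-based annotation refinement (AAR) step, the sketch below scores each scan by the disagreement between its algorithm-produced pseudo label and a trained model's prediction, taking 1 minus the mean Dice over the foreground classes as the uncertainty, and returns the most uncertain scans. The function names, label values, and top-k selection are illustrative assumptions, not the released implementation.
```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def disagreement_uncertainty(pseudo, pred, labels=(1, 2)):
    """Uncertainty of one CT scan: 1 minus the mean Dice between its pseudo
    label and the deep model prediction over the foreground classes
    (e.g. acetabulum and femoral head)."""
    scores = [dice(pseudo == c, pred == c) for c in labels]
    return 1.0 - float(np.mean(scores))

def select_for_manual_labeling(pseudo_labels, predictions, k=20):
    """Rank scans by disagreement and return the indices of the k most
    uncertain ones, i.e. the candidates to send for manual annotation."""
    u = [disagreement_uncertainty(p, q) for p, q in zip(pseudo_labels, predictions)]
    return list(np.argsort(u)[::-1][:k])
```
The selected scans would then be labeled manually and fed back into training, as described in the abstract.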
Related papers
- Benefit from public unlabeled data: A Frangi filtering-based pretraining network for 3D cerebrovascular segmentation [8.611575147737147]
We construct the largest preprocessed unlabeled TOF-MRA datasets to date.
We propose a simple yet effective pretraining strategy based on Frangi filtering (see the sketch below).
The results demonstrate the superior performance of our model, with an improvement of approximately 3%.
arXiv Detail & Related papers (2023-12-23T14:47:21Z)
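A minimal sketch of how the Frangi-filtering pretraining idea above could turn unlabeled TOF-MRA volumes into a training signal: the multiscale vesselness map is binarized into a rough vessel pseudo-label on which a segmentation network can be pretrained. The sigma scales and threshold are illustrative assumptions, not values from the paper.
```python
import numpy as np
from skimage.filters import frangi  # multiscale vesselness filter, works on 2D and 3D images

def frangi_pseudo_label(volume, sigmas=(1, 2, 3), threshold=0.05):
    """Turn an unlabeled TOF-MRA volume into a rough vessel pseudo-label.

    `volume` is a float array scaled to [0, 1]; vessels appear bright on
    a dark background, hence black_ridges=False."""
    vesselness = frangi(volume, sigmas=sigmas, black_ridges=False)
    pseudo_label = (vesselness > threshold).astype(np.uint8)
    return pseudo_label, vesselness
```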
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a learnable weight initialization approach for hybrid medical image segmentation models.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Does Deep Learning REALLY Outperform Non-deep Machine Learning for Clinical Prediction on Physiological Time Series? [11.901347806586234]
We systematically examine the performance of machine learning models for the clinical prediction task based on the EHR.
Ten baseline machine learning models are compared, including 3 deep learning methods and 7 non-deep learning methods.
The results show that deep learning indeed outperforms non-deep learning, but only under certain conditions.
arXiv Detail & Related papers (2022-11-11T07:09:49Z)
- RibSeg v2: A Large-scale Benchmark for Rib Labeling and Anatomical Centerline Extraction [49.715490897822264]
We extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2.
Based on RibSeg v2, we develop a pipeline that includes deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction (see the sketch below).
arXiv Detail & Related papers (2022-10-18T00:55:37Z)
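The centerline-extraction step of the RibSeg v2 pipeline above can be approximated with off-the-shelf 3D skeletonization. The sketch below is an illustration under that assumption, not the authors' implementation: it thins a binary rib mask and returns the skeleton voxels as an approximate centerline point cloud.
```python
import numpy as np
from skimage.morphology import skeletonize  # uses Lee's thinning method for 3D inputs

def rib_centerline_points(rib_mask):
    """Approximate centerline of a binary 3D rib mask.

    Returns an (N, 3) array of skeleton voxel coordinates (z, y, x);
    grouping the points per rib and ordering them along each rib would
    be a separate post-processing step."""
    skeleton = skeletonize(rib_mask.astype(bool))
    return np.argwhere(skeleton)
```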
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic in clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Curriculum learning for improved femur fracture classification: scheduling data with prior knowledge and uncertainty [36.54112505898611]
We propose a method for the automatic classification of proximal femur fractures into 3 and 7 AO classes based on a Convolutional Neural Network (CNN).
Our formulation unites three curriculum strategies: individually weighting training samples, reordering the training set, and sampling subsets of data (see the sketch below).
The curriculum improves proximal femur fracture classification up to the performance of experienced trauma surgeons.
arXiv Detail & Related papers (2020-07-31T14:28:33Z)
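One of the curriculum strategies listed in the entry above, reordering the training set from easy to hard according to a per-sample difficulty score, can be sketched as follows. The score source (prior knowledge or model uncertainty) and the annealing schedule are illustrative assumptions, not the paper's exact formulation.
```python
import numpy as np

def curriculum_order(difficulty, epoch, total_epochs, bias=5.0, rng=None):
    """Return a training order biased toward easy samples in early epochs.

    `difficulty` holds one score per sample (e.g. derived from prior
    knowledge or model uncertainty); the bias is annealed away as
    training progresses, ending in an almost uniform shuffle."""
    rng = rng or np.random.default_rng(0)
    difficulty = np.asarray(difficulty, dtype=float)
    n = len(difficulty)
    rank = np.argsort(np.argsort(difficulty)).astype(float)  # 0 = easiest sample
    t = epoch / max(1, total_epochs - 1)                     # 0 -> 1 over training
    weights = np.exp(-bias * (1.0 - t) * rank / n)           # favour easy samples early
    weights /= weights.sum()
    return rng.choice(n, size=n, replace=False, p=weights)
```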
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting enlarged dataset can significantly improve the ability of the learned FER model.
Since training on the enlarged dataset is costly, we further apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- A generic ensemble based deep convolutional neural network for semi-supervised medical image segmentation [7.141405427125369]
We propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN).
Our method significantly improves over fully supervised learning by incorporating unlabeled data (see the pseudo-labeling sketch below).
arXiv Detail & Related papers (2020-04-16T23:41:50Z)
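One common way to realize such an ensemble-based semi-supervised scheme is to average the softmax outputs of several independently trained DCNNs on unlabeled images and keep only confident pixels as pseudo-labels for the next training round. The sketch below assumes that setup; the confidence threshold and ignore index are illustrative choices, not the paper's settings.
```python
import numpy as np

def ensemble_pseudo_labels(prob_maps, confidence=0.9, ignore_index=255):
    """Fuse per-model class probabilities into pseudo-labels.

    `prob_maps`: array of shape (n_models, n_classes, H, W) with softmax
    outputs for one unlabeled image. Pixels whose ensemble-mean maximum
    probability falls below `confidence` are marked with `ignore_index`
    so they can be excluded from the semi-supervised loss."""
    mean_prob = prob_maps.mean(axis=0)            # (n_classes, H, W)
    labels = mean_prob.argmax(axis=0)             # (H, W)
    conf = mean_prob.max(axis=0)                  # (H, W)
    return np.where(conf >= confidence, labels, ignore_index).astype(np.int64)
```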
- DeepEnroll: Patient-Trial Matching with Deep Embedding and Entailment Prediction [67.91606509226132]
Clinical trials are essential for drug development but often suffer from expensive, inaccurate and insufficient patient recruitment.
DeepEnroll is a cross-modal inference learning model that jointly encodes enrollment criteria and patient records into a shared latent space for matching inference.
arXiv Detail & Related papers (2020-01-22T17:51:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.