Synthetic vascular structure generation for unsupervised pre-training in
CTA segmentation tasks
- URL: http://arxiv.org/abs/2001.00666v1
- Date: Thu, 2 Jan 2020 23:21:22 GMT
- Title: Synthetic vascular structure generation for unsupervised pre-training in
CTA segmentation tasks
- Authors: Nil Stolt Ansó
- Abstract summary: We train a U-net architecture at a vessel segmentation task that can be used to provide insights when treating stroke patients.
We create a computational model that generates synthetic vascular structures which can be blended into unlabeled CT scans of the head.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large enough computed tomography (CT) data sets to train supervised deep
models are often hard to come by. One contributing issue is the amount of
manual labor that goes into creating ground truth labels, especially for
volumetric data. In this research, we train a U-net architecture on a vessel
segmentation task that can be used to provide insights when treating stroke
patients. We create a computational model that generates synthetic vascular
structures which can be blended into unlabeled CT scans of the head. This
unsupervised approach to labeling is used to pre-train deep segmentation
models, which are later fine-tuned on real examples to achieve an increase in
accuracy compared to models trained exclusively on a hand-labeled data set.
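The core idea of the abstract, blending a generated vascular structure into an unlabeled scan so that the synthetic voxel mask doubles as a free ground-truth label, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; all function names, Hounsfield-unit intensities, and the blending weight are assumptions:

```python
# Sketch: rasterize a synthetic tubular "vessel" into an unlabeled CT
# volume. The boolean mask used to draw it becomes a free segmentation
# label for unsupervised pre-training. Names and values are illustrative.
import numpy as np

def synthetic_vessel_mask(shape, start, direction, radius, length):
    """Rasterize a straight tube of given radius as a boolean voxel mask."""
    mask = np.zeros(shape, dtype=bool)
    zz, yy, xx = np.indices(shape)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    # Sample the centerline densely enough that spheres of `radius` overlap.
    for t in np.linspace(0.0, length, int(length) * 2):
        c = np.asarray(start) + t * direction
        dist2 = (zz - c[0]) ** 2 + (yy - c[1]) ** 2 + (xx - c[2]) ** 2
        mask |= dist2 <= radius ** 2
    return mask

def blend_vessel(ct_volume, mask, vessel_hu=300.0, alpha=0.8):
    """Alpha-blend a contrast-enhanced intensity into the scan at the mask."""
    out = ct_volume.astype(float).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * vessel_hu
    return out

# Usage: an unlabeled scan becomes an (image, label) pre-training pair.
ct = np.random.normal(40.0, 10.0, size=(32, 64, 64))   # stand-in CT volume
mask = synthetic_vessel_mask(ct.shape, start=(16, 10, 10),
                             direction=(0, 1, 1), radius=2.0, length=40)
image = blend_vessel(ct, mask)
```

A segmentation model pre-trained on many such (image, mask) pairs can then be fine-tuned on the small hand-labeled set, as the abstract describes.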
Related papers
- Enhanced segmentation of femoral bone metastasis in CT scans of patients using synthetic data generation with 3D diffusion models [0.06700983301090582]
We propose an automated data pipeline using 3D Denoising Diffusion Probabilistic Models (DDPM) to generalize on new images.
We created 5675 new volumes, then trained 3D U-Net segmentation models on real and synthetic data to compare segmentation performance.
arXiv Detail & Related papers (2024-09-17T09:21:19Z)
- Federated Foundation Model for Cardiac CT Imaging [25.98149779380328]
We conduct the largest federated cardiac CT imaging analysis to date, focusing on partially labeled datasets.
We develop a two-stage semi-supervised learning strategy that distills knowledge from several task-specific CNNs into a single transformer model.
arXiv Detail & Related papers (2024-07-10T11:30:50Z)
- Few-Shot Airway-Tree Modeling using Data-Driven Sparse Priors [0.0]
Few-shot learning approaches are cost-effective to transfer pre-trained models using only limited annotated data.
We train a data-driven sparsification module to enhance airways efficiently in lung CT scans.
We then incorporate these sparse representations in a standard supervised segmentation pipeline as a pretraining step to enhance the performance of the DL models.
arXiv Detail & Related papers (2024-07-05T13:46:11Z)
- A label-free and data-free training strategy for vasculature segmentation in serial sectioning OCT data [4.746694624239095]
Serial sectioning Optical Coherence Tomography (sOCT) is becoming increasingly popular to study post-mortem neurovasculature.
Here, we leverage synthetic datasets of vessels to train a deep learning segmentation model.
Both approaches yield similar Dice scores, although with very different false positive and false negative rates.
arXiv Detail & Related papers (2024-05-22T15:39:31Z)
- Sparse Anatomical Prompt Semi-Supervised Learning with Masked Image Modeling for CBCT Tooth Segmentation [10.617296334463942]
Tooth identification and segmentation in Cone Beam Computed Tomography (CBCT) dental images can significantly enhance the efficiency and precision of manual diagnoses performed by dentists.
Existing segmentation methods are mainly developed with large training data volumes, whose annotations are extremely time-consuming to produce.
This study proposes a task-oriented Masked Auto-Encoder paradigm to effectively utilize large amounts of unlabeled data to achieve accurate tooth segmentation with limited labeled data.
arXiv Detail & Related papers (2024-02-07T05:05:21Z)
- Towards Unifying Anatomy Segmentation: Automated Generation of a Full-body CT Dataset via Knowledge Aggregation and Anatomical Guidelines [113.08940153125616]
We generate a dataset of whole-body CT scans with 142 voxel-level labels for 533 volumes providing comprehensive anatomical coverage.
Our proposed procedure does not rely on manual annotation during the label aggregation stage.
We release our trained unified anatomical segmentation model capable of predicting 142 anatomical structures on CT data.
arXiv Detail & Related papers (2023-07-25T09:48:13Z)
- Automated Labeling of German Chest X-Ray Radiology Reports using Deep Learning [50.591267188664666]
We propose a deep learning-based CheXpert label prediction model, pre-trained on reports labeled by a rule-based German CheXpert model.
Our results demonstrate the effectiveness of our approach, which significantly outperformed the rule-based model on all three tasks.
arXiv Detail & Related papers (2023-06-09T16:08:35Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
- Improving Semantic Segmentation via Self-Training [75.07114899941095]
We show that we can obtain state-of-the-art results using a semi-supervised approach, specifically a self-training paradigm.
We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets.
arXiv Detail & Related papers (2020-04-30T17:09:17Z)
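The teacher/pseudo-label loop described in the self-training entries above can be sketched in miniature. A one-dimensional threshold classifier stands in here for the real teacher and student networks; the data, names, and numbers are illustrative assumptions, not any paper's actual setup:

```python
# Sketch of self-training: (1) fit a teacher on the small labeled set,
# (2) use it to pseudo-label a larger unlabeled pool, (3) fit a student
# on the union of real and pseudo labels. A toy threshold "model" stands
# in for a segmentation network.
import numpy as np

def fit_threshold(x, y):
    """Toy 'model': pick the threshold that best separates the two classes."""
    candidates = np.unique(x)
    scores = [np.mean((x >= t).astype(int) == y) for t in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
# Small labeled set and a larger unlabeled pool from the same two-class mixture.
x_lab = np.concatenate([rng.normal(0, 1, 20), rng.normal(4, 1, 20)])
y_lab = np.concatenate([np.zeros(20, int), np.ones(20, int)])
x_unl = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 1, 500)])

teacher_t = fit_threshold(x_lab, y_lab)            # step 1: teacher
pseudo = (x_unl >= teacher_t).astype(int)          # step 2: pseudo labels
student_t = fit_threshold(                         # step 3: student on union
    np.concatenate([x_lab, x_unl]),
    np.concatenate([y_lab, pseudo]),
)
```

The benefit reported by these papers comes from the student seeing far more (pseudo-labeled) data than the teacher did, which regularizes it toward the large-scale data distribution.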
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.