CBCTLiTS: A Synthetic, Paired CBCT/CT Dataset For Segmentation And Style Transfer
- URL: http://arxiv.org/abs/2407.14853v1
- Date: Sat, 20 Jul 2024 11:47:20 GMT
- Title: CBCTLiTS: A Synthetic, Paired CBCT/CT Dataset For Segmentation And Style Transfer
- Authors: Maximilian E. Tschuchnig, Philipp Steininger, Michael Gadermayr
- Abstract summary: We present CBCTLiTS, a synthetically generated, labelled CBCT dataset for segmentation with paired and aligned, high quality computed tomography data.
The CBCT data is provided in 5 different levels of quality, ranging from a large number of projections with high visual quality to a small number of projections with severe artifacts.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Medical imaging is vital in computer-assisted intervention. In particular, cone beam computed tomography (CBCT), with de facto real-time and mobility capabilities, plays an important role. However, CBCT images often suffer from artifacts, which pose challenges for accurate interpretation, motivating research into advanced algorithms for more effective use in clinical practice. In this work we present CBCTLiTS, a synthetically generated, labelled CBCT dataset for segmentation with paired and aligned, high-quality computed tomography data. The CBCT data is provided in 5 different levels of quality, ranging from a large number of projections with high visual quality and mild artifacts to a small number of projections with severe artifacts. This allows thorough investigations with the quality as a degree of freedom. We also provide baselines for several possible research scenarios, such as uni- and multimodal segmentation, multitask learning, and style transfer followed by segmentation, from relatively simple liver segmentation to complex liver tumor segmentation. CBCTLiTS is accessible via https://www.kaggle.com/datasets/maximiliantschuchnig/cbct-liver-and-liver-tumor-segmentation-train-data.
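Segmentation baselines such as those mentioned above (and in several related papers below) are conventionally scored with the Dice-Sørensen coefficient. As a reference, a minimal NumPy sketch for binary masks, using toy arrays rather than actual dataset volumes, might look like this:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice-Sorensen coefficient between two binary masks: 2*|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for a liver-mask slice
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 foreground pixels, 4 overlapping
print(round(dice_coefficient(a, b), 3))  # 2*4 / (4+6) = 0.8
```

The same function applies unchanged to 3D volumes, since the reductions operate over all array elements.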
Related papers
- Med-TTT: Vision Test-Time Training model for Medical Image Segmentation
We propose Med-TTT, a visual backbone network integrated with Test-Time Training layers.
The model achieves leading performance in terms of accuracy, sensitivity, and Dice coefficient.
arXiv Detail & Related papers (2024-10-03T14:29:46Z)
- SinoSynth: A Physics-based Domain Randomization Approach for Generalizable CBCT Image Enhancement
Cone Beam Computed Tomography (CBCT) finds diverse applications in medicine.
The susceptibility of CBCT images to noise and artifacts undermines both their usefulness and reliability.
We present SinoSynth, a physics-based degradation model that simulates various CBCT-specific artifacts to generate a diverse set of synthetic CBCT images.
arXiv Detail & Related papers (2024-09-27T00:22:02Z)
- Multimodal Learning With Intraoperative CBCT & Variably Aligned Preoperative CT Data To Improve Segmentation
Cone-beam computed tomography (CBCT) is an important tool facilitating computer aided interventions.
While the degraded image quality can affect downstream segmentation, the availability of high quality, preoperative scans represents potential for improvements.
We propose a multimodal learning method that fuses roughly aligned CBCT and CT scans and investigate the effect of CBCT quality and misalignment on the final segmentation performance.
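The fusion of roughly aligned CBCT and CT scans can be sketched as early fusion, i.e. stacking the two modalities into a multi-channel input before feeding a segmentation network. The shapes and random arrays below are hypothetical placeholders, not actual CBCTLiTS volumes:

```python
import numpy as np

# Hypothetical paired volumes (depth, height, width); real volumes are larger.
cbct = np.random.rand(32, 64, 64).astype(np.float32)  # intraoperative, artifact-laden
ct = np.random.rand(32, 64, 64).astype(np.float32)    # preoperative, high quality

def early_fusion(cbct_vol: np.ndarray, ct_vol: np.ndarray) -> np.ndarray:
    """Stack the two modalities along a new channel axis (early fusion).
    Assumes the volumes are already roughly aligned on the same voxel grid."""
    assert cbct_vol.shape == ct_vol.shape, "paired volumes must share a grid"
    return np.stack([cbct_vol, ct_vol], axis=0)  # (2, D, H, W)

fused = early_fusion(cbct, ct)
print(fused.shape)  # (2, 32, 64, 64)
```

A two-channel input of this shape can then be passed to any volumetric segmentation backbone expecting channel-first data.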
arXiv Detail & Related papers (2024-06-17T15:31:54Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while ViTs and CNNs perform on par, with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Liver Segmentation using Turbolift Learning for CT and Cone-beam C-arm Perfusion Imaging
Time separation technique (TST) was found to improve dynamic perfusion imaging of the liver using C-arm cone-beam computed tomography (CBCT).
To apply TST using prior knowledge extracted from CT perfusion data, the liver should be accurately segmented from the CT scans.
This research proposes Turbolift learning, which trains a modified version of the multi-scale Attention UNet on different liver segmentation tasks.
arXiv Detail & Related papers (2022-07-20T19:38:50Z)
- FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset
Fetoscopy laser photocoagulation is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS).
The limited fetoscopic field of view may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS.
Computer-assisted intervention may help overcome these challenges by expanding the fetoscopic field of view through video mosaicking and providing better visualization of the vessel network.
We present a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment with a focus on creating drift-free mosaics from long duration fetoscopy videos.
arXiv Detail & Related papers (2021-06-10T17:14:27Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to support the cross-attention process and is able to overcome the imbalance between classes and the easy-dominated samples within each class.
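For intuition, a cross-attention step can be sketched as generic scaled dot-product attention, where queries from one feature branch attend to keys and values from another. This is a minimal NumPy sketch of that general mechanism, not the exact CAN formulation:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product cross-attention: each query is replaced by a
    similarity-weighted mixture of the values from the other branch."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (Nq, Nk) similarity matrix
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v                   # (Nq, d) attended features

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 8))   # e.g. features from one network branch
kv = rng.standard_normal((7, 8))  # features from the other branch
out = cross_attention(q, kv, kv)
print(out.shape)  # (5, 8)
```

In a two-branch classifier, such a step lets one image's features modulate another's before pooling and classification.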
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sørensen coefficients by $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
- A$^3$DSegNet: Anatomy-aware artifact disentanglement and segmentation network for unpaired segmentation, artifact reduction, and modality translation
CBCT images are of low-quality and artifact-laden due to noise, poor tissue contrast, and the presence of metallic objects.
There exists a wealth of artifact-free, high quality CT images with vertebra annotations.
This motivates us to build a CBCT vertebra segmentation model using unpaired CT images with annotations.
arXiv Detail & Related papers (2020-01-02T06:37:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.