Robust Pancreatic Ductal Adenocarcinoma Segmentation with
Multi-Institutional Multi-Phase Partially-Annotated CT Scans
- URL: http://arxiv.org/abs/2008.10652v1
- Date: Mon, 24 Aug 2020 18:50:30 GMT
- Title: Robust Pancreatic Ductal Adenocarcinoma Segmentation with
Multi-Institutional Multi-Phase Partially-Annotated CT Scans
- Authors: Ling Zhang, Yu Shi, Jiawen Yao, Yun Bian, Kai Cao, Dakai Jin, Jing
Xiao, Le Lu
- Abstract summary: Pancreatic ductal adenocarcinoma (PDAC) segmentation is one of the most challenging tumor segmentation tasks.
Based on a new self-learning framework, we propose to train the PDAC segmentation model using a much larger quantity of patients.
Experiment results show that our proposed method provides an absolute improvement of 6.3% Dice score over the strong baseline of nnUNet trained on annotated images.
- Score: 25.889684822655255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and automated tumor segmentation is highly desirable, since it has
great potential to increase the efficiency and reproducibility of computing
more complete tumor measurements and imaging biomarkers, compared to (often
partial) human measurements. This is probably the only viable means to enable
large-scale clinical oncology patient studies that utilize medical imaging.
Deep learning approaches have shown robust segmentation performance for
certain types of tumors, e.g., brain tumors in MRI, when a training
dataset with plenty of pixel-level, fully-annotated tumor images is available.
More often than not, however, only (very) limited
annotations are feasible to acquire, especially for hard tumors. Pancreatic
ductal adenocarcinoma (PDAC) segmentation is one of the most challenging tumor
segmentation tasks, yet critically important for clinical needs. Previous work
on PDAC segmentation is limited to the moderate amounts of annotated patient
images (n<300) from venous or venous+arterial phase CT scans. Based on a new
self-learning framework, we propose to train the PDAC segmentation model using
a much larger quantity of patients (n ≈ 1,000), with a mix of annotated and
unannotated venous or multi-phase CT images. Pseudo annotations are generated
by combining two teacher models with different PDAC segmentation specialties on
unannotated images, and can be further refined by a teaching assistant model
that identifies associated vessels around the pancreas. A student model is
trained on both manually and pseudo-annotated multi-phase images. Experimental
results show that our proposed method provides an absolute improvement of 6.3%
Dice score over the strong baseline of nnUNet trained on annotated images,
achieving performance (Dice = 0.71) similar to the inter-observer
variability between radiologists.
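The abstract's teacher/teaching-assistant/student pipeline can be sketched in miniature as follows. This is a hedged illustration, not the authors' implementation: the function names (`combine_teachers`, `refine_with_vessels`, `pseudo_label`, `dice`), the simple probability-averaging fusion, and the vessel-masking rule are all assumptions chosen for clarity.

```python
import numpy as np

def combine_teachers(prob_a, prob_b, weight_a=0.5):
    """Fuse two teacher probability maps by weighted averaging
    (a stand-in for combining teachers with different specialties)."""
    return weight_a * prob_a + (1.0 - weight_a) * prob_b

def refine_with_vessels(prob, vessel_mask):
    """Teaching-assistant step: zero out tumor probability inside
    identified peri-pancreatic vessels."""
    refined = prob.copy()
    refined[vessel_mask.astype(bool)] = 0.0
    return refined

def pseudo_label(prob, threshold=0.5):
    """Binarize the refined probability map into a pseudo annotation
    for training the student model."""
    return (prob >= threshold).astype(np.uint8)

def dice(pred, target, eps=1e-8):
    """Dice-Sørensen coefficient, the evaluation metric quoted above."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

In this sketch, the student would then be trained on the union of manual annotations and the thresholded pseudo annotations; the paper's actual fusion and refinement rules are more involved than the averaging and masking shown here.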
Related papers
- Self-supervised 3D anatomy segmentation using self-distilled masked
image transformer (SMIT) [2.7298989068857487]
Self-supervised learning has demonstrated success in medical image segmentation using convolutional networks.
We show our approach is more accurate and requires fewer fine-tuning datasets than other pretext tasks.
arXiv Detail & Related papers (2022-05-20T17:55:14Z) - Metastatic Cancer Outcome Prediction with Injective Multiple Instance
Pooling [1.0965065178451103]
We process two public datasets to set up a benchmark cohort of 341 patients in total for studying outcome prediction of metastatic cancer.
We propose two injective multiple instance pooling functions that are better suited to outcome prediction.
Our results show that multiple instance learning with injective pooling functions can achieve state-of-the-art performance in the non-small-cell lung cancer CT and head and neck CT outcome prediction benchmarking tasks.
arXiv Detail & Related papers (2022-03-09T16:58:03Z) - Deep Learning models for benign and malign Ocular Tumor Growth
Estimation [3.1558405181807574]
Clinicians often face issues in selecting a suitable image processing algorithm for medical imaging data.
A strategy for the selection of a proper model is presented here.
arXiv Detail & Related papers (2021-07-09T05:40:25Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - ESTAN: Enhanced Small Tumor-Aware Network for Breast Ultrasound Image
Segmentation [0.0]
We propose a novel deep neural network architecture, namely Enhanced Small Tumor-Aware Network (ESTAN) to accurately segment breast tumors.
ESTAN introduces two encoders to extract and fuse image context information at different scales and utilizes row-column-wise kernels in the encoder to adapt to breast anatomy.
arXiv Detail & Related papers (2020-09-27T16:42:59Z) - Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z) - Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z) - Co-Heterogeneous and Adaptive Segmentation from Multi-Source and
Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion
Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sørensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z) - Synergistic Learning of Lung Lobe Segmentation and Hierarchical
Multi-Instance Classification for Automated Severity Assessment of COVID-19
in CT Images [61.862364277007934]
We propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images.
A multi-task deep network (called M$^2$UNet) is then developed to assess the severity of COVID-19 patients.
Our M$^2$UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment.
arXiv Detail & Related papers (2020-05-08T03:16:15Z) - Weakly supervised multiple instance learning histopathological tumor
segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.