PropNet: Propagating 2D Annotation to 3D Segmentation for Gastric Tumors
on CT Scans
- URL: http://arxiv.org/abs/2305.17871v1
- Date: Mon, 29 May 2023 03:24:02 GMT
- Title: PropNet: Propagating 2D Annotation to 3D Segmentation for Gastric Tumors
on CT Scans
- Authors: Zifan Chen, Jiazheng Li, Jie Zhao, Yiting Liu, Hongfeng Li, Bin Dong,
Lei Tang, Li Zhang
- Abstract summary: This study introduces a model, utilizing human-guided knowledge and unique modules, to address the challenges of 3D tumor segmentation.
With 98 patient scans for training and 30 for validation, our method achieves significant agreement with manual annotation (Dice of 0.803) and improves efficiency.
- Score: 16.135854257728337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: **Background:** Accurate 3D CT scan segmentation of gastric tumors is pivotal
for diagnosis and treatment. The challenges lie in the irregular shapes,
blurred boundaries of tumors, and the inefficiency of existing methods.
**Purpose:** We conducted a study to introduce a model, utilizing
human-guided knowledge and unique modules, to address the challenges of 3D
tumor segmentation.
**Methods:** We developed the PropNet framework, which propagates radiologists'
knowledge from 2D annotations to the entire 3D space. The model consists of a
proposing stage for coarse segmentation and a refining stage for improved
segmentation, using two-way branches for enhanced performance and an up-down
strategy for efficiency (an illustrative sketch of this two-stage design follows
the abstract).
**Results:** With 98 patient scans for training and 30 for validation, our
method achieves significant agreement with manual annotation (Dice of 0.803)
and improves efficiency. Performance is comparable across different scenarios
and across different radiologists' annotations (Dice between 0.785 and 0.803).
Moreover, the model shows improved prognostic prediction performance (C-index
of 0.620 vs. 0.576) on an independent validation set of 42 patients with
advanced gastric cancer.
**Conclusions:** Our model generates accurate tumor segmentation efficiently
and stably, improving prognostic performance and reducing high-throughput image
reading workload. This model can accelerate the quantitative analysis of
gastric tumors and enhance downstream task performance.
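The abstract gives only a high-level description of the propose-then-refine design, so the following is a minimal, illustrative sketch of how a 2D annotation might be propagated slice by slice through a CT volume. It is not the authors' PropNet implementation: the module names (`SliceSegmenter`, `propagate`, `dice`), the tensor shapes, and the simple bidirectional slice loop standing in for the up-down strategy are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): a two-stage propose/refine pipeline
# that propagates a single annotated 2D slice through a CT volume.
# Module names, shapes, and the bidirectional slice loop are illustrative
# assumptions only.
import torch
import torch.nn as nn


class SliceSegmenter(nn.Module):
    """Toy 2D network: takes the current CT slice plus a guiding mask
    (the previous slice's prediction or a coarse proposal) and predicts
    a mask for the current slice."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, ct_slice, guide_mask):
        x = torch.cat([ct_slice, guide_mask], dim=1)  # (B, 2, H, W)
        return torch.sigmoid(self.net(x))


def propagate(volume, annotated_idx, annotation, proposer, refiner):
    """Propose a coarse mask slice by slice away from the annotated slice
    (upward and downward through the volume), then refine each proposal."""
    depth = volume.shape[0]
    masks = [None] * depth
    masks[annotated_idx] = annotation
    for direction in (+1, -1):                    # two passes: up and down
        prev = annotation
        idx = annotated_idx + direction
        while 0 <= idx < depth:
            ct = volume[idx][None, None]          # (1, 1, H, W)
            coarse = proposer(ct, prev[None, None])   # proposing stage
            refined = refiner(ct, coarse)             # refining stage
            masks[idx] = refined.squeeze()
            prev = (masks[idx] > 0.5).float()
            idx += direction
    return torch.stack(masks)                     # (depth, H, W)


def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient, the agreement metric quoted in Results."""
    pred, target = pred.flatten(), target.flatten()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    vol = torch.randn(8, 64, 64)                   # toy CT volume: 8 slices
    seed = (torch.rand(64, 64) > 0.9).float()      # toy 2D annotation on slice 4
    pred = propagate(vol, 4, seed, SliceSegmenter(), SliceSegmenter())
    print(pred.shape)                              # torch.Size([8, 64, 64])
    print(dice((pred[4] > 0.5).float(), seed).item())  # 1.0: annotated slice kept
```

The `dice` helper corresponds to the Dice similarity coefficient used as the agreement metric in the Results above; the two `SliceSegmenter` instances stand in for the proposing and refining stages, which the paper describes only at the level of the abstract.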
Related papers
- MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts [54.915060471994686]
We propose MAST-Pro, a novel framework that integrates dynamic Mixture-of-Experts (D-MoE) and knowledge-driven prompts for pan-tumor segmentation.
Specifically, text and anatomical prompts provide domain-specific priors guiding tumor representation learning, while D-MoE dynamically selects experts to balance generic and tumor-specific feature learning.
Experiments on multi-anatomical tumor datasets demonstrate that MAST-Pro outperforms state-of-the-art approaches, achieving up to a 5.20% average improvement while reducing trainable parameters by 91.04% without compromising accuracy.
arXiv Detail & Related papers (2025-03-18T15:39:44Z) - Enhancing Brain Tumor Segmentation Using Channel Attention and Transfer learning [5.062500255359342]
We present an enhanced ResUNet architecture for automatic brain tumor segmentation.
The EfficientNetB0 encoder leverages pre-trained features to improve feature extraction efficiency.
The channel attention mechanism enhances the model's focus on tumor-relevant features.
arXiv Detail & Related papers (2025-01-19T23:58:16Z) - Lumbar Spine Tumor Segmentation and Localization in T2 MRI Images Using AI [2.9746083684997418]
This study introduces a novel data augmentation technique aimed at automating spine tumor segmentation and localization through AI approaches.
A Convolutional Neural Network (CNN) architecture is employed for tumor classification. 3D vertebral segmentation and labeling techniques are used to help pinpoint the exact location of the tumors in the lumbar spine.
Results indicate remarkable performance, with 99% accuracy for tumor segmentation, 98% accuracy for tumor classification, and 99% accuracy for tumor localization achieved with the proposed approach.
arXiv Detail & Related papers (2024-05-07T05:55:50Z) - Re-DiffiNet: Modeling discrepancies in tumor segmentation using diffusion models [1.7995110894203483]
We introduce a framework called Re-DiffiNet for modeling the discrepancy between the outputs of a segmentation model like U-Net and the ground truth.
The results show an average improvement of 0.55% in the Dice score and 16.28% in HD95 from 5-fold cross-validation.
arXiv Detail & Related papers (2024-02-12T01:03:39Z) - An Optimization Framework for Processing and Transfer Learning for the
Brain Tumor Segmentation [2.0886519175557368]
We have constructed an optimization framework based on a 3D U-Net model for brain tumor segmentation.
This framework incorporates a range of techniques, including various pre-processing and post-processing techniques, and transfer learning.
On the validation datasets, this multi-modality brain tumor segmentation framework achieves average lesion-wise Dice scores of 0.79, 0.72, and 0.74 on Challenges 1, 2, and 3, respectively.
arXiv Detail & Related papers (2024-02-10T18:03:15Z) - Automated ensemble method for pediatric brain tumor segmentation [0.0]
This study introduces a novel ensemble approach using ONet and modified versions of UNet.
Data augmentation ensures robustness and accuracy across different scanning protocols.
Results indicate that this advanced ensemble approach offers promising prospects for enhanced diagnostic accuracy.
arXiv Detail & Related papers (2023-08-14T15:29:32Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieve similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Evaluating the Effectiveness of 2D and 3D Features for Predicting Tumor
Response to Chemotherapy [0.9709939410473847]
2D and 3D tumor features are widely used in a variety of medical image analysis tasks.
For chemotherapy response prediction, the effectiveness of different kinds of 2D and 3D features has not been comprehensively assessed.
arXiv Detail & Related papers (2023-03-28T16:44:43Z) - Validated respiratory drug deposition predictions from 2D and 3D medical
images with statistical shape models and convolutional neural networks [47.187609203210705]
We aim to develop and validate an automated computational framework for patient-specific deposition modelling.
An image processing approach is proposed that could produce 3D patient respiratory geometries from 2D chest X-rays and 3D CT images.
arXiv Detail & Related papers (2023-03-02T07:47:07Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR data sets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - EMT-NET: Efficient multitask network for computer-aided diagnosis of
breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z) - Systematic Clinical Evaluation of A Deep Learning Method for Medical
Image Segmentation: Radiosurgery Application [48.89674088331313]
We systematically evaluate a Deep Learning (DL) method in a 3D medical image segmentation task.
Our method is integrated into the radiosurgery treatment process and directly impacts the clinical workflow.
arXiv Detail & Related papers (2021-08-21T16:15:40Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on
2.5 D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total of 320 exams (with a mean of 6 slices per exam) were used for training and 28 exams for testing.
The performance of the proposed ensemble model in the basal and middle slices was similar to that of the intra-observer study and slightly lower in the apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)