Automated 3D Tumor Segmentation using Temporal Cubic PatchGAN (TCuP-GAN)
- URL: http://arxiv.org/abs/2311.14148v1
- Date: Thu, 23 Nov 2023 18:37:26 GMT
- Title: Automated 3D Tumor Segmentation using Temporal Cubic PatchGAN (TCuP-GAN)
- Authors: Kameswara Bharadwaj Mantha, Ramanakumar Sankar, Lucy Fortson
- Abstract summary: Temporal Cubic PatchGAN (TCuP-GAN) is a volume-to-volume translational model that marries the concepts of a generative feature learning framework with Convolutional Long Short-Term Memory Networks (LSTMs).
We demonstrate the capabilities of our TCuP-GAN on data from four segmentation challenges (Adult Glioma, Meningioma, Pediatric Tumors, and the Sub-Saharan Africa subset).
We demonstrate that our framework successfully learns to predict robust multi-class segmentation masks across all the challenges.
- Score: 0.276240219662896
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Development of robust general purpose 3D segmentation frameworks using the
latest deep learning techniques is one of the active topics in various
bio-medical domains. In this work, we introduce Temporal Cubic PatchGAN
(TCuP-GAN), a volume-to-volume translational model that marries the concepts of
a generative feature learning framework with Convolutional Long Short-Term
Memory Networks (LSTMs), for the task of 3D segmentation. We demonstrate the
capabilities of our TCuP-GAN on the data from four segmentation challenges
(Adult Glioma, Meningioma, Pediatric Tumors, and Sub-Saharan Africa subset)
featured within the 2023 Brain Tumor Segmentation (BraTS) Challenge and
quantify its performance using LesionWise Dice similarity and $95\%$ Hausdorff
Distance metrics. We demonstrate that our framework successfully learns to
predict robust multi-class segmentation masks across all the challenges. This
benchmarking work serves as a stepping stone for future efforts towards
applying TCuP-GAN on other multi-class tasks such as multi-organelle
segmentation in electron microscopy imaging.
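The abstract evaluates predicted masks with Dice similarity and the 95% Hausdorff Distance. As a rough illustration of what those two metrics measure (plain per-volume versions, not the BraTS LesionWise variant, which scores each connected lesion separately), here is a minimal NumPy sketch over binary 3D masks; the function names and the 6-neighbour surface definition are choices made for this example:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity: 2|A∩B| / (|A|+|B|) for binary 3D masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def surface_voxels(mask):
    """Coordinates of foreground voxels with at least one 6-neighbour background voxel."""
    m = np.pad(mask, 1)  # zero-pad so the volume border counts as background
    core = (m[1:-1, 1:-1, 1:-1]
            & m[2:, 1:-1, 1:-1] & m[:-2, 1:-1, 1:-1]
            & m[1:-1, 2:, 1:-1] & m[1:-1, :-2, 1:-1]
            & m[1:-1, 1:-1, 2:] & m[1:-1, 1:-1, :-2])
    return np.argwhere(mask & ~core)

def hd95(pred, gt):
    """Symmetric 95th-percentile Hausdorff distance between two binary masks,
    via brute-force surface-to-surface distances (fine for small volumes)."""
    sp, sg = surface_voxels(pred), surface_voxels(gt)
    d = np.linalg.norm(sp[:, None, :] - sg[None, :, :], axis=-1)
    return np.percentile(np.hstack([d.min(axis=1), d.min(axis=0)]), 95)
```

For identical masks this yields Dice 1.0 and HD95 0.0; shifting one cube by a voxel lowers Dice and raises HD95, which is why HD95 complements Dice as a boundary-sensitive metric.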
Related papers
- Intensity-Spatial Dual Masked Autoencoder for Multi-Scale Feature Learning in Chest CT Segmentation [4.916334618361524]
This paper proposes an improved method named Intensity-Spatial Dual Masked AutoEncoder (ISD-MAE).
The model utilizes a dual-branch structure and contrastive learning to enhance the ability to learn tissue features and boundary details.
The results show that ISD-MAE significantly outperforms other methods in 2D pneumonia and mediastinal tumor segmentation tasks.
arXiv Detail & Related papers (2024-11-20T10:58:47Z)
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- BHSD: A 3D Multi-Class Brain Hemorrhage Segmentation Dataset [24.094836682245006]
Intracranial hemorrhage (ICH) is a pathological condition characterized by bleeding inside the skull or brain.
Deep learning techniques are widely used in medical image segmentation and have been applied to the ICH segmentation task.
Existing public ICH datasets do not support the multi-class segmentation problem.
arXiv Detail & Related papers (2023-08-22T09:20:55Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- 3D Segmentation Networks for Excessive Numbers of Classes: Distinct Bone Segmentation in Upper Bodies [1.2023648183416153]
This paper discusses the intricacies of training a 3D segmentation network in a many-label setting.
We show necessary modifications in network architecture, loss function, and data augmentation.
As a result, we demonstrate the robustness of our method by automatically segmenting over one hundred distinct bones simultaneously, in an end-to-end learned fashion, from a CT scan.
arXiv Detail & Related papers (2020-10-14T12:54:15Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
- Robust Semantic Segmentation of Brain Tumor Regions from 3D MRIs [2.4736005621421686]
Multimodal brain tumor segmentation challenge (BraTS) brings together researchers to improve automated methods for 3D MRI brain tumor segmentation.
We evaluate the method on BraTS 2019 challenge.
arXiv Detail & Related papers (2020-01-06T07:47:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.