Combining CNN and Hybrid Active Contours for Head and Neck Tumor Segmentation in CT and PET images
- URL: http://arxiv.org/abs/2012.14207v1
- Date: Mon, 28 Dec 2020 12:12:14 GMT
- Title: Combining CNN and Hybrid Active Contours for Head and Neck Tumor Segmentation in CT and PET images
- Authors: Jun Ma, Xiaoping Yang
- Abstract summary: We propose an automatic segmentation method for head and neck tumors based on the combination of convolutional neural networks (CNNs) and hybrid active contours.
Our method ranked second place in the MICCAI 2020 HECKTOR challenge with average Dice Similarity Coefficient, precision, and recall of 0.752, 0.838, and 0.717, respectively.
- Score: 16.76087435628378
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic segmentation of head and neck tumors plays an important role in
radiomics analysis. In this short paper, we propose an automatic segmentation
method for head and neck tumors from PET and CT images based on the combination
of convolutional neural networks (CNNs) and hybrid active contours.
Specifically, we first introduce a multi-channel 3D U-Net to segment the tumor
with the concatenated PET and CT images. Then, we estimate the segmentation
uncertainty by model ensembles and define a segmentation quality score to
select the cases with high uncertainties. Finally, we develop a hybrid active
contour model to refine the high uncertainty cases. Our method ranked second
place in the MICCAI 2020 HECKTOR challenge with average Dice Similarity
Coefficient, precision, and recall of 0.752, 0.838, and 0.717, respectively.
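The uncertainty-based case selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper does not specify its quality score, so here agreement is measured as mean pairwise Dice among ensemble predictions, and the 0.85 selection threshold is an assumed value.

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def ensemble_quality_score(member_masks):
    """Quality score: mean pairwise Dice among ensemble predictions.
    Low agreement indicates high segmentation uncertainty."""
    n = len(member_masks)
    pairs = [dice(member_masks[i], member_masks[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

def select_uncertain_cases(cases, threshold=0.85):
    """Return ids of cases whose ensemble agreement falls below the
    threshold; these are the candidates for refinement by the
    hybrid active-contour stage."""
    return [cid for cid, masks in cases.items()
            if ensemble_quality_score(masks) < threshold]
```

In this sketch, a case where the ensemble members disagree (low mean pairwise Dice) is flagged for the refinement stage, while confidently segmented cases keep the CNN output as-is.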
Related papers
- Multi-Layer Feature Fusion with Cross-Channel Attention-Based U-Net for Kidney Tumor Segmentation [0.0]
U-Net based deep learning techniques are emerging as a promising approach for automated medical image segmentation.
We present an improved U-Net based model for end-to-end automated semantic segmentation of CT scan images to identify renal tumors.
arXiv Detail & Related papers (2024-10-20T19:02:41Z)
- Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.586530244472655]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge.
The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas.
The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z)
- Self-calibrated convolution towards glioma segmentation [45.74830585715129]
We evaluate self-calibrated convolutions in different parts of the nnU-Net network to demonstrate that self-calibrated modules in skip connections can significantly improve the enhanced-tumor and tumor-core segmentation accuracy.
arXiv Detail & Related papers (2024-02-07T19:51:13Z)
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Whole-body tumor segmentation of 18F-FDG PET/CT using a cascaded and ensembled convolutional neural networks [2.735686397209314]
The goal of this study was to report the performance of a deep neural network designed to automatically segment regions suspected of cancer in whole-body 18F-FDG PET/CT images.
A cascaded approach was developed in which a stacked ensemble of 3D U-Net CNNs processed the PET/CT images at a fixed 6 mm resolution.
arXiv Detail & Related papers (2022-10-14T19:25:56Z)
- Weaving Attention U-net: A Novel Hybrid CNN and Attention-based Method for Organs-at-risk Segmentation in Head and Neck CT Images [11.403827695550111]
We develop a novel hybrid deep learning approach, combining convolutional neural networks (CNNs) and the self-attention mechanism.
We show that the proposed method generates contours that closely resemble the ground truth for ten organs-at-risk (OARs).
Our results of the new Weaving Attention U-net demonstrate superior or similar performance on the segmentation of head and neck CT images.
arXiv Detail & Related papers (2021-07-10T14:27:46Z)
- Squeeze-and-Excitation Normalization for Automated Delineation of Head and Neck Primary Tumors in Combined PET and CT Images [3.2694564664990753]
We contribute an automated approach for Head and Neck (H&N) primary tumor segmentation in combined positron emission tomography / computed tomography (PET/CT) images.
Our model was designed on the U-Net architecture with residual layers and supplemented with Squeeze-and-Excitation Normalization.
The method achieved competitive results in cross-validation (DSC 0.745, precision 0.760, recall 0.789) performed on different centers, as well as on the test set (DSC 0.759, precision 0.833, recall 0.740) that allowed us to win first prize in the HECKTOR challenge.
arXiv Detail & Related papers (2021-02-20T21:06:59Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and separately using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Multimodal Spatial Attention Module for Targeting Multimodal PET-CT Lung Tumor Segmentation [11.622615048002567]
Multimodal spatial attention module (MSAM) learns to emphasize regions related to tumors.
MSAM can be applied to common backbone architectures and trained end-to-end.
arXiv Detail & Related papers (2020-07-29T10:27:22Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by 4.2% to 9.4%.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.