Automatic Segmentation of Gross Target Volume of Nasopharynx Cancer
using Ensemble of Multiscale Deep Neural Networks with Spatial Attention
- URL: http://arxiv.org/abs/2101.11254v1
- Date: Wed, 27 Jan 2021 08:20:49 GMT
- Title: Automatic Segmentation of Gross Target Volume of Nasopharynx Cancer
using Ensemble of Multiscale Deep Neural Networks with Spatial Attention
- Authors: Haochen Mei, Wenhui Lei, Ran Gu, Shan Ye, Zhengwentai Sun, Shichuan
Zhang, Guotai Wang
- Abstract summary: We propose a 2.5D Convolutional Neural Network (CNN) to handle the difference between in-plane and through-plane resolution.
We also propose a spatial attention module to enable the network to focus on the small target, and use channel attention to further improve the segmentation performance.
- Score: 2.204996105506197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radiotherapy is the main treatment modality for nasopharynx cancer.
Delineation of Gross Target Volume (GTV) from medical images such as CT and MRI
images is a prerequisite for radiotherapy. As manual delineation is
time-consuming and laborious, automatic segmentation of GTV has the potential to
improve this process. Currently, most deep learning-based automatic delineation
methods for GTV are applied to medical images such as CT. However, the task is
challenged by the low contrast between pathological regions and surrounding soft
tissues, the small target region, and the anisotropic resolution of clinical CT
images. To deal with these problems, we propose a 2.5D Convolutional Neural
Network (CNN) to handle the difference between in-plane and through-plane
resolution. Furthermore, we propose a spatial attention module to enable the
network to focus on the small target, and use channel attention to further
improve the segmentation performance. Moreover, we use a multi-scale sampling
method for training so that the networks can learn features at
different scales, which are combined with a multi-model ensemble method to
improve the robustness of segmentation results. We also estimate the
uncertainty of segmentation results based on our model ensemble, which is of
great importance for indicating the reliability of automatic segmentation
results for radiotherapy planning.
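As described above, the method combines spatial attention (to focus on the small GTV), channel attention, and a multi-model ensemble whose disagreement indicates uncertainty. The PyTorch sketch below is a minimal illustration of these ideas under assumed shapes and kernel sizes; it is not the authors' implementation, and the variance-based uncertainty measure is one common choice rather than the paper's exact formulation.

```python
# Hypothetical sketch of spatial + channel attention and ensemble-based
# uncertainty, loosely following the abstract. Module names, channel sizes,
# and kernel choices are assumptions for illustration only.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Per-voxel attention map so the network can focus on a small target."""
    def __init__(self, channels):
        super().__init__()
        # 2.5D-style kernel: 3x3 in-plane, 1 through-plane (assumed)
        self.conv = nn.Conv3d(channels, 1, kernel_size=(1, 3, 3), padding=(0, 1, 1))

    def forward(self, x):
        attn = torch.sigmoid(self.conv(x))   # (N, 1, D, H, W) in [0, 1]
        return x * attn                      # re-weight features spatially


class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze to per-channel stats, then re-weight."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        n, c = x.shape[:2]
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
        return x * w


@torch.no_grad()
def ensemble_predict(models, image):
    """Average softmax outputs of several models; per-voxel variance as uncertainty."""
    probs = torch.stack([torch.softmax(m(image), dim=1) for m in models])
    return probs.mean(dim=0), probs.var(dim=0)  # mean prediction, uncertainty map


# Example with hypothetical feature sizes:
# feats = torch.randn(1, 32, 16, 96, 96)
# feats = ChannelAttention(32)(SpatialAttention(32)(feats))
```

Under these assumptions, voxels with high ensemble variance flag regions where the automatic contour should be reviewed before it is used for radiotherapy planning.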
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Applying Conditional Generative Adversarial Networks for Imaging Diagnosis [3.881664394416534]
This study introduces an innovative application of Conditional Generative Adversarial Networks (C-GAN) integrated with Stacked Hourglass Networks (SHGN).
We address the problem of overfitting, common in deep learning models applied to complex imaging datasets, by augmenting data through rotation and scaling.
A hybrid loss function combining L1 and L2 reconstruction losses, enriched with adversarial training, is introduced to refine segmentation processes in intravascular ultrasound (IVUS) imaging.
arXiv Detail & Related papers (2024-07-17T23:23:09Z)
- CGAM: Click-Guided Attention Module for Interactive Pathology Image Segmentation via Backpropagating Refinement [8.590026259176806]
Tumor region segmentation is an essential task for the quantitative analysis of digital pathology.
Recent deep neural networks have shown state-of-the-art performance in various image-segmentation tasks.
We propose an interactive segmentation method that allows users to refine the output of deep neural networks through click-type user interactions.
arXiv Detail & Related papers (2023-07-03T13:45:24Z)
- Scale-aware Super-resolution Network with Dual Affinity Learning for Lesion Segmentation from Medical Images [50.76668288066681]
We present a scale-aware super-resolution network to adaptively segment lesions of various sizes from low-resolution medical images.
Our proposed network achieved consistent improvements compared to other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-30T14:25:55Z)
- Two-stage MR Image Segmentation Method for Brain Tumors based on Attention Mechanism [27.08977505280394]
A coordination-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
The ability to extract the structural information and fine details of the original medical image helps generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- RCA-IUnet: A residual cross-spatial attention guided inception U-Net model for tumor segmentation in breast ultrasound imaging [0.6091702876917281]
The article introduces an efficient residual cross-spatial attention guided inception U-Net (RCA-IUnet) model with minimal training parameters for tumor segmentation.
The RCA-IUnet model follows U-Net topology with residual inception depth-wise separable convolution and hybrid pooling layers.
Cross-spatial attention filters are added to suppress the irrelevant features and focus on the target structure.
arXiv Detail & Related papers (2021-08-05T10:35:06Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Boosted EfficientNet: Detection of Lymph Node Metastases in Breast Cancer Using Convolutional Neural Network [6.444922476853511]
The Convolutional Neural Network (CNN) has been adapted to predict and classify lymph node metastasis in breast cancer.
We propose a novel data augmentation method named Random Center Cropping (RCC) to facilitate small resolution images.
arXiv Detail & Related papers (2020-10-10T15:18:56Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
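For reference, the Dice score and mean surface-to-surface distance cited in the benchmark entry above are standard segmentation metrics. The sketch below is a generic NumPy/SciPy illustration of how they are commonly computed for binary 3D masks; it is not the challenge's official evaluation code, and the distance-transform-based surface distance is an assumed simplification.

```python
# Generic Dice score and mean surface-to-surface distance for binary masks.
# Illustrative only; not the official evaluation code of any benchmark.
import numpy as np
from scipy import ndimage


def dice_score(pred, gt):
    """Dice = 2|P ∩ G| / (|P| + |G|) for boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom


def mean_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance between the two mask surfaces (in mm if spacing is mm)."""
    def surface(mask):
        eroded = ndimage.binary_erosion(mask)
        return mask & ~eroded  # boundary voxels

    pred, gt = pred.astype(bool), gt.astype(bool)
    sp, sg = surface(pred), surface(gt)
    # Distance from each voxel to the nearest surface voxel of the other mask
    dt_gt = ndimage.distance_transform_edt(~sg, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~sp, sampling=spacing)
    return 0.5 * (dt_gt[sp].mean() + dt_pred[sg].mean())
```

With masks defined on a 1 mm isotropic grid, a return value of 0.7 from `mean_surface_distance` corresponds to the 0.7 mm figure reported above.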