A Novel Hybrid Convolutional Neural Network for Accurate Organ
Segmentation in 3D Head and Neck CT Images
- URL: http://arxiv.org/abs/2109.12634v1
- Date: Sun, 26 Sep 2021 15:37:47 GMT
- Authors: Zijie Chen, Cheng Li, Junjun He, Jin Ye, Diping Song, Shanshan Wang,
Lixu Gu, and Yu Qiao
- Abstract summary: We propose a novel hybrid CNN that fuses 2D and 3D convolutions to combat the different spatial resolutions and extract effective edge and semantic features from 3D HaN CT images.
Experiments on the MICCAI 2015 challenge dataset demonstrate that OrganNet2.5D achieves promising performance compared to state-of-the-art methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radiation therapy (RT) is widely employed in the clinic for the treatment of
head and neck (HaN) cancers. An essential step of RT planning is the accurate
segmentation of various organs-at-risks (OARs) in HaN CT images. Nevertheless,
segmenting OARs manually is time-consuming, tedious, and error-prone
considering that typical HaN CT images contain tens to hundreds of slices.
Automated segmentation algorithms are urgently required. Recently,
convolutional neural networks (CNNs) have been extensively investigated on this
task. Particularly, 3D CNNs are frequently adopted to process 3D HaN CT images.
There are two issues with naïve 3D CNNs. First, the depth resolution of 3D CT
images is usually several times lower than the in-plane resolution. Direct
employment of 3D CNNs without accounting for this difference can lead to the
extraction of distorted image features and degrade the final segmentation
performance. Second, a severe class imbalance problem exists: large organs
can be orders of magnitude larger than small organs. It is difficult to
simultaneously achieve accurate segmentation for all the organs. To address
these issues, we propose a novel hybrid CNN that fuses 2D and 3D convolutions
to combat the different spatial resolutions and extract effective edge and
semantic features from 3D HaN CT images. To accommodate large and small organs,
our final model, named OrganNet2.5D, consists of only two instead of the
classic four downsampling operations, and hybrid dilated convolutions are
introduced to maintain the receptive field. Experiments on the MICCAI 2015
challenge dataset demonstrate that OrganNet2.5D achieves promising performance
compared to state-of-the-art methods.
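The trade-off described above (fewer downsamplings compensated by hybrid dilated convolutions) can be illustrated with standard receptive-field arithmetic. The sketch below is a minimal, hypothetical example: the layer configurations are illustrative choices, not the paper's exact architecture, but they show how two stride-2 downsamplings plus dilation rates of 1, 2, and 5 can match the in-plane receptive field of a classic four-downsampling encoder.

```python
def receptive_field(layers):
    """Compute the in-plane receptive field of a stack of conv/pool layers.

    Each layer is a (kernel_size, stride, dilation) triple. Uses the
    standard recurrence: rf += (k - 1) * dilation * jump; jump *= stride,
    where `jump` is the cumulative stride (distance between adjacent
    output positions in input pixels).
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Classic U-Net-style encoder: conv blocks separated by four 2x downsamplings.
classic = [(3, 1, 1), (3, 1, 1), (2, 2, 1),
           (3, 1, 1), (3, 1, 1), (2, 2, 1),
           (3, 1, 1), (3, 1, 1), (2, 2, 1),
           (3, 1, 1), (3, 1, 1), (2, 2, 1),
           (3, 1, 1), (3, 1, 1)]

# Hypothetical OrganNet2.5D-style encoder: only two downsamplings, with
# hybrid dilated convolutions (rates 1, 2, 5) in the deeper blocks.
hybrid = [(3, 1, 1), (3, 1, 1), (2, 2, 1),
          (3, 1, 1), (3, 1, 1), (2, 2, 1),
          (3, 1, 1), (3, 1, 2), (3, 1, 5),
          (3, 1, 1), (3, 1, 2), (3, 1, 5)]

print(receptive_field(classic))  # 140
print(receptive_field(hybrid))   # 144
```

With these illustrative configurations, the dilated variant reaches a comparable receptive field (144 vs. 140 pixels) while keeping feature maps at 4x higher resolution, which is what helps small organs survive the encoder.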
Related papers
- Spatiotemporal Modeling Encounters 3D Medical Image Analysis:
Slice-Shift UNet with Multi-View Fusion [0.0]
We propose a new 2D-based model, dubbed Slice SHift UNet, which encodes three-dimensional features at the complexity of a 2D CNN.
More precisely, multi-view features are collaboratively learned by performing 2D convolutions along the three planes of a volume.
The effectiveness of our approach is validated on the Multi-Modality Abdominal Multi-Organ Segmentation (AMOS) and Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) datasets.
arXiv Detail & Related papers (2023-07-24T14:53:23Z) - Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view method for vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method learns multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z) - Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations, which cannot capture multiple receptive-field sizes on 3D volumes.
arXiv Detail & Related papers (2022-11-10T13:11:21Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Multi-Slice Dense-Sparse Learning for Efficient Liver and Tumor
Segmentation [4.150096314396549]
Deep convolutional neural networks (DCNNs) have achieved tremendous success in 2D and 3D medical image segmentation.
We propose a novel dense-sparse training flow from a data perspective, in which densely adjacent slices and sparsely adjacent slices are extracted as inputs to regularize DCNNs.
We also design a 2.5D lightweight nnU-Net from a network perspective, in which depthwise separable convolutions are adopted to improve efficiency.
arXiv Detail & Related papers (2021-08-15T15:29:48Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - Automatic Segmentation of Organs-at-Risk from Head-and-Neck CT using
Separable Convolutional Neural Network with Hard-Region-Weighted Loss [10.93840864507459]
Nasopharyngeal Carcinoma (NPC) is a leading form of Head-and-Neck (HAN) cancer in the Arctic, China, Southeast Asia, and the Middle East/North Africa.
Accurate segmentation of Organs-at-Risk (OAR) from Computed Tomography (CT) images with uncertainty information is critical for effective planning of radiation therapy for NPC treatment.
We propose a novel framework for accurate OAR segmentation with reliable uncertainty estimation.
arXiv Detail & Related papers (2021-02-03T06:31:38Z) - Spatial Context-Aware Self-Attention Model For Multi-Organ Segmentation [18.76436457395804]
Multi-organ segmentation is one of the most successful applications of deep learning in medical image analysis.
Deep convolutional neural nets (CNNs) have shown great promise in achieving clinically applicable image segmentation performance on CT or MRI images.
We propose a new framework for combining 3D and 2D models, in which the segmentation is realized through high-resolution 2D convolutions.
arXiv Detail & Related papers (2020-12-16T21:39:53Z) - Learning Hybrid Representations for Automatic 3D Vessel Centerline
Extraction [57.74609918453932]
Automatic blood vessel extraction from 3D medical images is crucial for vascular disease diagnoses.
Existing methods may suffer from discontinuities of extracted vessels when segmenting such thin tubular structures from 3D images.
We argue that preserving the continuity of extracted vessels requires taking the global geometry into account.
We propose a hybrid representation learning approach to address this challenge.
arXiv Detail & Related papers (2020-12-14T05:22:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.