ACEnet: Anatomical Context-Encoding Network for Neuroanatomy Segmentation
- URL: http://arxiv.org/abs/2002.05773v3
- Date: Sat, 2 Jan 2021 22:30:59 GMT
- Title: ACEnet: Anatomical Context-Encoding Network for Neuroanatomy Segmentation
- Authors: Yuemeng Li, Hongming Li, Yong Fan
- Abstract summary: 2D deep learning methods are favored for their computational efficiency.
Existing 2D deep learning methods are not equipped to effectively capture 3D spatial contextual information.
We develop an Anatomical Context-Encoding Network (ACEnet) to incorporate 3D spatial and anatomical contexts in 2D convolutional neural networks (CNNs).
Our method achieves promising performance compared with state-of-the-art alternative methods for brain structure segmentation.
- Score: 1.7080853582489066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation of brain structures from magnetic resonance (MR) scans plays an
important role in the quantification of brain morphology. Since 3D deep
learning models suffer from high computational cost, 2D deep learning methods
are favored for their computational efficiency. However, existing 2D deep
learning methods are not equipped to effectively capture 3D spatial contextual
information that is needed to achieve accurate brain structure segmentation. In
order to overcome this limitation, we develop an Anatomical Context-Encoding
Network (ACEnet) to incorporate 3D spatial and anatomical contexts in 2D
convolutional neural networks (CNNs) for efficient and accurate segmentation of
brain structures from MR scans, consisting of 1) an anatomical context encoding
module to incorporate anatomical information in 2D CNNs and 2) a spatial
context encoding module to integrate 3D image information in 2D CNNs. In
addition, a skull stripping module is adopted to guide the 2D CNNs to attend to
the brain. Extensive experiments on three benchmark datasets have demonstrated
that our method achieves promising performance compared with state-of-the-art
alternative methods for brain structure segmentation in terms of both
computational efficiency and segmentation accuracy.
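To make the abstract's three ingredients concrete, here is a minimal PyTorch sketch of an ACEnet-style network. The backbone depth, layer widths, and the names `ACEnetSketch` and `AnatomicalContextEncoding` are illustrative assumptions, not the authors' implementation: spatial context enters as a stack of neighboring slices in the input channels, anatomical context is a global channel re-weighting with an auxiliary structure-presence head, and a separate head predicts the brain mask for skull stripping.
```python
import torch
import torch.nn as nn

class AnatomicalContextEncoding(nn.Module):
    """Pool global context into a channel-attention vector plus an auxiliary
    'which structures are present in this slice' prediction."""
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.attn = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.presence = nn.Linear(channels, num_classes)   # auxiliary head

    def forward(self, x):
        ctx = self.pool(x).flatten(1)                      # (B, C)
        return x * self.attn(ctx)[:, :, None, None], self.presence(ctx)

class ACEnetSketch(nn.Module):
    def __init__(self, num_slices: int = 3, num_classes: int = 28, width: int = 32):
        super().__init__()
        # Spatial context: a stack of neighboring slices enters as channels.
        self.backbone = nn.Sequential(
            nn.Conv2d(num_slices, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True))
        self.context = AnatomicalContextEncoding(width, num_classes)
        self.seg_head = nn.Conv2d(width, num_classes, 1)   # structure labels
        self.brain_head = nn.Conv2d(width, 2, 1)           # skull stripping: brain vs. background

    def forward(self, slices):                             # (B, num_slices, H, W)
        feats = self.backbone(slices)
        feats, presence = self.context(feats)
        return self.seg_head(feats), self.brain_head(feats), presence

seg, brain, presence = ACEnetSketch()(torch.randn(2, 3, 128, 128))
print(seg.shape, brain.shape, presence.shape)
```
A real training setup would combine losses over the three outputs; this sketch only shows the data flow.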
Related papers
- Contextual Embedding Learning to Enhance 2D Networks for Volumetric Image Segmentation [5.995633685952995]
2D convolutional neural networks (CNNs) can hardly exploit the spatial correlation of volumetric data.
We propose a contextual embedding learning approach to help 2D CNNs capture spatial information properly.
Our approach leverages the learned embedding and the slice-wisely neighboring matching as a soft cue to guide the network.
arXiv Detail & Related papers (2024-04-02T08:17:39Z)
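The abstract above does not spell out the slice-wise neighboring matching, so the sketch below is only a hedged illustration of the general idea (the function name `neighbor_soft_cue` is hypothetical): each slice's neighbors are weighted by per-pixel cosine similarity, and the result is used as a soft fusion cue.
```python
import torch
import torch.nn.functional as F

def neighbor_soft_cue(feats: torch.Tensor) -> torch.Tensor:
    """feats: (S, C, H, W) per-slice embeddings of one volume.
    Each slice is fused with its two neighbors, weighted by per-pixel
    cosine similarity, so anatomically consistent regions reinforce
    each other. Note that torch.roll wraps around at the volume ends."""
    f = F.normalize(feats, dim=1)                              # unit-norm embeddings
    w_prev = (f * torch.roll(f, 1, 0)).sum(1, keepdim=True)    # similarity to slice i-1
    w_next = (f * torch.roll(f, -1, 0)).sum(1, keepdim=True)   # similarity to slice i+1
    return feats + w_prev * torch.roll(feats, 1, 0) + w_next * torch.roll(feats, -1, 0)

fused = neighbor_soft_cue(torch.randn(16, 8, 64, 64))
print(fused.shape)  # torch.Size([16, 8, 64, 64])
```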
- ALSTER: A Local Spatio-Temporal Expert for Online 3D Semantic Reconstruction [62.599588577671796]
We propose an online 3D semantic segmentation method that incrementally reconstructs a 3D semantic map from a stream of RGB-D frames.
Unlike offline methods, ours is directly applicable to scenarios with real-time constraints, such as robotics or mixed reality.
arXiv Detail & Related papers (2023-11-29T20:30:18Z)
- Spatiotemporal Modeling Encounters 3D Medical Image Analysis: Slice-Shift UNet with Multi-View Fusion [0.0]
We propose a new 2D-based model dubbed Slice SHift UNet which encodes three-dimensional features at 2D CNN's complexity.
More precisely, multi-view features are collaboratively learned by performing 2D convolutions along the three planes of a volume.
The effectiveness of our approach is validated on the Multi-Modality Abdominal Multi-Organ Segmentation (AMOS) and Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) datasets.
arXiv Detail & Related papers (2023-07-24T14:53:23Z)
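The multi-view idea in the Slice-Shift UNet entry above (2D convolutions along the three planes of a volume) can be sketched as follows; the layer sizes, the averaging fusion, and the name `TriPlanar2DConv` are assumptions for illustration, not SSH-UNet's exact design.
```python
import torch
import torch.nn as nn

class TriPlanar2DConv(nn.Module):
    """Run a 2D convolution over the slices of each orthogonal plane of a
    volume, then average the three views as a simple multi-view fusion."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv2d(in_ch, out_ch, 3, padding=1) for _ in range(3))

    @staticmethod
    def _conv_along(vol, conv, axis):
        # Fold the chosen spatial axis into the batch dimension so every
        # slice along that axis is processed as an independent 2D image.
        b, c = vol.shape[:2]
        v = vol.movedim(axis, 1)                     # (B, S, C, P, Q)
        s, p, q = v.shape[1], v.shape[3], v.shape[4]
        out = conv(v.reshape(b * s, c, p, q))
        return out.reshape(b, s, -1, p, q).movedim(1, axis)

    def forward(self, vol):                          # vol: (B, C, D, H, W)
        views = [self._conv_along(vol, conv, axis)
                 for conv, axis in zip(self.convs, (2, 3, 4))]
        return torch.stack(views).mean(0)

y = TriPlanar2DConv(1, 8)(torch.randn(1, 1, 32, 64, 64))
print(y.shape)  # torch.Size([1, 8, 32, 64, 64])
```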
- Self-supervised learning via inter-modal reconstruction and feature projection networks for label-efficient 3D-to-2D segmentation [4.5206601127476445]
We propose a novel convolutional neural network (CNN) and self-supervised learning (SSL) method for label-efficient 3D-to-2D segmentation.
Results on different datasets demonstrate that the proposed CNN significantly improves the state of the art in scenarios with limited labeled data by up to 8% in Dice score.
arXiv Detail & Related papers (2023-07-06T14:16:25Z)
- Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z)
- CORPS: Cost-free Rigorous Pseudo-labeling based on Similarity-ranking for Brain MRI Segmentation [3.1657395760137406]
We propose a semi-supervised segmentation framework built upon a novel atlas-based pseudo-labeling method and a 3D deep convolutional neural network (DCNN) for 3D brain MRI segmentation.
The experimental results demonstrate the superiority of the proposed framework over the baseline method both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-05-19T14:42:49Z)
- Multi-organ Segmentation Network with Adversarial Performance Validator [10.775440368500416]
This paper introduces an adversarial performance validation network into a 2D-to-3D segmentation framework.
The proposed network converts the 2D-coarse result to 3D high-quality segmentation masks in a coarse-to-fine manner, allowing joint optimization to improve segmentation accuracy.
Experiments on the NIH pancreas segmentation dataset demonstrate the proposed network achieves state-of-the-art accuracy on small organ segmentation and outperforms the previous best.
arXiv Detail & Related papers (2022-04-16T18:00:29Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of the dataset size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
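As a hedged sketch of the 3D input erasing described in the entry above, the helper below zeroes one random cuboid of a volume; the function name and the `max_frac` knob are assumptions, since the summary does not give the paper's exact erasing schedule. During VAE training, the erased volume would be the input and the intact volume the reconstruction target; at test time, voxel-wise reconstruction error serves as the anomaly map.
```python
import torch

def erase_random_cuboid(vol: torch.Tensor, max_frac: float = 0.3) -> torch.Tensor:
    """Zero out one randomly placed cuboid of a (C, D, H, W) volume."""
    out = vol.clone()
    _, d, h, w = out.shape
    # Sample an erased extent per axis, then a valid corner position.
    sizes = [int(torch.randint(1, max(2, int(s * max_frac) + 1), (1,))) for s in (d, h, w)]
    corners = [int(torch.randint(0, s - k + 1, (1,))) for s, k in zip((d, h, w), sizes)]
    (dd, hh, ww), (z, y, x) = sizes, corners
    out[:, z:z + dd, y:y + hh, x:x + ww] = 0.0
    return out

erased = erase_random_cuboid(torch.randn(1, 32, 64, 64))
print((erased == 0).sum() > 0)  # some voxels were erased
```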
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
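A compact sketch of the two-stream idea from the TSGCNet entry above: one graph-convolution stream over vertex coordinates and one over vertex normals, fused for per-vertex labeling. The single-layer design, hidden width, and the name `TwoStreamGCNSketch` are illustrative assumptions, not the paper's architecture.
```python
import torch
import torch.nn as nn

class TwoStreamGCNSketch(nn.Module):
    """Two parallel graph-convolution streams on different geometric
    attributes of a mesh, concatenated for per-vertex classification."""
    def __init__(self, hidden: int = 64, num_classes: int = 17):
        super().__init__()
        self.coord_stream = nn.Linear(3, hidden)
        self.normal_stream = nn.Linear(3, hidden)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    @staticmethod
    def propagate(x, adj):
        # Symmetrically normalized neighborhood averaging: D^-1/2 (A+I) D^-1/2 x
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt()
        return (d[:, None] * a * d[None, :]) @ x

    def forward(self, coords, normals, adj):  # coords/normals: (N, 3), adj: (N, N)
        c = torch.relu(self.coord_stream(self.propagate(coords, adj)))
        n = torch.relu(self.normal_stream(self.propagate(normals, adj)))
        return self.classifier(torch.cat([c, n], dim=-1))   # (N, num_classes)

adj = (torch.rand(50, 50) < 0.1).float()
adj = torch.maximum(adj, adj.T)                              # symmetric adjacency
logits = TwoStreamGCNSketch()(torch.randn(50, 3), torch.randn(50, 3), adj)
print(logits.shape)  # torch.Size([50, 17])
```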
- 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data [54.223614679807994]
3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks.
Applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models.
In this work, we propose a novel strategic 2D representation of volumetric data, namely 2.75D.
As a result, standard 2D CNNs can also be used to learn volumetric information.
arXiv Detail & Related papers (2020-02-11T08:24:19Z)
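The 2.75D entry above does not spell out its 2D mapping, so the sketch below only illustrates the general family of tricks it belongs to: packing volumetric context into a single 2D input so an ordinary 2D CNN can consume it. The montage of three orthogonal center slices and the name `volume_to_2d_montage` are hypothetical stand-ins, not the paper's 2.75D transformation.
```python
import torch
import torch.nn.functional as F

def volume_to_2d_montage(vol: torch.Tensor) -> torch.Tensor:
    """Tile the three orthogonal center slices of a (D, H, W) volume into
    one 2D image, giving a 2D CNN a crude view of the whole volume."""
    d, h, w = vol.shape
    panels = (vol[d // 2], vol[:, h // 2], vol[:, :, w // 2])  # axial, coronal, sagittal
    s = max(d, h, w)
    resized = [F.interpolate(p[None, None].float(), size=(s, s),
                             mode="bilinear", align_corners=False)[0, 0]
               for p in panels]
    return torch.cat(resized, dim=1)                           # (s, 3*s)

montage = volume_to_2d_montage(torch.randn(32, 64, 64))
print(montage.shape)  # torch.Size([64, 192])
```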
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.