CORPS: Cost-free Rigorous Pseudo-labeling based on Similarity-ranking
for Brain MRI Segmentation
- URL: http://arxiv.org/abs/2205.09601v1
- Date: Thu, 19 May 2022 14:42:49 GMT
- Authors: Can Taylan Sari, Sila Kurugol, Onur Afacan, Simon K. Warfield
- Abstract summary: We propose a semi-supervised segmentation framework built upon a novel atlas-based pseudo-labeling method and a 3D deep convolutional neural network (DCNN) for 3D brain MRI segmentation.
The experimental results demonstrate the superiority of the proposed framework over the baseline method both qualitatively and quantitatively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Segmentation of brain magnetic resonance images (MRI) is crucial for the
analysis of the human brain and the diagnosis of various brain disorders.
Atlas-based and supervised machine learning methods aim to alleviate the
drawbacks of time-consuming and error-prone manual delineation, but the former
are computationally intensive and the latter lack a sufficiently large number
of labeled data. With this motivation, we propose CORPS, a semi-supervised
segmentation framework built upon a novel atlas-based pseudo-labeling method
and a 3D deep convolutional neural network (DCNN) for 3D brain MRI
segmentation. We generate expert-level pseudo-labels for the unlabeled images
in an order determined by a local intensity-based similarity score to the
existing labeled images, using a novel atlas-based label fusion method. We
then train a 3D DCNN on the combination of expert-labeled and pseudo-labeled
images for binary segmentation of each anatomical structure. The binary
segmentation approach avoids the poor performance of multi-class segmentation
methods on limited and imbalanced data. It also permits a lightweight and
efficient 3D DCNN, in terms of the number of filters, reserving memory for
training the binary networks on full-scale, full-resolution 3D MRI volumes
instead of 2D/3D patches or 2D slices. The proposed framework can thus
encapsulate spatial contiguity in each dimension and enhance
context-awareness. The experimental results demonstrate the superiority of
the proposed framework over the baseline method both qualitatively and
quantitatively, without additional cost for manual labeling.
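The similarity-ranked pseudo-labeling loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a global normalized cross-correlation stands in for the paper's local intensity-based similarity score, and `fuse` is a placeholder for the atlas-based label fusion step; all function and variable names are hypothetical.

```python
import numpy as np

def similarity(a, b):
    # Global normalized cross-correlation of two volumes, used here as a
    # simplified stand-in for the paper's local intensity-based score.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def rank_and_pseudolabel(labeled, unlabeled, fuse):
    """Pseudo-label unlabeled volumes in decreasing order of their best
    similarity to the current labeled pool; each newly pseudo-labeled
    volume joins the pool before the next one is selected."""
    pool = list(labeled)          # (image, label) pairs
    remaining = list(unlabeled)   # images only
    order = []
    while remaining:
        # Pick the unlabeled volume most similar to any pooled image.
        best = max(remaining,
                   key=lambda u: max(similarity(u, img) for img, _ in pool))
        pseudo = fuse(best, pool)  # atlas-based label fusion (placeholder)
        pool.append((best, pseudo))
        order.append(best)
        remaining = [u for u in remaining if u is not best]
    return order, pool
```

Because each pseudo-labeled volume is appended to the pool immediately, later (less similar) volumes can be matched against pseudo-labeled atlases as well as expert-labeled ones, which is the incremental ordering the abstract describes.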
Related papers
- Label-Efficient 3D Brain Segmentation via Complementary 2D Diffusion Models with Orthogonal Views [10.944692719150071]
We propose a novel 3D brain segmentation approach using complementary 2D diffusion models.
Our goal is to achieve reliable segmentation quality without requiring complete labels for each individual subject.
arXiv Detail & Related papers (2024-07-17T06:14:53Z)
- Decoupled Pseudo-labeling for Semi-Supervised Monocular 3D Object Detection [108.672972439282]
We introduce a novel decoupled pseudo-labeling (DPL) approach for SSM3OD.
Our approach features a Decoupled Pseudo-label Generation (DPG) module, designed to efficiently generate pseudo-labels.
We also present a DepthGradient Projection (DGP) module to mitigate optimization conflicts caused by noisy depth supervision of pseudo-labels.
arXiv Detail & Related papers (2024-03-26T05:12:18Z)
- Med-DANet: Dynamic Architecture Network for Efficient Medical Volumetric Segmentation [13.158995287578316]
We propose a dynamic architecture network named Med-DANet to achieve an effective accuracy-efficiency trade-off.
For each slice of the input 3D MRI volume, our proposed method learns a slice-specific decision by the Decision Network.
Our proposed method achieves comparable or better results than previous state-of-the-art methods for 3D MRI brain tumor segmentation.
arXiv Detail & Related papers (2022-06-14T03:25:58Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Semi-Supervised Hybrid Spine Network for Segmentation of Spine MR Images [14.190504802866288]
We propose a two-stage algorithm, named semi-supervised hybrid spine network (SSHSNet) to achieve simultaneous vertebral bodies (VBs) and intervertebral discs (IVDs) segmentation.
In the first stage, we constructed a 2D semi-supervised DeepLabv3+ by using cross pseudo supervision to obtain intra-slice features and coarse segmentation.
In the second stage, a 3D full-resolution patch-based DeepLabv3+ was built to extract inter-slice information.
Results show that the proposed method has great potential in dealing with the data imbalance problem.
arXiv Detail & Related papers (2022-03-23T02:57:14Z)
- 3-Dimensional Deep Learning with Spatial Erasing for Unsupervised Anomaly Segmentation in Brain MRI [55.97060983868787]
We investigate whether using increased spatial context by using MRI volumes combined with spatial erasing leads to improved unsupervised anomaly segmentation performance.
We compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of the data set size on performance.
Our best performing 3D VAE with input erasing leads to an average DICE score of 31.40% compared to 25.76% for the 2D VAE.
arXiv Detail & Related papers (2021-09-14T09:17:27Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma Segmentation in MRI Scans [22.60715394470069]
We devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model.
In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict the class label of the central voxel in a given patch.
arXiv Detail & Related papers (2020-10-20T20:42:52Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel frustum ultrasound-based catheter segmentation method.
The proposed method achieved state-of-the-art performance at an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- Automated Segmentation of Brain Gray Matter Nuclei on Quantitative Susceptibility Mapping Using Deep Convolutional Neural Network [16.733578721523898]
Abnormal iron accumulation in the brain subcortical nuclei has been reported to be correlated to various neurodegenerative diseases.
We propose a double-branch residual-structured U-Net (DB-ResUNet) based on 3D convolutional neural network (CNN) to automatically segment such brain gray matter nuclei.
arXiv Detail & Related papers (2020-08-03T14:32:30Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme considers a deep Q learning as the pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.