Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning
- URL: http://arxiv.org/abs/2403.02566v1
- Date: Tue, 5 Mar 2024 00:46:53 GMT
- Title: Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning
- Authors: Zhaoxin Fan, Runmin Jiang, Junhao Wu, Xin Huang, Tianyang Wang, Heng
Huang, Min Xu
- Abstract summary: 3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
- Score: 52.249748801637196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D medical image segmentation is a challenging task with crucial implications
for disease diagnosis and treatment planning. Recent advances in deep learning
have significantly enhanced fully supervised medical image segmentation.
However, this approach heavily relies on labor-intensive and time-consuming
fully annotated ground-truth labels, particularly for 3D volumes. To overcome
this limitation, we propose a novel probabilistic-aware weakly supervised
learning pipeline, specifically designed for 3D medical imaging. Our pipeline
integrates three innovative components: a probability-based pseudo-label
generation technique for synthesizing dense segmentation masks from sparse
annotations, a Probabilistic Multi-head Self-Attention network for robust
feature extraction within our Probabilistic Transformer Network, and a
Probability-informed Segmentation Loss Function to enhance training with
annotation confidence. Demonstrating significant advances, our approach not
only rivals the performance of fully supervised methods but also surpasses
existing weakly supervised methods in CT and MRI datasets, achieving up to
18.1% improvement in Dice scores for certain organs. The code is available at
https://github.com/runminjiang/PW4MedSeg.
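The abstract only names the Probability-informed Segmentation Loss Function; as an illustration of the general idea of training with annotation confidence, the sketch below weights a soft Dice loss by a per-voxel pseudo-label confidence map. The function and argument names are hypothetical, and the authors' actual loss in the repository above may differ.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_dice_loss(logits, pseudo_labels, confidence, eps=1e-6):
    """Soft Dice loss in which every voxel is weighted by the confidence of
    its pseudo-label. logits: (B, C, D, H, W) raw network outputs;
    pseudo_labels: (B, D, H, W) integer labels; confidence: (B, D, H, W)
    values in [0, 1]."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)                       # (B, C, D, H, W)
    onehot = F.one_hot(pseudo_labels, num_classes)             # (B, D, H, W, C)
    onehot = onehot.permute(0, 4, 1, 2, 3).float()             # (B, C, D, H, W)
    w = confidence.unsqueeze(1)                                # broadcast over classes

    intersection = (w * probs * onehot).sum(dim=(2, 3, 4))
    denominator = (w * (probs + onehot)).sum(dim=(2, 3, 4))
    dice = (2 * intersection + eps) / (denominator + eps)      # (B, C)
    return 1.0 - dice.mean()
```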
Related papers
- 3D Vascular Segmentation Supervised by 2D Annotation of Maximum Intensity Projection [33.34240545722551]
Vascular structure segmentation plays a crucial role in medical analysis and clinical applications.
Existing weakly supervised methods have exhibited suboptimal performance when handling sparse vascular structures.
Here, we employ maximum intensity projection (MIP) to decrease the dimensionality of 3D volume to 2D image for efficient annotation.
We introduce a weakly-supervised network that fuses 2D-3D deep features via MIP to further improve segmentation performance.
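As a rough illustration of the dimensionality reduction this relies on, the sketch below computes a maximum intensity projection in PyTorch and keeps the argmax indices, which is one plausible way to map 2D annotations back to voxels; the function name and the back-projection detail are assumptions, not the paper's actual code.

```python
import torch

def maximum_intensity_projection(volume, dim=2):
    """Collapse a 3D volume of shape (B, C, D, H, W) into a 2D image of shape
    (B, C, H, W) by keeping the maximum intensity along one spatial axis.
    The argmax indices are returned so that labels drawn on the 2D projection
    could be scattered back to the voxels they came from."""
    mip, index = volume.max(dim=dim)
    return mip, index
```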
arXiv Detail & Related papers (2024-02-19T13:24:46Z)
- CV-Attention UNet: Attention-based UNet for 3D Cerebrovascular Segmentation of Enhanced TOF-MRA Images [2.2265536092123006]
We propose the 3D cerebrovascular attention UNet method, named CV-AttentionUNet, for precise extraction of brain vessel images.
To combine low- and high-level semantics, we apply an attention mechanism.
We believe that the novelty of this algorithm lies in its ability to perform well on both labeled and unlabeled data.
arXiv Detail & Related papers (2023-11-16T22:31:05Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- Dynamic Linear Transformer for 3D Biomedical Image Segmentation [2.440109381823186]
Transformer-based neural networks have shown promising performance on many biomedical image segmentation tasks.
The main challenge for 3D transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism.
We propose a novel transformer architecture for 3D medical image segmentation using an encoder-decoder style architecture with linear complexity.
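The abstract does not specify which linear-attention formulation is used; as a generic illustration of how the quadratic cost can be avoided, the sketch below implements the kernelized linear attention of Katharopoulos et al. (elu(x) + 1 feature map). It should not be read as the paper's actual mechanism.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Attention with cost linear in the number of tokens N.
    q, k, v have shape (B, heads, N, d); keys and values are aggregated once,
    so no N x N attention matrix is ever formed."""
    q = F.elu(q) + 1
    k = F.elu(k) + 1
    kv = torch.einsum('bhnd,bhne->bhde', k, v)                 # (B, H, d, d_v)
    z = 1.0 / (torch.einsum('bhnd,bhd->bhn', q, k.sum(dim=2)) + eps)
    return torch.einsum('bhnd,bhde,bhn->bhne', q, kv, z)       # (B, H, N, d_v)
```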
arXiv Detail & Related papers (2022-06-01T21:15:01Z)
- Planar 3D Transfer Learning for End to End Unimodal MRI Unbalanced Data Segmentation [0.0]
We present a novel 2D-to-3D transfer learning approach based on mapping pre-trained 2D convolutional neural network weights into planar 3D kernels.
The method is validated with the proposed planar 3D res-u-net network, whose encoder is transferred from the 2D VGG-16; a sketch of the weight mapping follows below.
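A minimal sketch of the 2D-to-3D weight mapping, under the assumption that each 2D kernel is copied into a planar 1 x k x k 3D kernel; the helper name is hypothetical and strides/paddings are assumed to be plain tuples.

```python
import torch
import torch.nn as nn

def planar_conv3d_from_conv2d(conv2d: nn.Conv2d) -> nn.Conv3d:
    """Build a 3D convolution whose kernels are planar (1 x kH x kW) copies
    of a pre-trained 2D convolution's weights, so 2D ImageNet-style weights
    can initialize a 3D network."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(1, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(0, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        conv3d.weight.copy_(conv2d.weight.unsqueeze(2))  # (out, in, 1, kH, kW)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```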
arXiv Detail & Related papers (2020-11-23T17:11:50Z)
- Weakly-supervised Learning For Catheter Segmentation in 3D Frustum Ultrasound [74.22397862400177]
We propose a novel Frustum-ultrasound-based catheter segmentation method.
The proposed method achieves state-of-the-art performance with an efficiency of 0.25 seconds per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than fully supervised methods.
Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation.
With the detected catheter, patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
- Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound [59.105304755899034]
This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in transrectal ultrasound (TRUS) images.
Our attention module selectively leverages multilevel features integrated from different layers (as sketched below).
Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance.
arXiv Detail & Related papers (2019-07-03T05:21:52Z)
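The attention module above is only described at a high level; the sketch below shows one plausible way to weight and fuse multi-level feature maps per voxel. The class name, shapes, and the assumption that all levels are already resampled to a common resolution are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiLevelAttentionFusion(nn.Module):
    """Fuse feature maps from several network levels with learned per-voxel
    attention weights, a rough stand-in for 'selectively leveraging
    multilevel features'."""
    def __init__(self, in_channels: int, num_levels: int):
        super().__init__()
        # A 1x1x1 convolution scores each level at every voxel.
        self.score = nn.Conv3d(in_channels * num_levels, num_levels, kernel_size=1)

    def forward(self, features):
        # features: list of num_levels tensors, each of shape (B, C, D, H, W),
        # already resampled to the same spatial resolution.
        stacked = torch.cat(features, dim=1)                  # (B, C*L, D, H, W)
        weights = torch.softmax(self.score(stacked), dim=1)   # (B, L, D, H, W)
        fused = sum(w.unsqueeze(1) * f
                    for w, f in zip(weights.unbind(dim=1), features))
        return fused                                          # (B, C, D, H, W)
```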