Label-efficient Multi-organ Segmentation Method with Diffusion Model
- URL: http://arxiv.org/abs/2402.15216v1
- Date: Fri, 23 Feb 2024 09:25:57 GMT
- Title: Label-efficient Multi-organ Segmentation Method with Diffusion Model
- Authors: Yongzhi Huang, Jinxin Zhu, Haseeb Hassan, Liyilei Su, Jingyu Li, and
Binding Huang
- Abstract summary: We present a label-efficient learning approach using a pre-trained diffusion model for multi-organ segmentation tasks in CT images.
Our method achieves competitive multi-organ segmentation performance compared to state-of-the-art methods on the FLARE 2022 dataset.
- Score: 6.413416851085592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of multiple organs in Computed Tomography (CT) images
plays a vital role in computer-aided diagnosis systems. Various
supervised-learning approaches have been proposed recently. However, these
methods heavily depend on a large amount of high-quality labeled data, which is
expensive to obtain in practice. In this study, we present a label-efficient
learning approach using a pre-trained diffusion model for multi-organ
segmentation tasks in CT images. First, a denoising diffusion model was trained
using unlabeled CT data, generating additional two-dimensional (2D) CT images.
Then the pre-trained denoising diffusion network was transferred to the
downstream multi-organ segmentation task, effectively creating a
semi-supervised learning model that requires only a small amount of labeled
data. Furthermore, linear classification and fine-tuning decoder strategies
were employed to enhance the network's segmentation performance. Our generative
model at 256x256 resolution achieves impressive performance in terms of
Fréchet inception distance, spatial Fréchet inception distance, and
F1-score, with values of 11.32, 46.93, and 73.1%, respectively. These results
affirm the diffusion model's ability to generate diverse and realistic 2D CT
images. Additionally, our method achieves competitive multi-organ segmentation
performance compared to state-of-the-art methods on the FLARE 2022 dataset,
particularly in limited labeled data scenarios. Remarkably, even with only 1%
and 10% labeled data, our method achieves Dice similarity coefficients (DSCs)
of 71.56% and 78.51% after fine-tuning, respectively. The method achieves a
DSC score of 51.81% using just four labeled CT scans. These results
demonstrate the efficacy of our approach in overcoming the limitations of
supervised learning heavily reliant on large-scale labeled data.
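For intuition, here is a minimal PyTorch-style sketch of the two-stage recipe described above: DDPM-style noise-prediction pre-training on unlabeled 2D CT slices, followed by reuse of the pre-trained encoder for multi-organ segmentation from a small labeled set. All module names and sizes (TinyEncoder, Denoiser, SegHead) are illustrative placeholders, not the paper's actual diffusion U-Net or decoder strategies.

```python
# Sketch only: tiny stand-ins for the paper's diffusion U-Net and segmentation head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for the diffusion U-Net encoder."""
    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Conv2d(1, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1)
    def forward(self, x):
        h1 = F.silu(self.conv1(x))
        h2 = F.silu(self.conv2(h1))
        return h1, h2

class Denoiser(nn.Module):
    """Predicts the noise added to a CT slice at diffusion step t (epsilon-prediction)."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = TinyEncoder(ch)
        self.t_embed = nn.Embedding(1000, ch * 2)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)
    def forward(self, x_t, t):
        h1, h2 = self.encoder(x_t)
        h2 = h2 + self.t_embed(t)[:, :, None, None]
        return self.out(F.silu(self.up(h2)) + h1)

def ddpm_pretrain_step(model, x0, betas, opt):
    """One DDPM training step on unlabeled slices: add noise, predict it."""
    t = torch.randint(0, len(betas), (x0.size(0),))
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    loss = F.mse_loss(model(x_t, t), noise)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

class SegHead(nn.Module):
    """Downstream head: reuses the pre-trained encoder, trained on few labels."""
    def __init__(self, encoder, n_organs, ch=32, freeze_encoder=True):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:  # "linear classification" regime; unfreeze to fine-tune
            for p in self.encoder.parameters():
                p.requires_grad_(False)
        self.up = nn.ConvTranspose2d(ch * 2, ch, 2, stride=2)
        self.classify = nn.Conv2d(ch, n_organs, 1)
    def forward(self, x):
        h1, h2 = self.encoder(x)
        return self.classify(F.silu(self.up(h2)) + h1)

# Usage: pre-train on unlabeled slices, then fit SegHead on the small labeled subset.
denoiser = Denoiser()
betas = torch.linspace(1e-4, 0.02, 1000)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
ddpm_pretrain_step(denoiser, torch.randn(4, 1, 64, 64), betas, opt)
seg = SegHead(denoiser.encoder, n_organs=13)      # e.g. 13 abdominal organ classes
logits = seg(torch.randn(2, 1, 64, 64))           # (2, 13, 64, 64)
```

Freezing the encoder corresponds to the linear-classification regime mentioned in the abstract; unfreezing it gives the fine-tuning variant.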
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
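As a point of reference for the mean-teacher family that PMT builds on, the sketch below shows the generic ingredient: an exponential-moving-average (EMA) teacher producing confidence-filtered pseudo labels for unlabeled images. The progressive and temporal-consistency machinery specific to PMT is not detailed in the summary and is not reproduced here; names, weights, and thresholds are illustrative.

```python
# Generic mean-teacher step (not the exact PMT algorithm): an EMA "teacher"
# produces pseudo labels that supervise the "student" on unlabeled images.
import copy
import torch
import torch.nn.functional as F

def build_teacher(student: torch.nn.Module) -> torch.nn.Module:
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def semi_supervised_step(student, teacher, x_lab, y_lab, x_unlab, opt,
                         unsup_weight=0.1, conf_thresh=0.9):
    # Supervised loss on the few labeled images.
    sup_loss = F.cross_entropy(student(x_lab), y_lab)
    # Pseudo labels from the teacher, keeping only confident pixels.
    with torch.no_grad():
        probs = teacher(x_unlab).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
    unsup_loss = F.cross_entropy(student(x_unlab), pseudo, reduction="none")
    unsup_loss = (unsup_loss * (conf > conf_thresh)).mean()
    loss = sup_loss + unsup_weight * unsup_loss
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(teacher, student)
    return loss.item()
```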
arXiv Detail & Related papers (2024-09-08T15:02:25Z) - Wound Tissue Segmentation in Diabetic Foot Ulcer Images Using Deep Learning: A Pilot Study [5.397013836968946]
We have created a DFUTissue dataset for the research community to evaluate wound tissue segmentation algorithms.
The dataset contains 110 images with tissues labeled by wound experts and 600 unlabeled images.
Due to the limited amount of annotated data, our framework consists of both supervised learning (SL) and semi-supervised learning (SSL) phases.
arXiv Detail & Related papers (2024-06-23T05:01:51Z) - CriDiff: Criss-cross Injection Diffusion Framework via Generative Pre-train for Prostate Segmentation [60.61972883059688]
CriDiff is a two-stage feature injecting framework with a Crisscross Injection Strategy (CIS) and a Generative Pre-train (GP) approach for prostate segmentation.
To effectively learn multiple levels of edge and non-edge features, we propose two parallel conditioners in the CIS.
The GP approach eases the inconsistency between the image features and the diffusion model without adding additional parameters.
arXiv Detail & Related papers (2024-06-20T10:46:50Z) - A Closer Look at Spatial-Slice Features Learning for COVID-19 Detection [8.215897530386343]
We introduce an enhanced Spatial-Slice Feature Learning (SSFL++) framework specifically designed for CT scans.
It aims to filter out out-of-distribution (OOD) data within the whole CT scan, enabling us to select crucial spatial slices for analysis by reducing redundancy by 70%.
Experiments demonstrate the promising performance of our model using a simple EfficientNet-2D (E2D) model, even with only 1% of the training data.
arXiv Detail & Related papers (2024-04-02T05:19:27Z) - Simple 2D Convolutional Neural Network-based Approach for COVID-19 Detection [8.215897530386343]
This study explores the use of deep learning techniques for analyzing lung Computed Tomography (CT) images.
We propose an advanced Spatial-Slice Feature Learning (SSFL++) framework specifically tailored for CT scans.
It aims to filter out out-of-distribution (OOD) data within the entire CT scan, allowing us to select essential spatial-slice features for analysis by reducing data redundancy by 70%.
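The summaries above do not spell out how SSFL++ scores slices, so the sketch below only illustrates the general idea of discarding roughly 70% of a scan: it ranks slices by a hypothetical intensity-variance score and keeps the top 30%. The actual SSFL++ criterion may differ.

```python
# Illustrative only: a simple variance-based proxy for slice informativeness,
# keeping ~30% of slices to mirror the "drop ~70% redundancy" idea above.
import numpy as np

def select_informative_slices(volume: np.ndarray, keep_ratio: float = 0.3):
    """volume: (num_slices, H, W) CT volume in Hounsfield units."""
    windowed = np.clip(volume, -1000, 400)             # hypothetical scoring window
    scores = windowed.reshape(volume.shape[0], -1).var(axis=1)
    k = max(1, int(round(keep_ratio * volume.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:k])        # keep original slice order
    return volume[keep], keep

# Example: keep ~30% of a 100-slice scan for the downstream 2D classifier.
scan = np.random.randint(-1000, 400, size=(100, 64, 64)).astype(np.float32)
subset, kept_idx = select_informative_slices(scan)
print(subset.shape, kept_idx[:5])
```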
arXiv Detail & Related papers (2024-03-17T14:34:51Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
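The exact PCA discriminator is not described in this summary; below is a hedged sketch of a patch-level confidence discriminator in the PatchGAN spirit, which scores image-segmentation pairs per patch rather than per image and thus provides the kind of dense gradient feedback the summary refers to. Layer counts and channel sizes are arbitrary.

```python
# Patch-level confidence discriminator (PatchGAN-style), a stand-in for the
# PCA discriminator whose exact design is not given above.
import torch
import torch.nn as nn

class PatchConfidenceDiscriminator(nn.Module):
    def __init__(self, in_ch, n_classes, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + n_classes, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 3, padding=1),  # one confidence value per patch
        )
    def forward(self, image, seg_probs):
        # A grid of patch confidences gives the segmentation network
        # dense (per-patch) adversarial feedback.
        return self.net(torch.cat([image, seg_probs], dim=1))

disc = PatchConfidenceDiscriminator(in_ch=1, n_classes=4)
conf_map = disc(torch.randn(2, 1, 128, 128),
                torch.softmax(torch.randn(2, 4, 128, 128), dim=1))
print(conf_map.shape)  # (2, 1, 32, 32): one confidence per 4x4-downsampled patch
```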
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - A Unified Framework for Generalized Low-Shot Medical Image Segmentation
with Scarce Data [24.12765716392381]
We propose a unified framework for generalized low-shot (one- and few-shot) medical image segmentation based on distance metric learning (DML).
Via DML, the framework learns a multimodal mixture representation for each category, and performs dense predictions based on cosine distances between the pixels' deep embeddings and the category representations.
In our experiments on brain MRI and abdominal CT datasets, the proposed framework achieves superior performance for low-shot segmentation compared with standard DNN-based (3D U-Net) and classical registration-based (ANTs) methods.
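The core mechanism, dense prediction from cosine distances between pixel embeddings and category representations, can be sketched in a few lines; for simplicity this uses a single prototype per class rather than the paper's multimodal mixture representation.

```python
# Dense prediction via cosine similarity to class prototypes (simplified DML).
import torch
import torch.nn.functional as F

def dense_cosine_logits(pixel_embeddings, prototypes, temperature=0.1):
    """pixel_embeddings: (B, D, H, W); prototypes: (C, D) -> logits (B, C, H, W)."""
    emb = F.normalize(pixel_embeddings, dim=1)
    proto = F.normalize(prototypes, dim=1)
    # Cosine similarity between every pixel embedding and every class prototype.
    logits = torch.einsum("bdhw,cd->bchw", emb, proto)
    return logits / temperature

emb = torch.randn(2, 64, 32, 32)       # deep pixel embeddings from any backbone
prototypes = torch.randn(5, 64)        # one learned prototype per category
pred = dense_cosine_logits(emb, prototypes).argmax(dim=1)   # (2, 32, 32) label map
```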
arXiv Detail & Related papers (2021-10-18T13:01:06Z) - Boosting Segmentation Performance across datasets using histogram
specification with application to pelvic bone segmentation [1.3750624267664155]
We propose a methodology based on modulation of image tonal distributions and deep learning to boost the performance of networks trained on limited data.
The segmentation task uses a U-Net configuration with an EfficientNet-B0 backbone, optimized using an augmented BCE-IoU loss function.
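Two ingredients named above can be sketched with standard tools: histogram specification via scikit-image's match_histograms, and a combined BCE + soft-IoU loss. The paper's exact augmentation scheme and loss weighting may differ; the helper names here are illustrative.

```python
# Histogram specification plus a BCE + soft-IoU loss, sketched with standard tools.
import numpy as np
import torch
import torch.nn.functional as F
from skimage.exposure import match_histograms

def specify_histogram(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map `image` intensities so its tonal distribution matches `reference`."""
    return match_histograms(image, reference)

def bce_iou_loss(logits, target, iou_weight=1.0, eps=1e-6):
    """logits, target: (B, 1, H, W); target is a binary mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = (probs + target - probs * target).sum(dim=(1, 2, 3))
    soft_iou = (inter + eps) / (union + eps)
    return bce + iou_weight * (1.0 - soft_iou).mean()

# Usage: harmonize an image against a reference, then train with the joint loss.
harmonized = specify_histogram(np.random.rand(64, 64), np.random.rand(64, 64))
loss = bce_iou_loss(torch.randn(2, 1, 64, 64),
                    torch.randint(0, 2, (2, 1, 64, 64)).float())
```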
arXiv Detail & Related papers (2021-01-26T23:48:40Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
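The summary does not give the precise online metric-learning loss, so the sketch below shows one plausible reading: randomly sample voxel embeddings each step, then pull same-label pairs together and push different-label pairs below a similarity margin. Shapes are 2D slices for brevity; MetricUNet itself operates on CT volumes.

```python
# Voxel-wise online metric learning, sketched as a sampled pairwise cosine loss.
# Not MetricUNet's exact loss, which is not specified in the summary above.
import torch
import torch.nn.functional as F

def sampled_voxel_metric_loss(embeddings, labels, n_samples=256, margin=0.5):
    """embeddings: (B, D, H, W); labels: (B, H, W) integer class labels."""
    B, D = embeddings.shape[:2]
    flat_emb = embeddings.permute(0, 2, 3, 1).reshape(-1, D)   # (B*H*W, D)
    flat_lab = labels.reshape(-1)
    idx = torch.randperm(flat_lab.numel())[:n_samples]         # online sampling
    emb = F.normalize(flat_emb[idx], dim=1)
    lab = flat_lab[idx]
    sim = emb @ emb.t()                                        # pairwise cosine similarity
    same = (lab[:, None] == lab[None, :]).float()
    # Same-label pairs -> high similarity; different-label pairs -> below margin.
    pos_loss = (1.0 - sim) * same
    neg_loss = F.relu(sim - margin) * (1.0 - same)
    return (pos_loss + neg_loss).mean()

loss = sampled_voxel_metric_loss(torch.randn(2, 32, 24, 24),
                                 torch.randint(0, 3, (2, 24, 24)))
```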
arXiv Detail & Related papers (2020-05-15T10:37:02Z) - 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
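A minimal sketch of that two-step idea, assuming placeholder architectures rather than the paper's multi-scale fully convolutional CNN: train a 3D convolutional autoencoder on unlabeled volumes by reconstruction, then reuse its encoder features for the segmentation network.

```python
# Illustrative autoencoder feature extraction; architectures are placeholders.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(ch * 2, ch, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(ch, 1, 2, stride=2),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1: reconstruction training on unlabeled CT volumes.
ae = ConvAutoencoder()
vol = torch.randn(1, 1, 32, 64, 64)
recon_loss = nn.functional.mse_loss(ae(vol), vol)

# Stage 2: extract encoder features for the segmentation CNN
# (here just inspected; the paper feeds them to a multi-scale FCN).
with torch.no_grad():
    feats = ae.encoder(vol)     # (1, 32, 8, 16, 16) learned features
```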
arXiv Detail & Related papers (2020-03-17T20:20:43Z)