DenseMP: Unsupervised Dense Pre-training for Few-shot Medical Image
Segmentation
- URL: http://arxiv.org/abs/2307.09604v1
- Date: Thu, 13 Jul 2023 15:18:15 GMT
- Title: DenseMP: Unsupervised Dense Pre-training for Few-shot Medical Image
Segmentation
- Authors: Zhaoxin Fan, Puquan Pan, Zeren Zhang, Ce Chen, Tianyang Wang, Siyang
Zheng, Min Xu
- Abstract summary: Few-shot medical image semantic segmentation is of paramount importance in the domain of medical image analysis.
We introduce a novel Unsupervised Dense Few-shot Medical Image Segmentation Model Training Pipeline (DenseMP) that capitalizes on unsupervised dense pre-training.
Our proposed pipeline significantly enhances the performance of the widely recognized few-shot segmentation model, PA-Net.
- Score: 6.51140268845611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot medical image semantic segmentation is of paramount importance in
the domain of medical image analysis. However, existing methodologies grapple
with the challenge of data scarcity during the training phase, leading to
over-fitting. To mitigate this issue, we introduce a novel Unsupervised Dense
Few-shot Medical Image Segmentation Model Training Pipeline (DenseMP) that
capitalizes on unsupervised dense pre-training. DenseMP is composed of two
distinct stages: (1) segmentation-aware dense contrastive pre-training, and (2)
few-shot-aware superpixel guided dense pre-training. These stages
collaboratively yield a pre-trained initial model specifically designed for
few-shot medical image segmentation, which can subsequently be fine-tuned on
the target dataset. Our proposed pipeline significantly enhances the
performance of the widely recognized few-shot segmentation model, PA-Net,
achieving state-of-the-art results on the Abd-CT and Abd-MRI datasets. Code
will be released after acceptance.
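Until the code is released, stage (1) can be pictured with a minimal dense (pixel-level) contrastive loss in the style of DenseCL, sketched below. The function name, feature shapes, and the choice of spatially aligned positives are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dense (pixel-level) contrastive loss for stage (1).
# Assumption: positives are features at the same spatial location in two
# augmented views; all other locations in the batch act as negatives.
import torch
import torch.nn.functional as F

def dense_info_nce(feat_q, feat_k, temperature=0.07):
    """feat_q, feat_k: (B, C, H, W) feature maps from two augmented views."""
    b, c, h, w = feat_q.shape
    q = F.normalize(feat_q.flatten(2), dim=1)        # (B, C, H*W)
    k = F.normalize(feat_k.flatten(2), dim=1)
    q = q.permute(0, 2, 1).reshape(b * h * w, c)      # one row per pixel feature
    k = k.permute(0, 2, 1).reshape(b * h * w, c)
    logits = q @ k.t() / temperature                  # all-pairs similarities
    labels = torch.arange(q.shape[0], device=q.device)
    return F.cross_entropy(logits, labels)

# Usage: loss = dense_info_nce(encoder(view1), encoder(view2))
```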
Related papers
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
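The disrupt-and-reconstruct idea can be sketched roughly as below (2D images and Gaussian noise are used for brevity; the paper targets 3D radiology volumes and its exact perturbations may differ):

```python
# Illustrative disrupt-and-reconstruct pre-training step: local patch masking
# plus a low-level perturbation (Gaussian noise, as an assumption), with the
# network trained to recover the clean input.
import torch
import torch.nn.functional as F

def disrupt(x, patch=16, mask_ratio=0.5, noise_std=0.1):
    """x: (B, 1, H, W). Returns a locally masked, noised copy."""
    b, _, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch, device=x.device) > mask_ratio).float()
    mask = F.interpolate(mask, size=(h, w), mode="nearest")   # patch-wise mask
    return x * mask + noise_std * torch.randn_like(x)

def pretrain_step(model, x, optimizer):
    recon = model(disrupt(x))          # reconstruct from the disrupted input
    loss = F.l1_loss(recon, x)         # recover the original image
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```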
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Pre-Training with Diffusion models for Dental Radiography segmentation [0.0]
We propose a straightforward pre-training method for semantic segmentation.
Our approach achieves remarkable performance in terms of label efficiency.
Our experimental results on the segmentation of dental radiographs demonstrate that the proposed method is competitive with state-of-the-art pre-training methods.
arXiv Detail & Related papers (2023-07-26T09:33:24Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- SDC-UDA: Volumetric Unsupervised Domain Adaptation Framework for Slice-Direction Continuous Cross-Modality Medical Image Segmentation [8.33996223844639]
We propose SDC-UDA, a framework for slice-direction continuous cross-modality medical image segmentation.
It combines intra- and inter-slice self-attentive image translation, uncertainty-constrained pseudo-label refinement, and volumetric self-training.
We validate SDC-UDA with multiple publicly available cross-modality medical image segmentation datasets and achieve state-of-the-art segmentation performance.
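As an illustration of the pseudo-label refinement component, a confidence-thresholded variant is sketched below; the threshold value and the use of softmax confidence (rather than the paper's specific uncertainty estimate) are assumptions.

```python
# Hedged sketch of uncertainty-constrained pseudo-labeling: keep a teacher's
# pseudo-label only where the prediction is confident enough.
import torch

def refine_pseudo_labels(logits, conf_thresh=0.9, ignore_index=255):
    """logits: (B, num_classes, H, W) teacher predictions on target-domain slices."""
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)             # per-pixel confidence and label
    pseudo[conf < conf_thresh] = ignore_index   # drop uncertain pixels
    return pseudo                               # usable with CE(ignore_index=255)
```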
arXiv Detail & Related papers (2023-05-18T14:44:27Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
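The variance-reduction intuition can be illustrated with stratified sampling of pixel embeddings, as sketched below; this is a simplification, and ARCO's actual estimators and analysis are more involved.

```python
# Simplified illustration of stratified group sampling: draw an equal budget of
# pixel embeddings per (pseudo-)class before the contrastive loss, so rare
# classes are not under-represented in the estimate.
import torch

def stratified_pixel_sample(features, labels, per_class=64):
    """features: (N, C) pixel embeddings; labels: (N,) pseudo-labels."""
    picked = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        sel = idx[torch.randperm(idx.numel())[:per_class]]   # equal budget per class
        picked.append(sel)
    picked = torch.cat(picked)
    return features[picked], labels[picked]
```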
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
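A layout sketch of such a dual-task network is given below; the encoder and decoders are placeholder convolutional blocks, not the architecture used in the paper.

```python
# Sketch of a dual-task network: one shared encoder feeding a segmentation
# decoder and an inpainting decoder. Module internals are placeholders.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, in_ch=1, num_classes=2, width=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_decoder = nn.Conv2d(width, num_classes, 1)   # segmentation head
        self.inpaint_decoder = nn.Conv2d(width, in_ch, 1)     # lesion inpainting head

    def forward(self, x):
        feats = self.encoder(x)                                # shared representation
        return self.seg_decoder(feats), self.inpaint_decoder(feats)
```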
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Data-Limited Tissue Segmentation using Inpainting-Based Self-Supervised Learning [3.7931881761831328]
Self-supervised learning (SSL) methods involving pretext tasks have shown promise in reducing the need for large labeled datasets by first pretraining models on unlabeled data.
We evaluate the efficacy of two SSL methods (inpainting-based pretext tasks of context prediction and context restoration) for CT and MRI image segmentation in label-limited scenarios.
We demonstrate that optimally trained and easy-to-implement SSL segmentation models can outperform classically supervised methods for MRI and CT tissue segmentation in label-limited scenarios.
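The context-restoration pretext task can be sketched as patch swapping followed by reconstruction, as below; patch size and the number of swaps are illustrative choices.

```python
# Sketch of context restoration: corrupt an image by exchanging random patch
# pairs, then train a network to restore the original.
import torch

def swap_patches(x, patch=16, n_swaps=10):
    """x: (B, C, H, W). Returns a copy with n_swaps random patch pairs exchanged."""
    x = x.clone()
    _, _, h, w = x.shape
    for _ in range(n_swaps):
        y1, x1 = torch.randint(0, h - patch, (1,)).item(), torch.randint(0, w - patch, (1,)).item()
        y2, x2 = torch.randint(0, h - patch, (1,)).item(), torch.randint(0, w - patch, (1,)).item()
        p1 = x[:, :, y1:y1 + patch, x1:x1 + patch].clone()
        x[:, :, y1:y1 + patch, x1:x1 + patch] = x[:, :, y2:y2 + patch, x2:x2 + patch]
        x[:, :, y2:y2 + patch, x2:x2 + patch] = p1
    return x

# Pretraining then minimizes a reconstruction loss, e.g.
# F.l1_loss(model(swap_patches(x)), x)
```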
arXiv Detail & Related papers (2022-10-14T16:34:05Z)
- Mixed-UNet: Refined Class Activation Mapping for Weakly-Supervised Semantic Segmentation with Multi-scale Inference [28.409679398886304]
We develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase.
We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on a dataset collected from a local hospital as well as on public datasets.
arXiv Detail & Related papers (2022-05-06T08:37:02Z)
- PoissonSeg: Semi-Supervised Few-Shot Medical Image Segmentation via Poisson Learning [0.505645669728935]
Few-shot Semantic Segmentation (FSS) is a promising strategy for breaking the data-annotation deadlock in deep learning.
However, FSS models still require a sufficient number of pixel-level annotated classes for training to avoid overfitting.
We propose a novel semi-supervised FSS framework for medical image segmentation.
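As background on the Poisson-learning component, a minimal numpy sketch of graph-based label propagation with source terms at the few labeled nodes is shown below; graph construction and the integration into the FSS network are omitted, and the details are not taken from the paper.

```python
# Poisson-learning-style propagation on a feature graph: place centered source
# terms at labeled nodes and iteratively solve the graph Poisson equation.
import numpy as np

def poisson_propagate(W, labels, labeled_idx, n_classes, iters=200):
    """W: (N, N) symmetric affinity matrix (no isolated nodes);
    labels: class ids for the nodes in labeled_idx."""
    n = W.shape[0]
    d = W.sum(axis=1)                                 # node degrees
    b = np.zeros((n, n_classes))
    one_hot = np.eye(n_classes)[labels]
    b[labeled_idx] = one_hot - one_hot.mean(axis=0)   # centered source terms
    u = np.zeros((n, n_classes))
    for _ in range(iters):
        Lu = d[:, None] * u - W @ u                   # graph Laplacian applied to u
        u = u + (b - Lu) / d[:, None]                 # Jacobi-style update
    return u.argmax(axis=1)                           # propagated class per node
```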
arXiv Detail & Related papers (2021-08-26T10:24:04Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
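Episodic training generally follows the pattern sketched below, where each episode pairs a small support set with a query from the same class; the sampler and model interfaces are placeholders rather than this paper's implementation.

```python
# Generic episodic training step for few-shot segmentation: the model predicts
# the query mask conditioned on a k-shot support set.
import torch
import torch.nn.functional as F

def train_episode(model, sampler, optimizer, k_shot=1):
    # sampler.sample is a hypothetical helper returning tensors:
    # support_imgs (k, C, H, W), support_masks (k, H, W),
    # query_img (1, C, H, W), query_mask (1, H, W) with long dtype.
    support_imgs, support_masks, query_img, query_mask = sampler.sample(k_shot)
    logits = model(support_imgs, support_masks, query_img)   # (1, 2, H, W)
    loss = F.cross_entropy(logits, query_mask)                # foreground/background
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```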
arXiv Detail & Related papers (2020-12-10T04:01:07Z)