Self-Supervision with Superpixels: Training Few-shot Medical Image
Segmentation without Annotation
- URL: http://arxiv.org/abs/2007.09886v2
- Date: Tue, 6 Oct 2020 21:36:05 GMT
- Title: Self-Supervision with Superpixels: Training Few-shot Medical Image
Segmentation without Annotation
- Authors: Cheng Ouyang, Carlo Biffi, Chen Chen, Turkay Kart, Huaqi Qiu, Daniel
Rueckert
- Abstract summary: Few-shot semantic segmentation has great potential for medical imaging applications.
Most of the existing FSS techniques require abundant annotated semantic classes for training.
We propose a novel self-supervised FSS framework for medical images in order to eliminate the requirement for annotations during training.
- Score: 12.47837000630753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot semantic segmentation (FSS) has great potential for medical imaging
applications. Most of the existing FSS techniques require abundant annotated
semantic classes for training. However, these methods may not be applicable to
medical images, where such annotations are scarce. To address this problem, we
make several contributions: (1) A novel self-supervised FSS framework for
medical images that eliminates the requirement for annotations during training;
superpixel-based pseudo-labels are generated to provide supervision; (2) An
adaptive local prototype pooling module plugged into prototypical networks to
address the challenging foreground-background imbalance problem common in
medical image segmentation; (3) We demonstrate the general
applicability of the proposed approach for medical images using three different
tasks: abdominal organ segmentation for CT and MRI, as well as cardiac
segmentation for MRI. Our results show that, for medical image segmentation,
the proposed method outperforms conventional FSS methods which require manual
annotations for training.
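The abstract names two concrete mechanisms: superpixel-based pseudo-labels that replace manual masks during training, and an adaptive local prototype pooling module added to a prototypical network to counter foreground-background imbalance. Below is a minimal sketch of both ideas, assuming a PyTorch feature extractor and scikit-image's Felzenszwalb superpixels; the function names, Felzenszwalb parameters, pooling window, and foreground threshold are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of (1) superpixel-based pseudo-labels and (2) local prototype
# pooling on top of a prototypical network. Parameters below are illustrative
# assumptions, not the values used in the paper.
import numpy as np
import torch
import torch.nn.functional as F
from skimage.segmentation import felzenszwalb


def superpixel_pseudo_label(image_2d: np.ndarray) -> np.ndarray:
    """Partition a 2D slice into superpixels and treat one randomly chosen
    superpixel as a binary pseudo foreground mask for a training episode."""
    segments = felzenszwalb(image_2d, scale=100, sigma=0.8, min_size=200)
    chosen = np.random.choice(np.unique(segments))
    return (segments == chosen).astype(np.float32)


def local_prototypes(features: torch.Tensor, mask: torch.Tensor,
                     window: int = 4, threshold: float = 0.5) -> torch.Tensor:
    """Average-pool support features over local windows and keep the windows
    that are mostly foreground, giving several local prototypes instead of a
    single global one (this is what counters foreground-background imbalance).
    features: (C, H, W); mask: (1, H, W), aligned with the feature map."""
    fg = F.avg_pool2d((features * mask).unsqueeze(0), window).squeeze(0)  # (C, H', W')
    area = F.avg_pool2d(mask.unsqueeze(0), window).squeeze(0)             # (1, H', W')
    keep = (area > threshold).squeeze(0)                                  # (H', W') bool
    protos = (fg / area.clamp(min=1e-5)).permute(1, 2, 0)[keep]           # (N, C)
    if protos.shape[0] == 0:  # fall back to one global masked-average prototype
        protos = ((features * mask).sum(dim=(1, 2)) /
                  mask.sum().clamp(min=1e-5)).unsqueeze(0)
    return protos


def similarity_map(query_features: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Cosine similarity of every query location to its best-matching prototype;
    thresholding this map yields the predicted foreground."""
    q = F.normalize(query_features, dim=0).flatten(1)   # (C, H*W)
    p = F.normalize(protos, dim=1)                      # (N, C)
    return (p @ q).max(dim=0).values.view(query_features.shape[1:])
```

In a self-supervised episode built this way, the chosen superpixel plays the role of the support mask, and the resulting local prototypes are matched against the query feature map by cosine similarity, as in a standard prototypical network.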
Related papers
- Retrieval-augmented Few-shot Medical Image Segmentation with Foundation Models [17.461510586128874]
We propose a novel method that adapts DINOv2 and Segment Anything Model 2 for retrieval-augmented few-shot medical image segmentation.
Our approach uses DINOv2's features as queries to retrieve similar samples from limited annotated data, which are then encoded as memories and stored in a memory bank (a generic sketch of this retrieval step appears after this list).
arXiv Detail & Related papers (2024-08-16T15:48:07Z)
- SM2C: Boost the Semi-supervised Segmentation for Medical Image by using Meta Pseudo Labels and Mixed Images [13.971120210536995]
We introduce Scaling-up Mix with Multi-Class (SM2C) to improve the ability to learn semantic features within medical images.
SM2C diversifies the shapes of segmentation objects and enriches the semantic information within each sample.
The proposed framework shows significant improvements over state-of-the-art counterparts.
arXiv Detail & Related papers (2024-03-24T04:39:40Z)
- GMISeg: General Medical Image Segmentation without Re-Training [6.6467547151592505]
Deep learning models often struggle to generalise to unknown tasks involving new anatomical structures, labels, or shapes.
Here I developed a general model that can solve unknown medical image segmentation tasks without requiring additional training.
I evaluated the performance of the proposed method on medical image datasets with different imaging modalities and anatomical structures.
arXiv Detail & Related papers (2023-11-21T11:33:15Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Few Shot Medical Image Segmentation with Cross Attention Transformer [30.54965157877615]
We propose a novel framework for few-shot medical image segmentation, termed CAT-Net.
Our proposed network mines the correlations between the support image and query image, limiting them to focus only on useful foreground information (a generic cross-attention sketch appears after this list).
We validated the proposed method on three public datasets: Abd-CT, Abd-MRI, and Card-MRI.
arXiv Detail & Related papers (2023-03-24T09:10:14Z)
- PCRLv2: A Unified Visual Information Preservation Framework for Self-supervised Pre-training in Medical Image Analysis [56.63327669853693]
We propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics.
We also address the preservation of scale information, a powerful tool in aiding image understanding.
The proposed unified SSL framework surpasses its self-supervised counterparts on various tasks.
arXiv Detail & Related papers (2023-01-02T17:47:27Z)
- Few-shot Medical Image Segmentation with Cycle-resemblance Attention [20.986884555902183]
Few-shot learning has gained increasing attention in the medical image semantic segmentation field.
In this paper, we propose a novel self-supervised few-shot medical image segmentation network.
We introduce a novel Cycle-Resemblance Attention (CRA) module to fully leverage the pixel-wise relation between query and support medical images.
arXiv Detail & Related papers (2022-12-07T21:55:26Z)
- PoissonSeg: Semi-Supervised Few-Shot Medical Image Segmentation via Poisson Learning [0.505645669728935]
Few-shot Semantic Segmentation (FSS) is a promising strategy for breaking the deadlock in deep learning.
However, an FSS model still requires sufficient pixel-level annotated classes for training to avoid overfitting.
We propose a novel semi-supervised FSS framework for medical image segmentation.
arXiv Detail & Related papers (2021-08-26T10:24:04Z)
- Domain Generalization on Medical Imaging Classification using Episodic Training with Task Augmentation [62.49837463676111]
We propose a novel scheme of episodic training with task augmentation on medical imaging classification.
Motivated by the limited number of source domains in real-world medical deployment, we consider the unique problem of task-level overfitting.
arXiv Detail & Related papers (2021-06-13T03:56:59Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation.
With the detected catheter, a patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
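For the retrieval-augmented entry above (DINOv2 and Segment Anything Model 2), the summary describes embedding annotated samples, storing them as memories, and retrieving the most similar ones for a new query. A minimal, generic sketch of that retrieval step follows, assuming a frozen image encoder that returns one embedding per image; the class name and bank layout are illustrative, not the paper's implementation.

```python
# Generic sketch of retrieval from a feature memory bank. `encode_image` stands
# for any frozen encoder (e.g. a DINOv2 backbone) returning one embedding per
# image; the class name and bank layout are assumptions for illustration only.
from typing import Any, Callable, List

import torch
import torch.nn.functional as F


class FeatureMemoryBank:
    def __init__(self, encode_image: Callable[[Any], torch.Tensor]):
        self.encode_image = encode_image
        self.keys: List[torch.Tensor] = []   # L2-normalised image embeddings
        self.values: List[Any] = []          # the corresponding annotated samples

    def add(self, image: Any, annotation: Any) -> None:
        emb = F.normalize(self.encode_image(image), dim=-1)
        self.keys.append(emb)
        self.values.append(annotation)

    def retrieve(self, query_image: Any, top_k: int = 3) -> List[Any]:
        """Return the annotations of the top-k most similar stored samples."""
        query = F.normalize(self.encode_image(query_image), dim=-1)
        sims = torch.stack(self.keys) @ query            # (N,) cosine similarities
        idx = sims.topk(min(top_k, len(self.values))).indices
        return [self.values[i] for i in idx.tolist()]
```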
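For the CAT-Net entry above, the summary describes mining correlations between support and query images so that only useful foreground information is attended to. A generic sketch of cross-attention from query tokens to foreground support tokens follows; the dimensions, masking strategy, and module layout are assumptions, not the paper's architecture.

```python
# Generic sketch of cross-attention from query tokens to foreground support
# tokens. Dimensions, masking strategy, and module layout are assumptions.
import torch
import torch.nn as nn


class SupportQueryCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens: torch.Tensor, support_tokens: torch.Tensor,
                support_fg_mask: torch.Tensor) -> torch.Tensor:
        # query_tokens, support_tokens: (B, N, dim) flattened spatial features
        # support_fg_mask: (B, N), 1 for foreground tokens, 0 for background
        attended, _ = self.attn(query_tokens, support_tokens, support_tokens,
                                key_padding_mask=(support_fg_mask == 0))
        return self.norm(query_tokens + attended)  # residual connection + norm
```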
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.