Semi-supervised few-shot learning for medical image segmentation
- URL: http://arxiv.org/abs/2003.08462v2
- Date: Thu, 9 Apr 2020 01:40:33 GMT
- Title: Semi-supervised few-shot learning for medical image segmentation
- Authors: Abdur R Feyjie, Reza Azad, Marco Pedersoli, Claude Kauffman, Ismail
Ben Ayed, Jose Dolz
- Abstract summary: Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
- Score: 21.349705243254423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the great progress of deep neural networks on
semantic segmentation, particularly in medical imaging. Nevertheless, training
high-performing models requires large amounts of pixel-level ground truth masks,
which can be prohibitive to obtain in the medical domain. Furthermore, training
such models in a low-data regime highly increases the risk of overfitting.
Recent attempts to alleviate the need for large annotated datasets have
developed training strategies under the few-shot learning paradigm, which
addresses this shortcoming by learning a novel class from only a few labeled
examples. In this context, a segmentation model is trained on episodes, which
represent different segmentation problems, each of them trained with a very
small labeled dataset. In this work, we propose a novel few-shot learning
framework for semantic segmentation, where unlabeled images are also made
available at each episode. To handle this new learning paradigm, we propose to
include surrogate tasks that can leverage very powerful supervisory signals
--derived from the data itself-- for semantic feature learning. We show that
including unlabeled surrogate tasks in the episodic training leads to more
powerful feature representations, which ultimately results in better
generalizability to unseen tasks. We demonstrate the effectiveness of our method on
the task of skin lesion segmentation on two publicly available datasets.
Furthermore, our approach is general and model-agnostic, which can be combined
with different deep architectures.
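The abstract describes episodes that combine a supervised segmentation loss on the few labeled examples with a self-supervised surrogate loss on the unlabeled images. The abstract does not name the surrogate task, so the sketch below assumes 4-way rotation prediction, a common self-supervised choice; all function and variable names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, labels):
    """Mean cross-entropy over logits of shape (N, C) and integer labels (N,)."""
    logits = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def episode_loss(seg_logits, seg_labels, rot_logits, rot_labels, lam=0.5):
    """Episodic loss: supervised segmentation term on the labeled support
    pixels plus a self-supervised surrogate term (assumed here: rotation
    prediction) on the unlabeled images, weighted by lam."""
    return cross_entropy(seg_logits, seg_labels) + lam * cross_entropy(rot_logits, rot_labels)

# Toy episode: 100 labeled pixels (2 classes) and 8 unlabeled images, each
# rotated by one of {0, 90, 180, 270} degrees; the rotation index is the
# free supervisory signal derived from the data itself.
seg_logits = rng.normal(size=(100, 2))
seg_labels = rng.integers(0, 2, size=100)
rot_labels = rng.integers(0, 4, size=8)
rot_logits = rng.normal(size=(8, 4))

loss = episode_loss(seg_logits, seg_labels, rot_logits, rot_labels)
```

Because the surrogate labels come for free from the transformation applied to the unlabeled images, the episode can use them without any manual annotation.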
Related papers
- A Classifier-Free Incremental Learning Framework for Scalable Medical Image Segmentation [6.591403935303867]
We introduce a novel segmentation paradigm enabling the segmentation of a variable number of classes within a single classifier-free network.
This network is trained using contrastive learning and produces discriminative feature representations that facilitate straightforward interpretation.
We demonstrate the flexibility of our method in handling varying class numbers within a unified network and its capacity for incremental learning.
arXiv Detail & Related papers (2024-05-25T19:05:07Z)
- Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation [17.69933345468061]
Annotation scarcity has become a major obstacle for training powerful deep-learning models for medical image segmentation.
We introduce a Versatile Semi-supervised framework to exploit more unlabeled data for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-11-20T11:35:52Z)
- Explainable Semantic Medical Image Segmentation with Style [7.074258860680265]
We propose a fully supervised generative framework that can achieve generalisable segmentation with only limited labelled data.
The proposed approach creates medical image style paired with a segmentation task driven discriminator incorporating end-to-end adversarial training.
Experiments on a fully semantic, publicly available pelvis dataset demonstrated that our method generalises better under distribution shifts than other state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T04:34:51Z)
- Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning method, based on a student-teacher approach, for training the segmentation network.
The proposed framework is a key contribution and is applicable to biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- A Few Guidelines for Incremental Few-Shot Segmentation [57.34237650765928]
Given a pretrained segmentation model and few images containing novel classes, our goal is to learn to segment novel classes while retaining the ability to segment previously seen ones.
We show that the main problems of end-to-end training in this scenario are:
i) the drift of the batch-normalization statistics toward novel classes, which we can fix with batch renormalization, and
ii) the forgetting of old classes, which we can fix with regularization strategies.
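Batch renormalization (Ioffe, 2017) counters this drift by normalizing with the batch statistics and then correcting toward the running (pretraining) statistics, so a few novel-class batches cannot drag the normalization too far. A minimal numpy sketch of the forward normalization step, with illustrative names:

```python
import numpy as np

def batch_renorm(x, running_mean, running_var, r_max=3.0, d_max=5.0, eps=1e-5):
    """Batch renormalization forward pass: normalize x (N, C) with the
    batch statistics, then apply the clipped correction (r, d) that ties
    the result back to the running statistics."""
    mu_b = x.mean(axis=0)
    sigma_b = np.sqrt(x.var(axis=0) + eps)
    sigma = np.sqrt(running_var + eps)
    # r and d are treated as constants (no gradient) during training.
    r = np.clip(sigma_b / sigma, 1.0 / r_max, r_max)
    d = np.clip((mu_b - running_mean) / sigma, -d_max, d_max)
    return (x - mu_b) / sigma_b * r + d

# Activations drawn from a shifted distribution, as a novel-class batch might be.
x = np.random.default_rng(0).normal(loc=2.0, scale=1.5, size=(32, 4))
out = batch_renorm(x, running_mean=np.zeros(4), running_var=np.ones(4))
```

With `r_max=1` and `d_max=0` the correction vanishes and this reduces to plain batch normalization; larger bounds let the layer interpolate between batch and running statistics.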
arXiv Detail & Related papers (2020-11-30T20:45:56Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
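The Prototypical Network meta-learner mentioned here represents each class by the mean embedding of its support examples and classifies queries by the nearest prototype. A toy numpy sketch of that mechanic (the embedding network is omitted; names are illustrative):

```python
import numpy as np

def prototypes(support, labels, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(queries, protos):
    """Assign each query to the nearest prototype (squared Euclidean distance)."""
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way 3-shot episode in a 2-D embedding space.
support = np.array([[0., 0.], [0., 1.], [1., 0.],    # class 0
                    [5., 5.], [5., 6.], [6., 5.]])   # class 1
labels = np.array([0, 0, 0, 1, 1, 1])
protos = prototypes(support, labels, n_classes=2)
preds = classify(np.array([[0.5, 0.5], [5.5, 5.5]]), protos)  # -> [0, 1]
```

In the full model the rows of `support` and the queries would be embeddings produced by a learned network, trained episodically so that same-class embeddings cluster around their prototype.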
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
- Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation [57.68890534164427]
In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences and extra images to improve the performance on urban scene segmentation.
We simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data.
Our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks.
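The pseudo-labeling step described here is simple: a teacher predicts labels for the unlabeled data, and the next student trains on the union of human and pseudo labels. The sketch below adds an optional confidence threshold, a common refinement that the paper itself does not claim; names are illustrative:

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9):
    """Convert teacher softmax outputs (N, C) into hard pseudo-labels,
    keeping only predictions whose top probability clears the threshold."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = conf >= threshold
    return labels[keep], keep

probs = np.array([[0.95, 0.05],   # confident class 0 -> kept
                  [0.60, 0.40],   # low confidence    -> dropped
                  [0.08, 0.92]])  # confident class 1 -> kept
labels, keep = pseudo_labels(probs)
# labels -> [0, 1]; keep -> [True, False, True]
```

Iterating this loop, retraining a student on the labeled data plus the kept pseudo-labels and then letting it act as the next teacher, is the essence of the iterative semi-supervised scheme.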
arXiv Detail & Related papers (2020-05-20T18:00:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.