Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels
- URL: http://arxiv.org/abs/2107.13741v1
- Date: Thu, 29 Jul 2021 04:30:46 GMT
- Title: Self-Paced Contrastive Learning for Semi-supervised Medical Image
Segmentation with Meta-labels
- Authors: Jizong Peng, Ping Wang, Christian Desrosiers, Marco Pedersoli
- Abstract summary: We propose to adapt contrastive learning to work with meta-label annotations.
We use the meta-labels for pre-training the image encoder as well as to regularize a semi-supervised training.
Results on three different medical image segmentation datasets show that our approach highly boosts the performance of a model trained on a few scans.
- Score: 6.349708371894538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-training a recognition model with contrastive learning on a large dataset
of unlabeled data has shown great potential to boost the performance of a
downstream task, e.g., image classification. However, in domains such as
medical imaging, collecting unlabeled data can be challenging and expensive. In
this work, we propose to adapt contrastive learning to work with meta-label
annotations, for improving the model's performance in medical image
segmentation even when no additional unlabeled data is available. Meta-labels
such as the location of a 2D slice in a 3D MRI scan or the type of device used,
often come for free during the acquisition process. We use the meta-labels for
pre-training the image encoder as well as to regularize a semi-supervised
training, in which a reduced set of annotated data is used for training.
Finally, to fully exploit the weak annotations, a self-paced learning approach
is used to help the learning and discriminate useful labels from noise. Results
on three different medical image segmentation datasets show that our approach:
i) highly boosts the performance of a model trained on a few scans, ii)
outperforms previous contrastive and semi-supervised approaches, and iii)
reaches close to the performance of a model trained on the full data.
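The two ingredients the abstract describes, a contrastive loss whose positive pairs are defined by shared meta-labels (e.g., slices from the same position bin of a 3D scan), and self-paced weighting that keeps easy samples first and admits harder, possibly noisy ones later, can be illustrated with a minimal NumPy sketch. This is an assumed reading of the method, not the authors' implementation; the function names and the hard binary weighting scheme are illustrative choices.

```python
import numpy as np

def meta_label_contrastive_loss(embeddings, meta_labels, temperature=0.1):
    """Per-anchor supervised contrastive loss where two samples are a
    positive pair iff they share a meta-label (illustrative sketch)."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    n = len(meta_labels)
    # cosine similarities, with self-similarity masked out of the softmax
    logits = (z @ z.T) / temperature - 1e9 * np.eye(n)
    m = logits.max(axis=1, keepdims=True)  # stabilize log-softmax
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    labels = np.asarray(meta_labels)
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    counts = pos.sum(axis=1)
    valid = counts > 0  # anchors with at least one positive
    return -(log_prob * pos).sum(axis=1)[valid] / counts[valid]

def self_paced_weights(per_sample_loss, lam):
    """Hard self-paced weights: keep samples whose loss is below the age
    parameter lam; lam grows during training so noisy or hard meta-labels
    only enter once the model has matured."""
    return (per_sample_loss < lam).astype(float)
```

In use, the mean of the weighted per-anchor losses would drive the encoder pre-training, with `lam` raised on a schedule each epoch so the fraction of retained samples grows over time.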
Related papers
- Integration of Self-Supervised BYOL in Semi-Supervised Medical Image Recognition [10.317372960942972]
We propose an innovative approach by integrating self-supervised learning into semi-supervised models to enhance medical image recognition.
Our approach optimally leverages unlabeled data, outperforming existing methods in terms of accuracy for medical image recognition.
arXiv Detail & Related papers (2024-04-16T09:12:16Z)
- Pseudo Label-Guided Data Fusion and Output Consistency for Semi-Supervised Medical Image Segmentation [9.93871075239635]
We propose the PLGDF framework, which builds upon the mean teacher network for segmenting medical images with less annotation.
We propose a novel pseudo-label utilization scheme, which combines labeled and unlabeled data to augment the dataset effectively.
Our framework yields superior performance compared to six state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2023-11-17T06:36:43Z)
- Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation [13.707121013895929]
We present a novel semi-supervised learning method, Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation.
We use distinct decoders for the student and teacher networks while maintaining the same encoder.
To learn from unlabeled data, we create pseudo-labels generated by the teacher networks and augment the training data with the pseudo-labels.
arXiv Detail & Related papers (2023-08-31T09:13:34Z)
- Semi-Supervised Image Captioning by Adversarially Propagating Labeled Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We report extensive empirical results on both (1) image-based and (2) dense region-based captioning datasets, followed by a comprehensive analysis of the scarcely-paired dataset.
arXiv Detail & Related papers (2023-01-26T15:25:43Z)
- Triple-View Feature Learning for Medical Image Segmentation [9.992387025633805]
TriSegNet is a semi-supervised semantic segmentation framework.
It uses triple-view feature learning on a limited amount of labelled data and a large amount of unlabeled data.
arXiv Detail & Related papers (2022-08-12T14:41:40Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Incorporating Semi-Supervised and Positive-Unlabeled Learning for Boosting Full Reference Image Quality Assessment [73.61888777504377]
Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference from a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it encouraging to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest to incorporate semi-supervised and positive-unlabeled (PU) learning for exploiting unlabeled data while mitigating the adverse effect of outliers.
arXiv Detail & Related papers (2022-04-19T09:10:06Z)
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model while updating the labels on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
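The alternating scheme described above can be sketched as a short training loop. This is a minimal illustration of the stated idea, not the ATSO implementation: `model_update` and `predict` are hypothetical stand-ins for a fine-tuning step and a forward pass, and the half/half split and round count are illustrative.

```python
import numpy as np

def atso_train(model_update, predict, labeled, unlabeled, rounds=4, seed=0):
    """Asynchronous teacher-student sketch: split the unlabeled pool into
    two halves; each round, fine-tune on the labeled data plus the current
    pseudo-labels of one half, then refresh the pseudo-labels of the other
    half with the updated model, and swap roles."""
    rng = np.random.default_rng(seed)
    order = [int(i) for i in rng.permutation(len(unlabeled))]
    halves = [order[: len(order) // 2], order[len(order) // 2:]]
    pseudo = {i: predict(unlabeled[i]) for i in order}  # initial labels
    active = 0
    for _ in range(rounds):
        # fine-tune on labeled data + pseudo-labels of the active half
        train_set = labeled + [(unlabeled[i], pseudo[i]) for i in halves[active]]
        model_update(train_set)
        # refresh pseudo-labels on the *other* half only (asynchronous update)
        for i in halves[1 - active]:
            pseudo[i] = predict(unlabeled[i])
        active = 1 - active
    return pseudo
```

The key design point the summary highlights is that a subset's labels are never regenerated by the model that was just trained on them, which limits confirmation bias from stale pseudo-labels.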
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.