IDEAL: Improved DEnse locAL Contrastive Learning for Semi-Supervised
Medical Image Segmentation
- URL: http://arxiv.org/abs/2210.15075v1
- Date: Wed, 26 Oct 2022 23:11:02 GMT
- Title: IDEAL: Improved DEnse locAL Contrastive Learning for Semi-Supervised
Medical Image Segmentation
- Authors: Hritam Basak, Soumitri Chattopadhyay, Rohit Kundu, Sayan Nag, Rammohan
Mallipeddi
- Abstract summary: We extend the concept of metric learning to the segmentation task.
We propose a simple convolutional projection head for obtaining dense pixel-level features.
A bidirectional consistency regularization mechanism involving two-stream model training is devised for the downstream task.
- Score: 3.6748639131154315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the scarcity of labeled data, Contrastive Self-Supervised Learning
(SSL) frameworks have lately shown great potential in several medical image
analysis tasks. However, the existing contrastive mechanisms are sub-optimal
for dense pixel-level segmentation tasks due to their inability to mine local
features. To this end, we extend the concept of metric learning to the
segmentation task, using a dense (dis)similarity learning for pre-training a
deep encoder network, and employing a semi-supervised paradigm to fine-tune for
the downstream task. Specifically, we propose a simple convolutional projection
head for obtaining dense pixel-level features, and a new contrastive loss to
utilize these dense projections thereby improving the local representations. A
bidirectional consistency regularization mechanism involving two-stream model
training is devised for the downstream task. Upon comparison, our IDEAL method
outperforms the SoTA methods by fair margins on cardiac MRI segmentation.
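To make the dense pre-training idea concrete, below is a minimal sketch of a convolutional projection head and a pixel-level InfoNCE-style contrastive loss between two views of an image. The module names, shapes, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumed PyTorch): dense projection head + pixel-level contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseProjectionHead(nn.Module):
    """Maps encoder feature maps to an L2-normalised per-pixel embedding."""
    def __init__(self, in_channels: int, proj_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, proj_dim, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) -> (B, proj_dim, H, W), unit-norm along the channel axis
        return F.normalize(self.proj(feats), dim=1)

def dense_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Pixel-wise InfoNCE: each location in view 1 is attracted to the same
    location in view 2 and repelled from every other location."""
    b, d, h, w = z1.shape
    q = z1.flatten(2).permute(0, 2, 1)                 # (B, HW, D) queries
    k = z2.flatten(2).permute(0, 2, 1)                 # (B, HW, D) keys
    logits = torch.bmm(q, k.transpose(1, 2)) / tau     # (B, HW, HW) similarities / temperature
    targets = torch.arange(h * w, device=z1.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(-1, h * w), targets.reshape(-1))
```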
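Likewise, the bidirectional two-stream regularization used for fine-tuning can be pictured as two segmentation networks that supervise each other on unlabeled images. The sketch below is a generic cross-pseudo-labeling instantiation of that idea, not the paper's exact loss or schedule; `model_a` and `model_b` are assumed placeholder networks.

```python
# Minimal sketch (assumed PyTorch): bidirectional consistency between two streams.
import torch
import torch.nn.functional as F

def bidirectional_consistency(model_a, model_b, unlabeled: torch.Tensor) -> torch.Tensor:
    logits_a = model_a(unlabeled)            # (B, num_classes, H, W)
    logits_b = model_b(unlabeled)

    # Hard pseudo-labels from each stream; no gradient flows through the "teacher" side.
    pseudo_a = logits_a.argmax(dim=1).detach()
    pseudo_b = logits_b.argmax(dim=1).detach()

    # Stream A learns from B's pseudo-labels and vice versa.
    loss_a = F.cross_entropy(logits_a, pseudo_b)
    loss_b = F.cross_entropy(logits_b, pseudo_a)
    return 0.5 * (loss_a + loss_b)
```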
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z) - Multi-level Asymmetric Contrastive Learning for Volumetric Medical Image Segmentation Pre-training [18.01020160596681]
We propose a novel contrastive learning framework named MACL for volumetric medical image segmentation pre-training.
Experiments on 12 medical image datasets indicate that our MACL framework outperforms 11 existing contrastive learning strategies.
arXiv Detail & Related papers (2023-09-21T08:22:44Z) - Semi-supervised Domain Adaptive Medical Image Segmentation through
Consistency Regularized Disentangled Contrastive Learning [11.049672162852733]
In this work, we investigate relatively less explored semi-supervised domain adaptation (SSDA) for medical image segmentation.
We propose a two-stage training process: first, an encoder is pre-trained in a self-learning paradigm using a novel domain-content disentangled contrastive learning (CL) along with a pixel-level feature consistency constraint.
We experimentally validate that our proposed method can easily be extended to UDA settings, further underscoring the superiority of the proposed strategy.
arXiv Detail & Related papers (2023-07-06T06:13:22Z) - Robust and Efficient Segmentation of Cross-domain Medical Images [37.38861543166964]
We propose a generalizable knowledge distillation method for robust and efficient segmentation of medical images.
We propose two generalizable knowledge distillation schemes, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD).
In DICD, the domain-invariant semantic vectors from the two models (i.e., teacher and student) are leveraged to cross-reconstruct features by the header exchange of MSAN.
arXiv Detail & Related papers (2022-07-26T15:55:36Z) - Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of
Semantics and Depth [83.94528876742096]
We tackle the MTL problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth called AffineMix, and a simple depth augmentation using predicted semantics called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z) - Boosting Semi-supervised Image Segmentation with Global and Local Mutual
Information Regularization [9.994508738317585]
We present a novel semi-supervised segmentation method that leverages mutual information (MI) on categorical distributions.
We evaluate the method on three challenging publicly-available datasets for medical image segmentation.
arXiv Detail & Related papers (2021-03-08T15:13:25Z) - Semi-supervised Left Atrium Segmentation with Mutual Consistency
Training [60.59108570938163]
We propose a novel Mutual Consistency Network (MC-Net) for semi-supervised left atrium segmentation from 3D MR images.
Our MC-Net consists of one encoder and two slightly different decoders, and the prediction discrepancies of the two decoders are used as an unsupervised loss (a minimal sketch of this idea appears after this list).
We evaluate our MC-Net on the public Left Atrium (LA) database and it obtains impressive performance gains by exploiting the unlabeled data effectively.
arXiv Detail & Related papers (2021-03-04T09:34:32Z) - Dense Contrastive Learning for Self-Supervised Visual Pre-Training [102.15325936477362]
We present dense contrastive learning, which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images.
Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only 1% slower).
arXiv Detail & Related papers (2020-11-18T08:42:32Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - Contrastive learning of global and local features for medical image
segmentation with limited annotations [10.238403787504756]
A key requirement for the success of supervised deep learning is a large labeled dataset.
We propose strategies for extending the contrastive learning framework for segmentation of medical images in the semi-supervised setting.
In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques.
arXiv Detail & Related papers (2020-06-18T13:31:26Z) - MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT
Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
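As referenced in the Mutual Consistency Training entry above, the two-decoder idea can be sketched as a shared encoder feeding two slightly different decoders, with the discrepancy between their soft predictions on unlabeled images minimized as an unsupervised loss. Class and function names below are placeholders, not the MC-Net authors' code.

```python
# Minimal sketch (assumed PyTorch): one encoder, two decoders, prediction discrepancy as unsupervised loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoDecoderSegNet(nn.Module):
    def __init__(self, encoder: nn.Module, decoder_a: nn.Module, decoder_b: nn.Module):
        super().__init__()
        self.encoder, self.decoder_a, self.decoder_b = encoder, decoder_a, decoder_b

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)
        return self.decoder_a(feats), self.decoder_b(feats)

def mutual_consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    # Mean squared discrepancy between the two decoders' probability maps.
    return F.mse_loss(torch.softmax(logits_a, dim=1), torch.softmax(logits_b, dim=1))
```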