MMGL: Multi-Scale Multi-View Global-Local Contrastive learning for Semi-supervised Cardiac Image Segmentation
- URL: http://arxiv.org/abs/2207.01883v1
- Date: Tue, 5 Jul 2022 08:24:46 GMT
- Authors: Ziyuan Zhao, Jinxuan Hu, Zeng Zeng, Xulei Yang, Peisheng Qian,
Bharadwaj Veeravalli, Cuntai Guan
- Abstract summary: We propose a novel multi-scale multi-view global-local contrastive learning framework for medical image segmentation.
Experiments on the MM-WHS dataset demonstrate the effectiveness of the MMGL framework on semi-supervised cardiac image segmentation.
- Score: 18.275478722238123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With large-scale well-labeled datasets, deep learning has shown significant
success in medical image segmentation. However, it is challenging to acquire
abundant annotations in clinical practice due to extensive expertise
requirements and costly labeling efforts. Recently, contrastive learning has
shown a strong capacity for visual representation learning on unlabeled data,
achieving impressive performance rivaling supervised learning in many domains.
In this work, we propose a novel multi-scale multi-view global-local
contrastive learning (MMGL) framework to thoroughly explore global and local
features from different scales and views for robust contrastive learning
performance, thereby improving segmentation performance with limited
annotations. Extensive experiments on the MM-WHS dataset demonstrate the
effectiveness of the MMGL framework on semi-supervised cardiac image
segmentation, outperforming state-of-the-art contrastive learning methods by a
large margin.
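Frameworks of this kind typically build on an InfoNCE-style contrastive objective, in which paired views attract and all other samples in the batch repel. The sketch below illustrates that general idea only; it is not the MMGL formulation, and the function name and temperature value are placeholders:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Minimal InfoNCE loss over paired embeddings.

    anchors, positives: (N, D) arrays; row i of each forms a positive
    pair, and all other rows act as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive for anchor i sits on the diagonal
    return -np.mean(np.diag(log_prob))
```

Multi-scale, multi-view variants apply a loss of this shape at several feature resolutions and across differently augmented views.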
Related papers
- ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features [54.37042005469384]
We introduce MVKL, the first multimodal mammography dataset encompassing multi-view images, detailed manifestations and reports.
Based on this dataset, we focus on the challenging task of unsupervised pretraining.
We propose ViKL, a framework that synergizes Visual, Knowledge, and Linguistic features.
arXiv Detail & Related papers (2024-09-24T05:01:23Z)
- LESEN: Label-Efficient deep learning for Multi-parametric MRI-based Visual Pathway Segmentation [5.726588626363204]
We propose a label-efficient deep learning method with self-ensembling (LESEN).
LESEN incorporates supervised and unsupervised losses, enabling the student and teacher models to mutually learn from each other.
Our experiments on the Human Connectome Project (HCP) dataset demonstrate the superior performance of our method.
arXiv Detail & Related papers (2024-01-03T10:22:13Z)
- Multi-Scale Cross Contrastive Learning for Semi-Supervised Medical Image Segmentation [14.536384387956527]
We develop a novel Multi-Scale Cross Supervised Contrastive Learning framework to segment structures in medical images.
Our approach contrasts multi-scale features based on ground-truth and cross-predicted labels, in order to extract robust feature representations.
It outperforms state-of-the-art semi-supervised methods by more than 3.0% in Dice.
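The Dice score quoted above is the standard overlap metric for segmentation masks; a minimal formulation (not tied to any of these papers' code) is:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0 and disjoint masks score 0.0, so "more than 3.0% in Dice" refers to absolute points on this 0-1 (or 0-100%) scale.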
arXiv Detail & Related papers (2023-06-25T16:55:32Z)
- Scribble-supervised Cell Segmentation Using Multiscale Contrastive Regularization [9.849498498869258]
Scribble2Label (S2L) demonstrated that using only a handful of scribbles with self-supervised learning can generate accurate segmentation results without full annotation.
In this work, we employ a novel multiscale contrastive regularization term for S2L.
The main idea is to extract features from intermediate layers of the neural network for contrastive loss so that structures at various scales can be effectively separated.
arXiv Detail & Related papers (2023-06-25T06:00:33Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an efficacious deep learning model requires large datasets with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach identifies superpixels using Felzenszwalb's algorithm and performs local contrastive learning with a novel contrastive sampling loss.
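Region-level contrast of this kind needs one embedding per superpixel. A simplified pooling sketch (an illustration, not the paper's implementation) averages an encoder's feature map over each region; the region labels would come from Felzenszwalb's algorithm, e.g. `skimage.segmentation.felzenszwalb`:

```python
import numpy as np

def pool_region_features(features, labels):
    """Average-pool a feature map over superpixel regions.

    features: (H, W, D) feature map from an encoder.
    labels:   (H, W) integer region ids, e.g. produced by
              skimage.segmentation.felzenszwalb on the input image.
    Returns an (R, D) array with one embedding per region, which can
    then be fed into a region-level contrastive loss.
    """
    region_ids = np.unique(labels)
    return np.stack([features[labels == r].mean(axis=0) for r in region_ids])
```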
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve the segmentation performance compared to existing methods in both semi-supervised setting and transfer learning setting.
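The positional pairing idea can be illustrated with a small sketch (a simplified reading of the approach, not the authors' exact sampling code): slices whose normalized positions within a volume are close are treated as positive pairs.

```python
import numpy as np

def positional_pairs(n_slices, threshold=0.1):
    """Mark slice pairs as positive when their normalized positions
    in the volume lie within `threshold` of each other."""
    pos = np.arange(n_slices) / (n_slices - 1)  # normalized slice positions in [0, 1]
    d = np.abs(pos[:, None] - pos[None, :])     # pairwise position distances
    pairs = d <= threshold
    np.fill_diagonal(pairs, False)              # exclude self-pairs
    return pairs
```

The resulting boolean matrix can serve as the positive-pair mask in a contrastive loss, replacing augmentation-based pairing.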
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.