Consistency Regularisation in Varying Contexts and Feature Perturbations
for Semi-Supervised Semantic Segmentation of Histology Images
- URL: http://arxiv.org/abs/2301.13141v1
- Date: Mon, 30 Jan 2023 18:21:57 GMT
- Title: Consistency Regularisation in Varying Contexts and Feature Perturbations
for Semi-Supervised Semantic Segmentation of Histology Images
- Authors: Raja Muhammad Saad Bashir, Talha Qaiser, Shan E Ahmed Raza, Nasir M.
Rajpoot
- Abstract summary: We present a consistency-based semi-supervised learning (SSL) approach that can help mitigate this challenge.
SSL models may also be susceptible to changing contexts and feature perturbations, exhibiting poor generalisation due to the limited training data.
We show that cross-consistency training makes the encoder features invariant to different perturbations and improves the prediction confidence.
- Score: 14.005379068469361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic segmentation of various tissue and nuclei types in histology images
is fundamental to many downstream tasks in the area of computational pathology
(CPath). In recent years, Deep Learning (DL) methods have been shown to perform
well on segmentation tasks, but they generally require a large amount of
pixel-wise annotated data. Pixel-wise annotation often demands expert knowledge
and time, making it laborious and costly to obtain. In this paper, we present a
consistency-based semi-supervised learning (SSL) approach that helps mitigate
this challenge by exploiting a large amount of unlabelled data for model
training, thus alleviating the need for a large annotated dataset. However, SSL
models may also be susceptible to changing contexts and feature perturbations,
exhibiting poor generalisation due to the limited training data.
We propose an SSL method that learns robust features from both labelled and
unlabelled images by enforcing consistency against varying contexts and feature
perturbations. The proposed method incorporates context-aware consistency by
contrasting pairs of overlapping images in a pixel-wise manner across changing
contexts, resulting in robust, context-invariant features. We show that
cross-consistency training makes the encoder features invariant to different
perturbations and improves the prediction confidence. Finally, entropy
minimisation is employed to further boost the confidence of the final
prediction maps from unlabelled data. We conduct an extensive set of
experiments on two publicly available large datasets (BCSS and MoNuSeg) and
show superior performance compared to the state-of-the-art methods.
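The three unlabelled-data objectives described above lend themselves to a compact sketch. The PyTorch snippet below is a minimal illustration, not the authors' released code: the function names, the choice of KL divergence for the cross-consistency term, and the loss weights are all assumptions.

```python
import torch
import torch.nn.functional as F

def context_consistency(feat_a, feat_b):
    """Context-aware consistency (illustrative): pixel-wise agreement between
    encoder features of the shared region of two overlapping crops, so the
    same tissue seen under different contexts maps to similar features.
    feat_a, feat_b: (B, C, H, W) features cropped to the overlap region."""
    za = F.normalize(feat_a, dim=1)
    zb = F.normalize(feat_b, dim=1)
    # Maximise per-pixel cosine similarity between the two context views.
    return (1.0 - (za * zb).sum(dim=1)).mean()

def cross_consistency(main_logits, aux_logits):
    """Cross-consistency (illustrative): an auxiliary decoder fed a
    feature-perturbed encoder output should agree with the main decoder."""
    target = main_logits.detach().softmax(dim=1)
    return F.kl_div(aux_logits.log_softmax(dim=1), target, reduction="batchmean")

def entropy_minimisation(logits):
    """Sharpen predictions on unlabelled pixels by penalising their entropy."""
    p = logits.softmax(dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# Hypothetical combined unlabelled-data objective with illustrative weights:
# loss_u = context_consistency(fa, fb) \
#        + cross_consistency(y_main, y_aux) \
#        + 0.1 * entropy_minimisation(y_main)
```

Per the abstract, the consistency terms draw on both labelled and unlabelled images, while entropy minimisation targets the prediction maps from unlabelled data.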
Related papers
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Translation Consistent Semi-supervised Segmentation for 3D Medical Images [25.575275962514898]
3D medical image segmentation methods have been successful, but their dependence on large amounts of voxel-level annotations is a disadvantage.
Semi-supervised learning (SSL) addresses this issue by training models with a large unlabelled and a small labelled dataset.
We introduce Translation Consistent Co-training (TraCoCo), a consistency-learning SSL method (sketched after this entry).
arXiv Detail & Related papers (2022-03-28T06:31:39Z)
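As a rough illustration of the translation-consistency idea in the entry above (not the TraCoCo implementation; the crop geometry, the MSE loss, and the model signature are all assumptions):

```python
import torch.nn.functional as F

def translation_consistency(model, volume, dx=8, dy=8):
    """Illustrative translation consistency for 3D segmentation: predictions
    on two translated views of the same volume should agree on the region
    they share. volume: (B, C, D, H, W); the model is assumed to return
    per-voxel logits with the same spatial size as its input."""
    view_a = volume[..., :-dy, :-dx]           # crop towards the origin
    view_b = volume[..., dy:, dx:]             # same content, shifted view
    logits_a = model(view_a)[..., dy:, dx:]    # overlap as seen from view A
    logits_b = model(view_b)[..., :-dy, :-dx]  # overlap as seen from view B
    return F.mse_loss(logits_a.softmax(dim=1), logits_b.softmax(dim=1))
```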
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning (SWCL) framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z)
- Learning from Partially Overlapping Labels: Image Segmentation under Annotation Shift [68.6874404805223]
We propose several strategies for learning from partially overlapping labels in the context of abdominal organ segmentation.
We find that combining a semi-supervised approach with an adaptive cross-entropy loss can successfully exploit heterogeneously annotated data (see the sketch after this entry).
arXiv Detail & Related papers (2021-07-13T09:22:24Z)
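As promised above, a minimal sketch of handling partially overlapping label sets; a simplified stand-in, not the paper's adaptive cross-entropy, and all names here are illustrative:

```python
import torch.nn.functional as F

def partial_label_loss(logits, target_onehot, annotated):
    """logits: (B, K, H, W); target_onehot: (B, K, H, W) float in {0, 1};
    annotated: (K,) bool mask of classes this dataset actually labels.
    Per-class binary cross-entropy restricted to annotated classes, so
    'background' pixels of a dataset that never labels class k do not
    penalise predictions of class k."""
    bce = F.binary_cross_entropy_with_logits(
        logits, target_onehot, reduction="none")
    return bce[:, annotated].mean()
```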
- Semi-supervised Semantic Segmentation with Directional Context-aware Consistency [66.49995436833667]
We focus on the semi-supervised segmentation problem where only a small set of labeled data is provided with a much larger collection of totally unlabeled images.
A preferred high-level representation should capture the contextual information while not losing self-awareness.
We present the Directional Contrastive Loss (DC Loss) to enforce this consistency in a pixel-to-pixel manner (sketched below).
arXiv Detail & Related papers (2021-06-27T03:42:40Z)
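A simplified rendering of the directional, pixel-to-pixel consistency named above; the full DC Loss additionally contrasts against negative pixels, which this sketch omits, and all names here are illustrative:

```python
import torch.nn.functional as F

def directional_alignment(feat_a, feat_b, conf_a, conf_b):
    """For each pixel of the overlap between two crops, pull the feature from
    the less confident view towards a stop-gradient copy of the feature from
    the more confident view. feat_*: (B, C, H, W); conf_*: (B, H, W)."""
    za = F.normalize(feat_a, dim=1)
    zb = F.normalize(feat_b, dim=1)
    a_leads = (conf_a >= conf_b).unsqueeze(1).float()  # which view to trust
    target = a_leads * za.detach() + (1.0 - a_leads) * zb.detach()
    source = a_leads * zb + (1.0 - a_leads) * za
    # Cosine loss: gradients only flow into the less confident feature.
    return (1.0 - (source * target).sum(dim=1)).mean()
```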
- Weakly supervised segmentation with cross-modality equivariant constraints [7.757293476741071]
Weakly supervised learning has emerged as an appealing alternative to alleviate the need for large labeled datasets in semantic segmentation.
We present a novel learning strategy that leverages self-supervision in a multi-modal image scenario to significantly enhance the original class activation maps (CAMs).
Our approach outperforms relevant recent literature under the same learning conditions.
arXiv Detail & Related papers (2021-04-06T13:14:20Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts at various semantic granularities.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Deep Semantic Matching with Foreground Detection and Cycle-Consistency [103.22976097225457]
We address weakly supervised semantic matching based on a deep network.
We explicitly estimate the foreground regions to suppress the effect of background clutter.
We develop cycle-consistent losses to enforce that the predicted transformations across multiple images are geometrically plausible and consistent (sketched below).
arXiv Detail & Related papers (2020-03-31T22:38:09Z)
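The cycle-consistency constraint from the last entry can be illustrated with transformations modelled as affine matrices; this is a toy rendering, not the paper's formulation:

```python
import torch

def cycle_consistency(T_ab, T_bc, T_ac):
    """Predicted transformations across three images should compose
    consistently: going A -> B -> C should match going A -> C directly.
    T_*: (B, 3, 3) affine transformations in homogeneous coordinates."""
    composed = torch.bmm(T_bc, T_ab)        # apply A->B first, then B->C
    return (composed - T_ac).pow(2).mean()  # penalise deviation from A->C
```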