Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation
- URL: http://arxiv.org/abs/2012.09373v1
- Date: Thu, 17 Dec 2020 02:54:19 GMT
- Title: Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation
- Authors: Hongxiao Wang, Hao Zheng, Jianxu Chen, Lin Yang, Yizhe Zhang, Danny Z. Chen
- Abstract summary: Semi-supervised learning (SSL) based on generative methods has been proven to be effective in utilizing diverse image characteristics.
We propose a new data guided generative method for histopathology image segmentation by leveraging the unlabeled data distributions.
Our method is evaluated on glands and nuclei datasets.
- Score: 34.45302976822067
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automatic histopathology image segmentation is crucial to disease analysis.
Limited available labeled data hinders the generalizability of trained models
under the fully supervised setting. Semi-supervised learning (SSL) based on
generative methods has been proven to be effective in utilizing diverse image
characteristics. However, it has not been well explored what kinds of generated
images would be more useful for model training and how to use such images. In
this paper, we propose a new data guided generative method for histopathology
image segmentation by leveraging the unlabeled data distributions. First, we
design an image generation module. Image content and style are disentangled and
embedded in a clustering-friendly space to utilize their distributions. New
images are synthesized by sampling and cross-combining contents and styles.
Second, we devise an effective data selection policy for judiciously sampling
the generated images: (1) to make the generated training set better cover the
dataset, the clusters that are underrepresented in the original training set
are covered more; (2) to make the training process more effective, we identify
and oversample the images of "hard cases" in the data for which annotated
training data may be scarce. Our method is evaluated on glands and nuclei
datasets. We show that under both the inductive and transductive settings, our
SSL method consistently boosts the performance of common segmentation models
and attains state-of-the-art results.
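The abstract's data selection policy can be illustrated with a small sketch. This is not the authors' code: the clustering assignment, inverse-frequency weighting, and function name below are all assumptions used purely to make the two stated rules concrete (cover underrepresented clusters more; oversample hard cases).

```python
import math
import random

def select_generated_images(train_embeds, gen_embeds, centroids, hard_mask,
                            n_select, oversample=2.0, seed=0):
    """Hypothetical sketch of the paper's selection policy:
    (1) weight each generated image inversely to how often its cluster
        appears in the original training set, so underrepresented
        clusters are covered more;
    (2) boost the weight of generated images flagged as "hard cases"."""
    def nearest(x):
        # Assign an embedding to its closest cluster centroid.
        return min(range(len(centroids)),
                   key=lambda c: math.dist(x, centroids[c]))

    # Count how well each cluster is already covered by the training set.
    counts = [0] * len(centroids)
    for x in train_embeds:
        counts[nearest(x)] += 1

    # Inverse-frequency weight per generated image, boosted for hard cases.
    weights = []
    for x, hard in zip(gen_embeds, hard_mask):
        w = 1.0 / (counts[nearest(x)] + 1.0)
        weights.append(w * (oversample if hard else 1.0))

    # Weighted sampling without replacement (Efraimidis-Spirakis keys).
    rng = random.Random(seed)
    keys = [(rng.random() ** (1.0 / w), i) for i, w in enumerate(weights)]
    return [i for _, i in sorted(keys, reverse=True)[:n_select]]
```

Generated images falling in clusters that the labeled set barely covers, or flagged as hard, end up with proportionally higher sampling probability.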
Related papers
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Nucleus-aware Self-supervised Pretraining Using Unpaired Image-to-image Translation for Histopathology Images [3.8391355786589805]
We propose a novel nucleus-aware self-supervised pretraining framework for histopathology images.
The framework aims to capture the nuclear morphology and distribution information through unpaired image-to-image translation.
The experiments on 7 datasets show that the proposed pretraining method outperforms supervised ones on Kather classification, multiple instance learning, and 5 dense-prediction tasks.
arXiv Detail & Related papers (2023-09-14T02:31:18Z)
- CSP: Self-Supervised Contrastive Spatial Pre-Training for Geospatial-Visual Representations [90.50864830038202]
We present Contrastive Spatial Pre-Training (CSP), a self-supervised learning framework for geo-tagged images.
We use a dual-encoder to separately encode the images and their corresponding geo-locations, and use contrastive objectives to learn effective location representations from images.
CSP significantly boosts the model performance with 10-34% relative improvement with various labeled training data sampling ratios.
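The dual-encoder contrastive setup described above can be sketched as an InfoNCE-style objective over matched image/location embedding pairs. The function below is a toy illustration under that assumption, not CSP's actual implementation; the encoder outputs are taken as given.

```python
import math

def info_nce(img_embeds, loc_embeds, temperature=0.1):
    """Toy InfoNCE-style contrastive loss, assuming img_embeds[i] and
    loc_embeds[i] are a matched pair produced by separate image and
    location encoders (names and pairing are hypothetical)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]

    imgs = [norm(v) for v in img_embeds]
    locs = [norm(v) for v in loc_embeds]
    loss = 0.0
    for i, v in enumerate(imgs):
        # Similarity of image i to every location; the matched one should win.
        logits = [dot(v, u) / temperature for u in locs]
        m = max(logits)  # stabilize the log-sum-exp
        log_softmax = logits[i] - m - math.log(sum(math.exp(l - m) for l in logits))
        loss -= log_softmax
    return loss / len(imgs)
```

Minimizing this pulls each image embedding toward its own location embedding and away from the other locations in the batch.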
arXiv Detail & Related papers (2023-05-01T23:11:18Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Improving GAN Training via Feature Space Shrinkage [69.98365478398593]
We propose AdaptiveMix, which shrinks regions of training data in the image representation space of the discriminator.
Since it is intractable to directly bound the feature space, we propose to construct hard samples and narrow the feature distance between hard and easy samples.
The evaluation results demonstrate that our AdaptiveMix can facilitate the training of GANs and effectively improve the image quality of generated samples.
arXiv Detail & Related papers (2023-03-02T20:22:24Z)
- CellMix: A General Instance Relationship based Method for Data Augmentation Towards Pathology Image Classification [6.9596321268519326]
In pathology image analysis, obtaining and maintaining high-quality annotated samples is an extremely labor-intensive task.
We propose the CellMix framework, which employs a novel distribution-oriented in-place shuffle approach.
Our experiments in pathology image classification tasks demonstrate state-of-the-art (SOTA) performance on 7 distinct datasets.
arXiv Detail & Related papers (2023-01-27T03:17:35Z)
- Semi-Supervised Image Captioning by Adversarially Propagating Labeled Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We present extensive empirical results on both (1) image-based and (2) dense region-based captioning datasets, followed by a comprehensive analysis on the scarcely-paired dataset.
arXiv Detail & Related papers (2023-01-26T15:25:43Z)
- Cut-Paste Consistency Learning for Semi-Supervised Lesion Segmentation [0.20305676256390934]
Semi-supervised learning has the potential to improve the data-efficiency of training data-hungry deep neural networks.
We present a simple semi-supervised learning method for lesion segmentation tasks based on the ideas of cut-paste augmentation and consistency regularization.
arXiv Detail & Related papers (2022-10-01T04:43:54Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Self-Supervised Generative Style Transfer for One-Shot Medical Image Segmentation [10.634870214944055]
In medical image segmentation, supervised deep networks' success comes at the cost of requiring abundant labeled data.
We propose a novel volumetric self-supervised learning for data augmentation capable of synthesizing volumetric image-segmentation pairs.
Our work's central tenet benefits from a combined view of one-shot generative learning and the proposed self-supervised training strategy.
arXiv Detail & Related papers (2021-10-05T15:28:42Z)
- Uncertainty guided semi-supervised segmentation of retinal layers in OCT images [4.046207281399144]
We propose a novel uncertainty-guided semi-supervised learning based on a student-teacher approach for training the segmentation network.
The proposed framework is a key contribution and applicable for biomedical image segmentation across various imaging modalities.
arXiv Detail & Related papers (2021-03-02T23:14:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.