Online Easy Example Mining for Weakly-supervised Gland Segmentation from
Histology Images
- URL: http://arxiv.org/abs/2206.06665v1
- Date: Tue, 14 Jun 2022 07:53:03 GMT
- Title: Online Easy Example Mining for Weakly-supervised Gland Segmentation from
Histology Images
- Authors: Yi Li, Yiduo Yu, Yiwen Zou, Tianqi Xiang, Xiaomeng Li
- Abstract summary: Developing an AI-assisted gland segmentation method from histology images is critical for automatic cancer diagnosis and prognosis.
Existing weakly-supervised semantic segmentation methods in computer vision achieve degraded results for gland segmentation.
We propose a novel method Online Easy Example Mining (OEEM) that encourages the network to focus on credible supervision signals.
- Score: 10.832913704956253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing an AI-assisted gland segmentation method from histology images is
critical for automatic cancer diagnosis and prognosis; however, the high cost
of pixel-level annotations hinders its applications to broader diseases.
Existing weakly-supervised semantic segmentation methods in computer vision
achieve degraded results for gland segmentation, since the characteristics and
problems of glandular datasets differ from those of general object datasets.
We observe that, unlike natural images, the key problem with histology images
is the confusion of classes owing to morphological homogeneity and low color
contrast among different tissues. To this end, we propose a novel method,
Online Easy Example Mining (OEEM), that encourages the network to focus on
credible supervision signals rather than noisy ones, thereby mitigating the
influence of inevitable false predictions in pseudo-masks. According to the
characteristics of glandular datasets, we design a strong framework for gland
segmentation. Our results exceed many fully-supervised and weakly-supervised
methods for gland segmentation by 4.4% and 6.04% in mIoU,
respectively. Code is available at https://github.com/xmed-lab/OEEM.
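The abstract describes OEEM only at a high level. As an illustration, the sketch below shows one plausible way to implement online easy example mining: weighting a per-pixel cross-entropy loss by the network's own confidence in the pseudo-label, so that credible ("easy") pixels dominate training while likely-noisy pixels are suppressed. The function name, the gamma exponent, and the normalization are illustrative assumptions, not the authors' exact formulation; see the linked repository for the official code.

```python
import torch
import torch.nn.functional as F

def oeem_style_loss(logits, pseudo_mask, ignore_index=255, gamma=1.0):
    """Confidence-weighted cross-entropy against a noisy pseudo-mask.

    logits:      (B, C, H, W) raw segmentation outputs
    pseudo_mask: (B, H, W) integer labels derived from CAMs (may contain errors)
    gamma:       sharpening exponent for the per-pixel weights (assumed knob)
    """
    # Per-pixel cross-entropy; ignored positions contribute zero loss.
    ce = F.cross_entropy(logits, pseudo_mask, ignore_index=ignore_index,
                         reduction="none")                        # (B, H, W)

    with torch.no_grad():
        valid = pseudo_mask != ignore_index
        target = pseudo_mask.clone()
        target[~valid] = 0                                        # placeholder index for gather
        probs = logits.softmax(dim=1)                             # (B, C, H, W)
        # Confidence of the current network in the pseudo-label at each pixel.
        conf = probs.gather(1, target.unsqueeze(1)).squeeze(1)    # (B, H, W)
        # "Easy" (high-confidence) pixels get larger weights; noisy ones are suppressed.
        weight = (conf ** gamma) * valid.float()
        weight = weight / (weight.sum() + 1e-6)                   # normalize over the batch

    return (weight * ce).sum()
```

In a typical weakly-supervised pipeline of this kind, pseudo_mask would come from thresholded class activation maps produced by a patch-level classifier, and the segmentation network would be trained with such a loss in place of plain cross-entropy.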
Related papers
- Semantic Segmentation Refiner for Ultrasound Applications with Zero-Shot Foundation Models [1.8142288667655782]
We propose a prompt-less segmentation method harnessing the ability of segmentation foundation models to segment abstract shapes.
Our method's advantages are demonstrated in experiments on a small-scale musculoskeletal ultrasound image dataset.
arXiv Detail & Related papers (2024-04-25T04:21:57Z) - COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal by using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z) - Ultrasound Image Segmentation of Thyroid Nodule via Latent Semantic
Feature Co-Registration [12.211161441766532]
This paper proposes ASTN, a framework for thyroid nodule segmentation achieved through a new type of co-registration network.
By extracting latent semantic information from the atlas and target images, this framework can ensure the integrity of anatomical structure.
This paper also provides an atlas selection algorithm to mitigate the difficulty of co-registration.
arXiv Detail & Related papers (2023-10-13T16:18:48Z) - Multi-Level Global Context Cross Consistency Model for Semi-Supervised
Ultrasound Image Segmentation with Diffusion Model [0.0]
We propose a framework that uses images generated by a Latent Diffusion Model (LDM) as unlabeled images for semi-supervised learning.
Our approach enables the effective transfer of probability distribution knowledge to the segmentation network, resulting in improved segmentation accuracy.
arXiv Detail & Related papers (2023-05-16T14:08:24Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion-region inpainting (a minimal layout sketch appears after this list).
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms state-of-the-art semi-supervised methods, demonstrating its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Learning of Inter-Label Geometric Relationships Using Self-Supervised
Learning: Application To Gleason Grade Segmentation [4.898744396854313]
We propose a method to synthesize PCa histopathology images by learning the geometric relationships between different disease labels.
We use a weakly supervised segmentation approach that uses Gleason score to segment the diseased regions.
The resulting segmentation map is used to train a Shape Restoration Network (ShaRe-Net) to predict missing mask segments.
arXiv Detail & Related papers (2021-10-01T13:47:07Z) - PSGR: Pixel-wise Sparse Graph Reasoning for COVID-19 Pneumonia
Segmentation in CT Images [83.26057031236965]
We propose a pixel-wise sparse graph reasoning (PSGR) module to enhance the modeling of long-range dependencies for COVID-19 infected region segmentation in CT images.
The PSGR module avoids imprecise pixel-to-node projections and preserves the inherent information of each pixel for global reasoning.
The solution has been evaluated against four widely-used segmentation models on three public datasets.
arXiv Detail & Related papers (2021-08-09T04:58:23Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
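One entry above (Self-Supervised Correction Learning) describes a shared encoder feeding two independent decoders, one for segmentation and one for inpainting the masked lesion region. The minimal sketch below only illustrates that layout under stated assumptions; the layer sizes, depths, and module names are placeholders and are not taken from the paper.

```python
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    """Shared encoder with two independent decoders: one predicts segmentation
    logits, the other reconstructs (inpaints) the masked region. Layout sketch
    only; every layer size here is an assumption, not the paper's architecture."""

    def __init__(self, in_ch=3, num_classes=2, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

        def make_decoder(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(base * 2, base, 2, stride=2), nn.ReLU(inplace=True),
                nn.Conv2d(base, out_ch, 1),
            )

        self.seg_decoder = make_decoder(num_classes)  # segmentation logits
        self.inpaint_decoder = make_decoder(in_ch)    # reconstructed image

    def forward(self, x):
        feats = self.encoder(x)                       # shared representation
        return self.seg_decoder(feats), self.inpaint_decoder(feats)


# Example: both heads operate on the same encoder features.
if __name__ == "__main__":
    net = DualTaskNet()
    seg, recon = net(torch.randn(1, 3, 64, 64))
    print(seg.shape, recon.shape)                     # (1, 2, 64, 64) (1, 3, 64, 64)
```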
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.