Multiple Instance Learning with Mixed Supervision in Gleason Grading
- URL: http://arxiv.org/abs/2206.12798v1
- Date: Sun, 26 Jun 2022 06:28:44 GMT
- Title: Multiple Instance Learning with Mixed Supervision in Gleason Grading
- Authors: Hao Bian, Zhuchen Shao, Yang Chen, Yifeng Wang, Haoqian Wang, Jian
Zhang, Yongbing Zhang
- Abstract summary: We propose a mixed supervision Transformer based on the multiple instance learning framework.
The model utilizes both slide-level and instance-level labels to achieve more accurate Gleason grading at the slide level.
We achieve state-of-the-art performance on the SICAPv2 dataset, and visual analysis shows accurate instance-level predictions.
- Score: 19.314029297579577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of computational pathology, deep learning methods for
Gleason grading from whole slide images (WSIs) have excellent prospects.
Since WSIs are extremely large, annotations usually consist of only a
slide-level label or limited pixel-level labels. The current mainstream
approach adopts multiple instance learning to predict Gleason grades. However,
methods that consider only the slide-level label ignore the limited
pixel-level labels, which contain rich local information. Conversely, methods
that additionally use pixel-level labels overlook their inaccuracy. To address
these problems, we propose a mixed supervision Transformer based on the
multiple instance learning framework. The model utilizes both slide-level and
instance-level labels to achieve more accurate Gleason grading at the slide
level. The impact of inaccurate instance-level labels is further reduced by
introducing an efficient random masking strategy into the mixed supervision
training process. We achieve state-of-the-art performance on the SICAPv2
dataset, and visual analysis shows accurate instance-level predictions. The
source code is available at https://github.com/bianhao123/Mixed_supervision.
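The mixed-supervision idea in the abstract (a slide-level loss always applied, plus an instance-level loss whose possibly noisy labels are randomly masked each step) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation; the function names, the mean pooling, and the masking details are assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def mixed_supervision_loss(slide_logits, slide_label, inst_logits, inst_labels,
                           mask_ratio=0.5, rng=None):
    """Slide-level loss is always applied; the instance-level loss is computed
    only on a randomly kept subset of the (possibly noisy) instance labels,
    which reduces their impact as in the paper's random masking strategy."""
    rng = np.random.default_rng(0) if rng is None else rng
    slide_loss = cross_entropy(slide_logits[None, :], np.array([slide_label]))
    keep = rng.random(len(inst_labels)) >= mask_ratio  # random instance-label mask
    inst_loss = cross_entropy(inst_logits[keep], inst_labels[keep]) if keep.any() else 0.0
    return slide_loss + inst_loss
```

With `mask_ratio=1.0` every instance label is masked and only the slide-level supervision remains; with `mask_ratio=0.0` both supervision signals contribute.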
Related papers
- Superpixelwise Low-rank Approximation based Partial Label Learning for Hyperspectral Image Classification [19.535446654147126]
Insufficient prior knowledge of a captured hyperspectral image (HSI) scene may lead experts or automatic labeling systems to assign incorrect or ambiguous labels.
We propose a novel superpixelwise low-rank approximation (LRA)-based partial label learning method, namely SLAP.
arXiv Detail & Related papers (2024-05-27T12:26:49Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Distilling Self-Supervised Vision Transformers for Weakly-Supervised Few-Shot Classification & Segmentation [58.03255076119459]
We address the task of weakly-supervised few-shot image classification and segmentation by leveraging a Vision Transformer (ViT).
Our proposed method takes token representations from the self-supervised ViT and leverages their correlations, via self-attention, to produce classification and segmentation predictions.
Experiments on Pascal-5i and COCO-20i demonstrate significant performance gains in a variety of supervision settings.
arXiv Detail & Related papers (2023-07-07T06:16:43Z)
- Combining Metric Learning and Attention Heads For Accurate and Efficient Multilabel Image Classification [0.0]
We revisit two popular approaches to multilabel classification: transformer-based heads and labels relations information graph processing branches.
Although transformer-based heads are considered to achieve better results than graph-based branches, we argue that with the proper training strategy graph-based methods can demonstrate just a small accuracy drop.
arXiv Detail & Related papers (2022-09-14T12:06:47Z)
- Large Loss Matters in Weakly Supervised Multi-Label Classification [50.262533546999045]
We first regard unobserved labels as negative labels, casting the weakly supervised multi-label classification task into noisy multi-label classification.
We propose novel methods that reject or correct the large-loss samples to prevent the model from memorizing the noisy labels.
Our methodology works well, validating that properly handling large losses matters in weakly supervised multi-label classification.
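The large-loss rejection step described above can be illustrated with a small sketch: drop the fraction of per-label losses with the largest values, on the assumption that they stem from noisy (unobserved-as-negative) labels. This is a hedged NumPy illustration; the function name and the fixed rejection ratio are assumptions, not the paper's exact schedule.

```python
import numpy as np

def reject_large_losses(losses, reject_ratio=0.2):
    """Return a boolean keep-mask over per-label losses, rejecting the largest
    `reject_ratio` fraction, assumed to come from noisy labels."""
    n_reject = int(len(losses) * reject_ratio)
    if n_reject == 0:
        return np.ones(len(losses), dtype=bool)
    largest = np.argsort(losses)[-n_reject:]  # indices of the largest losses
    keep = np.ones(len(losses), dtype=bool)
    keep[largest] = False
    return keep
```

The kept mask would then gate the loss terms before backpropagation, so the model never fits the rejected (suspected-noisy) labels.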
arXiv Detail & Related papers (2022-06-08T08:30:24Z)
- Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels [86.17081952197788]
We propose to blend category-specific representation across different images to transfer information of known labels to complement unknown labels.
Experiments on the MS-COCO, Visual Genome, Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors.
arXiv Detail & Related papers (2022-03-04T07:56:16Z)
- DSNet: A Dual-Stream Framework for Weakly-Supervised Gigapixel Pathology Image Analysis [78.78181964748144]
We present a novel weakly-supervised framework for classifying whole slide images (WSIs).
WSIs are commonly processed by patch-wise classification with patch-level labels.
With image-level labels only, patch-wise classification would be sub-optimal due to inconsistency between the patch appearance and image-level label.
arXiv Detail & Related papers (2021-09-13T09:10:43Z)
- Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z)
- Learning from Pixel-Level Label Noise: A New Perspective for Semi-Supervised Semantic Segmentation [12.937770890847819]
We propose a graph based label noise detection and correction framework to deal with pixel-level noisy labels.
In particular, for the pixel-level noisy labels generated from weak supervision by Class Activation Maps (CAM), we train a clean segmentation model with strong supervision.
Finally, we adopt a superpixel-based graph to represent the relations of spatial adjacency and semantic similarity between pixels in one image.
arXiv Detail & Related papers (2021-03-26T03:23:21Z)
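The superpixel-based graph mentioned in the last entry, relating superpixels by spatial adjacency and semantic similarity, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name, the Euclidean-distance adjacency test, and both thresholds are illustrative, not the paper's construction.

```python
import numpy as np

def superpixel_graph(centroids, features, spatial_thresh=50.0, feat_thresh=0.5):
    """Build a symmetric adjacency matrix over superpixels, connecting pairs
    that are both spatially close (centroid distance) and semantically
    similar (cosine similarity of their features)."""
    n = len(centroids)
    adj = np.zeros((n, n), dtype=bool)
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T  # pairwise cosine similarity
    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(centroids[i] - centroids[j]) < spatial_thresh
            if close and sim[i, j] > feat_thresh:
                adj[i, j] = adj[j, i] = True
    return adj
```

Label propagation or noise correction would then operate along the edges of this graph, so that nearby, similar-looking superpixels receive consistent labels.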
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.