Transformer based multiple instance learning for weakly supervised
histopathology image segmentation
- URL: http://arxiv.org/abs/2205.08878v1
- Date: Wed, 18 May 2022 12:04:26 GMT
- Title: Transformer based multiple instance learning for weakly supervised
histopathology image segmentation
- Authors: Ziniu Qian, Kailu Li, Maode Lai, Eric I-Chao Chang, Bingzheng Wei,
Yubo Fan, Yan Xu
- Abstract summary: We propose a novel weakly supervised method for pixel-level segmentation in histopathology images.
The Transformer establishes relationships between instances, addressing the shortcoming that instances are treated as independent of each other in MIL.
Deep supervision is introduced to overcome the limitation of annotations in weakly supervised methods.
- Score: 7.449646821160063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Histopathological image segmentation algorithms play a critical role
in computer-aided diagnosis technology. The development of weakly supervised
segmentation algorithms alleviates the time-consuming and labor-intensive
problem of medical image annotation. As a subset of weakly supervised learning,
Multiple Instance Learning (MIL) has proven effective for segmentation.
However, MIL lacks relational information between instances, which limits
further improvement of segmentation performance. In this paper, we propose a
novel weakly supervised method for pixel-level segmentation in histopathology
images, which introduces a Transformer into the MIL framework to capture global,
long-range dependencies. The multi-head self-attention in the Transformer
establishes relationships between instances, overcoming the shortcoming that
instances are treated as independent of each other in MIL. In addition, deep
supervision is introduced to overcome the limitation of annotations in weakly
supervised methods and to make better use of hierarchical information.
State-of-the-art results on a colon cancer dataset demonstrate the superiority
of the proposed method over other weakly supervised methods. We believe our
approach has potential for various applications in medical imaging.
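The core idea of the abstract, letting the instances of a MIL bag attend to one another before pooling so that each patch embedding reflects long-range context, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the single attention head, random weights, and mean pooling are all assumptions made for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_mil(instances, Wq, Wk, Wv):
    """Single-head self-attention over the instance embeddings of one bag.

    In plain MIL, instances (image patches) are scored independently; here
    each instance attends to every other instance, so its embedding carries
    inter-instance context before the bag-level pooling step.
    """
    Q, K, V = instances @ Wq, instances @ Wk, instances @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (n, n) pairwise relations
    contextual = attn @ V                           # context-aware instance embeddings
    bag_embedding = contextual.mean(axis=0)         # simple mean pooling for the bag
    return contextual, bag_embedding

rng = np.random.default_rng(0)
n, d = 16, 32                       # 16 patch instances, 32-dim embeddings
bag = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
ctx, emb = self_attention_mil(bag, Wq, Wk, Wv)
print(ctx.shape, emb.shape)         # (16, 32) (32,)
```

The paper's actual model additionally uses multiple heads and deep supervision over hierarchical features; this sketch only shows why attention removes the independence assumption between instances.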
Related papers
- Enhancing Weakly-Supervised Histopathology Image Segmentation with Knowledge Distillation on MIL-Based Pseudo-Labels [8.934328206473456]
We propose a novel distillation framework for histopathology image segmentation.
This framework introduces an iterative fusion-knowledge distillation strategy, enabling the student model to learn directly from the teacher's comprehensive outcomes.
arXiv Detail & Related papers (2024-07-14T17:15:47Z)
- Improving Vision Anomaly Detection with the Guidance of Language Modality [64.53005837237754]
This paper tackles the challenges for vision modality from a multimodal point of view.
We propose Cross-modal Guidance (CMG) to tackle the redundant information issue and sparse space issue.
To learn a more compact latent space for the vision anomaly detector, CMLE learns a correlation structure matrix from the language modality.
arXiv Detail & Related papers (2023-10-04T13:44:56Z)
- Self-supervised Semantic Segmentation: Consistency over Transformation [3.485615723221064]
We propose a novel self-supervised algorithm, S3-Net, which integrates a robust framework based on the proposed Inception Large Kernel Attention (I-LKA) modules.
We leverage deformable convolution as an integral component to effectively capture and delineate lesion deformations for superior object boundary definition.
Our experimental results on skin lesion and lung organ segmentation tasks show the superior performance of our method compared to the SOTA approaches.
arXiv Detail & Related papers (2023-08-31T21:28:46Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying super-pixels with Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- MSCDA: Multi-level Semantic-guided Contrast Improves Unsupervised Domain Adaptation for Breast MRI Segmentation in Small Datasets [5.272836235045653]
We propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation framework.
Our approach incorporates self-training with contrastive learning to align feature representations between domains.
In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts.
arXiv Detail & Related papers (2023-01-04T19:16:55Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- TransAttUnet: Multi-level Attention-guided U-Net with Transformer for Medical Image Segmentation [33.45471457058221]
This paper proposes a novel Transformer based medical image semantic segmentation framework called TransAttUnet.
In particular, we establish additional multi-scale skip connections between decoder blocks to aggregate the different semantic-scale upsampling features.
Our method consistently outperforms the state-of-the-art baselines.
arXiv Detail & Related papers (2021-07-12T09:17:06Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Towards Robust Partially Supervised Multi-Structure Medical Image Segmentation on Small-Scale Data [123.03252888189546]
We propose Vicinal Labels Under Uncertainty (VLUU) to bridge the methodological gaps in partially supervised learning (PSL) under data scarcity.
Motivated by multi-task learning and vicinal risk minimization, VLUU transforms the partially supervised problem into a fully supervised problem by generating vicinal labels.
Our research suggests a new research direction in label-efficient deep learning with partial supervision.
arXiv Detail & Related papers (2020-11-28T16:31:00Z)
- Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation [93.83369981759996]
We propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap.
Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation.
We propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning.
arXiv Detail & Related papers (2020-04-09T14:57:57Z)
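The equivariance constraint in the SEAM entry above can be illustrated with a toy NumPy sketch: for a spatial transformation T (here a horizontal flip) and a model's class activation map CAM, the consistency regularizer penalizes the difference between CAM(T(x)) and T(CAM(x)). The per-pixel linear `toy_cam` below is a hypothetical stand-in for a real network, chosen so that the sketch is self-contained; a linear map is exactly flip-equivariant, so its loss is zero.

```python
import numpy as np

def toy_cam(image, weights):
    """Stand-in for a network's class activation map: a per-pixel linear
    projection of the channels of a (C, H, W) image to an (H, W) map."""
    return np.tensordot(weights, image, axes=([0], [0]))

def equivariance_loss(image, weights, transform=lambda a: a[..., ::-1]):
    """SEAM-style consistency: the CAM of a transformed image should match
    the transformed CAM of the original image (transform = horizontal flip)."""
    cam_of_transformed = toy_cam(transform(image), weights)  # CAM(T(x))
    transformed_cam = transform(toy_cam(image, weights))     # T(CAM(x))
    return np.abs(cam_of_transformed - transformed_cam).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))   # toy 3-channel image
w = rng.normal(size=3)           # per-channel classifier weights
print(equivariance_loss(x, w))   # 0.0: a linear map is exactly flip-equivariant
```

In SEAM itself the CAM comes from a non-linear CNN, so this loss is non-zero and serves as a self-supervision signal during training.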
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.