Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot
Segment Anything Model for Molecular-empowered Learning
- URL: http://arxiv.org/abs/2308.05785v1
- Date: Thu, 10 Aug 2023 16:44:24 GMT
- Title: Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot
Segment Anything Model for Molecular-empowered Learning
- Authors: Xueyuan Li, Ruining Deng, Yucheng Tang, Shunxing Bao, Haichun Yang,
Yuankai Huo
- Abstract summary: Building an AI model requires pixel-level annotations, which are often unscalable and must be done by skilled domain experts.
In this paper, we explore the potential of bypassing pixel-level delineation by employing the recent segment anything model (SAM) on weak box annotation.
Our findings show that the proposed SAM-assisted molecular-empowered learning (SAM-L) can diminish the labeling efforts for lay annotators by only requiring weak box annotations.
- Score: 4.722512095568422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Precise identification of multiple cell classes in high-resolution Giga-pixel
whole slide imaging (WSI) is critical for various clinical scenarios. Building
an AI model for this purpose typically requires pixel-level annotations, which
are often unscalable and must be done by skilled domain experts (e.g.,
pathologists). However, these annotations can be prone to errors, especially
when distinguishing between intricate cell types (e.g., podocytes and mesangial
cells) using only visual inspection. Interestingly, a recent study showed that
lay annotators, when using extra immunofluorescence (IF) images for reference
(referred to as molecular-empowered learning), can sometimes outperform domain
experts in labeling. Despite this, the resource-intensive task of manual
delineation remains a necessity during the annotation process. In this paper,
we explore the potential of bypassing pixel-level delineation by employing the
recent segment anything model (SAM) on weak box annotation in a zero-shot
learning approach. Specifically, we harness SAM's ability to produce
pixel-level annotations from box annotations and utilize these SAM-generated
labels to train a segmentation model. Our findings show that the proposed
SAM-assisted molecular-empowered learning (SAM-L) can diminish the labeling
efforts for lay annotators by only requiring weak box annotations. This is
achieved without compromising annotation accuracy or the performance of the
deep learning-based segmentation. This research represents a significant
advancement in democratizing the annotation process for training pathological
image segmentation, relying solely on non-expert annotators.
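The core of the proposed pipeline, prompting SAM with a weak box annotation to obtain a pixel-level mask, can be illustrated with the public segment-anything API. The sketch below is not the authors' released code: the checkpoint path, image file, and box coordinates are placeholders, and a real pipeline would loop over every annotated box in a patch and merge the resulting masks into a label map.

```python
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint and model variant (assumptions, not from the paper).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# A stained tissue patch; "patch.png" is a placeholder file name.
image = cv2.cvtColor(cv2.imread("patch.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One weak box annotation per cell, in XYXY pixel coordinates (placeholder).
box = np.array([120, 80, 180, 140])

# Zero-shot: SAM converts the box prompt into a pixel-level mask,
# with no fine-tuning on pathology data.
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
pseudo_label = masks[0].astype(np.uint8)  # HxW binary pseudo label
```

The resulting pseudo label can then be stored alongside the patch and used as a training target for the downstream segmentation model, as the abstract describes.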
Related papers
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z)
- Medical Image Segmentation with SAM-generated Annotations [12.432602118806573]
We evaluate the performance of the Segment Anything Model (SAM) as an annotation tool for medical data.
We generate so-called "pseudo labels" on the Medical Segmentation Decathlon (MSD) computed tomography (CT) tasks.
The pseudo labels are then used in place of ground-truth labels to train a UNet model in a weakly supervised manner (a minimal training sketch follows below).
arXiv Detail & Related papers (2024-09-30T12:43:20Z)
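To make the weak-supervision step above concrete, here is a minimal, hypothetical training loop in which SAM-generated pseudo masks stand in for ground truth; the toy tensors and the tiny convolutional network (a stand-in for a real UNet) are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins: in practice these would be image patches and the
# SAM-generated pseudo masks produced in the box-to-mask step (assumption).
images = torch.rand(16, 3, 128, 128)
pseudo_masks = (torch.rand(16, 1, 128, 128) > 0.5).float()
loader = DataLoader(TensorDataset(images, pseudo_masks), batch_size=4)

# Placeholder network; a real UNet would replace this in practice.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        # Pseudo masks play the role of ground-truth labels.
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```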
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the usefulness of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Learning with minimal effort: leveraging in silico labeling for cell and nucleus segmentation [0.6465251961564605]
We propose to use In Silico Labeling (ISL) as a pretraining scheme for segmentation tasks.
By comparing segmentation performance across several training set sizes, we show that such a scheme can dramatically reduce the number of required annotations.
arXiv Detail & Related papers (2023-01-10T11:35:14Z)
- Semi-Supervised and Self-Supervised Collaborative Learning for Prostate 3D MR Image Segmentation [8.527048567343234]
Volumetric magnetic resonance (MR) image segmentation plays an important role in many clinical applications.
Deep learning (DL) has recently achieved state-of-the-art or even human-level performance on various image segmentation tasks.
In this work, we aim to train a semi-supervised and self-supervised collaborative learning framework for prostate 3D MR image segmentation.
arXiv Detail & Related papers (2022-11-16T11:40:13Z)
- Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithm development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need of external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersection of the pseudo-labels generated from different augmentations of the same image (see the sketch below).
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
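One plausible reading of the intersection idea above, sketched in code (an illustration, not the paper's exact regularizer): pixels whose pseudo-labels disagree across augmented views are marked with an ignore index and dropped from the training loss.

```python
import numpy as np

def intersect_pseudo_labels(pseudo_labels, ignore_index=255):
    """Keep a pixel's class only where all augmented views agree.

    pseudo_labels: list of HxW integer label maps, each predicted from a
    different augmentation of the same image and already mapped back to
    the original image grid (that mapping step is assumed to be done).
    Disagreeing pixels are set to ignore_index and excluded from the loss.
    """
    reference = pseudo_labels[0].copy()
    agree = np.ones_like(reference, dtype=bool)
    for other in pseudo_labels[1:]:
        agree &= (other == reference)
    reference[~agree] = ignore_index
    return reference

# Example: two views disagree on one pixel, which is then ignored.
a = np.array([[1, 2], [0, 1]])
b = np.array([[1, 2], [0, 2]])
print(intersect_pseudo_labels([a, b]))  # [[1 2] [0 255]]
```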
- Deep Active Learning for Joint Classification & Segmentation with Weak Annotator [22.271760669551817]
CNN visualization and interpretation methods, like class-activation maps (CAMs), are typically used to highlight the image regions linked to class predictions.
We propose an active learning framework, which progressively integrates pixel-level annotations during training.
Our results indicate that, by simply using random sample selection, the proposed approach can significantly outperform state-of-the-art CAMs and AL methods.
arXiv Detail & Related papers (2020-10-10T03:25:54Z)
- Attention Model Enhanced Network for Classification of Breast Cancer Image [54.83246945407568]
AMEN is formulated in a multi-branch fashion with a pixel-wise attention model and a classification submodule.
To focus more on subtle detail information, the sample image is enhanced by the pixel-wise attention map generated from the former branch.
Experiments conducted on three benchmark datasets demonstrate the superiority of the proposed method under various scenarios.
arXiv Detail & Related papers (2020-10-07T08:44:21Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Manifold-driven Attention Maps for Weakly Supervised Segmentation [9.289524646688244]
We propose a manifold-driven, attention-based network to enhance visually salient regions.
Our method generates superior attention maps directly during inference, without the need for extra computation.
arXiv Detail & Related papers (2020-04-07T00:03:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.