Exploring Unsupervised Cell Recognition with Prior Self-activation Maps
- URL: http://arxiv.org/abs/2308.11144v1
- Date: Tue, 22 Aug 2023 02:54:42 GMT
- Title: Exploring Unsupervised Cell Recognition with Prior Self-activation Maps
- Authors: Pingyi Chen, Chenglu Zhu, Zhongyi Shui, Jiatong Cai, Sunyi Zheng,
Shichuan Zhang, Lin Yang
- Abstract summary: Prior self-activation maps (PSM) are proposed to generate pseudo masks as training targets.
We evaluated our method on two histological datasets: MoNuSeg (cell segmentation) and BCData (multi-class cell detection).
- Score: 5.746092401615179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of supervised deep learning models on cell recognition tasks
relies on detailed annotations. Many previous works have managed to reduce the
dependency on labels. However, considering the large number of cells contained
in a patch, costly and inefficient labeling is still inevitable. To this end,
we explored label-free methods for cell recognition. Prior self-activation maps
(PSM) are proposed to generate pseudo masks as training targets. To be
specific, an activation network is trained with self-supervised learning. The
gradient information in the shallow layers of the network is aggregated to
generate prior self-activation maps. Afterward, a semantic clustering module is
introduced as a pipeline to transform PSMs into pixel-level semantic pseudo
masks for downstream tasks. We evaluated our method on two histological
datasets: MoNuSeg (cell segmentation) and BCData (multi-class cell detection).
Compared with other fully-supervised and weakly-supervised methods, our method
can achieve competitive performance without any manual annotations. Our simple
but effective framework can also achieve multi-class cell detection, which
existing unsupervised methods cannot do. The results show the potential of
PSMs, which may inspire further research addressing the hunger for labels in
the medical domain.
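Reading the pipeline in broad strokes (an activation map is produced first, then clustered into a pixel-level pseudo mask), the final binarization step could be sketched as below. This is a minimal illustration only: the toy activation values and the two-cluster split are assumptions for demonstration, not the authors' implementation, whose PSMs come from aggregated shallow-layer gradients of a self-supervised network.

```python
# Illustrative sketch: turn a toy "prior self-activation map" into a
# binary pseudo mask by clustering pixel activations into two groups
# (cell vs. background). Pure stdlib; values are made up.

def two_means(values, iters=20):
    """Simple 1-D k-means with k=2; returns the midpoint between centroids."""
    c0, c1 = min(values), max(values)  # initialize centroids at the extremes
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return (c0 + c1) / 2

def pseudo_mask(activation_map):
    """Binarize an activation map: 1 = likely cell, 0 = background."""
    flat = [v for row in activation_map for v in row]
    t = two_means(flat)
    return [[1 if v > t else 0 for v in row] for row in activation_map]

psm = [
    [0.1, 0.2, 0.9],
    [0.1, 0.8, 0.9],
    [0.0, 0.1, 0.2],
]
print(pseudo_mask(psm))  # → [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

In the paper, the resulting pseudo masks then serve as training targets for the downstream segmentation or detection network.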
Related papers
- SelfAdapt: Unsupervised Domain Adaptation of Cell Segmentation Models [1.8485970721272897]
SelfAdapt is a method that enables the adaptation of pre-trained cell segmentation models without the need for labels.
We evaluate our method on the LiveCell and TissueNet datasets, demonstrating relative improvements in AP0.5 of up to 29.64% over baseline Cellpose.
arXiv Detail & Related papers (2025-08-15T11:31:48Z)
- UniCell: Universal Cell Nucleus Classification via Prompt Learning [76.11864242047074]
We propose a universal cell nucleus classification framework (UniCell)
It employs a novel prompt learning mechanism to uniformly predict the corresponding categories of pathological images from different dataset domains.
In particular, our framework adopts an end-to-end architecture for nuclei detection and classification, and utilizes flexible prediction heads for adapting various datasets.
arXiv Detail & Related papers (2024-02-20T11:50:27Z)
- Adaptive Self-Training for Object Detection [13.07105239116411]
We introduce our method Self-Training for Object Detection (ASTOD)
ASTOD determines a threshold value at no cost, based directly on the ground value of the score histogram.
We use different views of the unlabeled images during the pseudo-labeling step to reduce the number of missed predictions.
arXiv Detail & Related papers (2022-12-07T15:10:40Z)
- Unsupervised Dense Nuclei Detection and Segmentation with Prior Self-activation Map For Histology Images [5.3882963853819845]
We propose a self-supervised learning based approach with a Prior Self-activation Module (PSM)
PSM generates self-activation maps from the input images to avoid labeling costs and further produce pseudo masks for the downstream task.
Compared with other fully-supervised and weakly-supervised methods, our method can achieve competitive performance without any manual annotations.
arXiv Detail & Related papers (2022-10-14T14:34:26Z)
- Inferring the Class Conditional Response Map for Weakly Supervised Semantic Segmentation [27.269847900950943]
We propose a class-conditional inference strategy and an activation aware mask refinement loss function to generate better pseudo labels.
Our method achieves superior WSSS results without requiring re-training of the classifier.
arXiv Detail & Related papers (2021-10-27T09:43:40Z)
- Leveraging Auxiliary Tasks with Affinity Learning for Weakly Supervised Semantic Segmentation [88.49669148290306]
We propose a novel weakly supervised multi-task framework called AuxSegNet to leverage saliency detection and multi-label image classification as auxiliary tasks.
Inspired by their similar structured semantics, we also propose to learn a cross-task global pixel-level affinity map from the saliency and segmentation representations.
The learned cross-task affinity can be used to refine saliency predictions and propagate CAM maps to provide improved pseudo labels for both tasks.
arXiv Detail & Related papers (2021-07-25T11:39:58Z)
- Cell Detection from Imperfect Annotation by Pseudo Label Selection Using P-classification [9.080472817672264]
We propose a pseudo labeling approach for cell detection from imperfect annotated data.
A detection convolutional neural network (CNN) trained using such missing labeled data often produces over-detection.
Experiments using microscopy images for five different conditions demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2021-07-20T07:08:05Z)
- Refining Pseudo Labels with Clustering Consensus over Generations for Unsupervised Object Re-identification [84.72303377833732]
Unsupervised object re-identification targets at learning discriminative representations for object retrieval without any annotations.
We propose to estimate pseudo label similarities between consecutive training generations with clustering consensus and refine pseudo labels with temporally propagated and ensembled pseudo labels.
The proposed pseudo label refinery strategy is simple yet effective and can be seamlessly integrated into existing clustering-based unsupervised re-identification methods.
arXiv Detail & Related papers (2021-06-11T02:42:42Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Grid Cell Path Integration For Movement-Based Visual Object Recognition [0.0]
We show how grid cell-based path integration in a cortical network can support reliable recognition of objects given an arbitrary sequence of inputs.
Our network (GridCellNet) uses grid cell computations to integrate visual information and make predictions based on movements.
arXiv Detail & Related papers (2021-02-17T23:52:57Z)
- Weakly-Supervised Saliency Detection via Salient Object Subitizing [57.17613373230722]
We introduce saliency subitizing as the weak supervision since it is class-agnostic.
This allows the supervision to be aligned with the property of saliency detection.
We conduct extensive experiments on five benchmark datasets.
arXiv Detail & Related papers (2021-01-04T12:51:45Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.