SelfAdapt: Unsupervised Domain Adaptation of Cell Segmentation Models
- URL: http://arxiv.org/abs/2508.11411v1
- Date: Fri, 15 Aug 2025 11:31:48 GMT
- Title: SelfAdapt: Unsupervised Domain Adaptation of Cell Segmentation Models
- Authors: Fabian H. Reith, Jannik Franzen, Dinesh R. Palli, J. Lorenz Rumberger, Dagmar Kainmueller
- Abstract summary: SelfAdapt is a method that enables the adaptation of pre-trained cell segmentation models without the need for labels. We evaluate our method on the LiveCell and TissueNet datasets, demonstrating relative improvements in AP0.5 of up to 29.64% over baseline Cellpose.
- Score: 1.8485970721272897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have become the go-to method for biomedical instance segmentation. Generalist models like Cellpose demonstrate state-of-the-art performance across diverse cellular data, though their effectiveness often degrades on domains that differ from their training data. While supervised fine-tuning can address this limitation, it requires annotated data that may not be readily available. We propose SelfAdapt, a method that enables the adaptation of pre-trained cell segmentation models without the need for labels. Our approach builds upon student-teacher augmentation consistency training, introducing L2-SP regularization and label-free stopping criteria. We evaluate our method on the LiveCell and TissueNet datasets, demonstrating relative improvements in AP0.5 of up to 29.64% over baseline Cellpose. Additionally, we show that our unsupervised adaptation can further improve models that were previously fine-tuned with supervision. We release SelfAdapt as an easy-to-use extension of the Cellpose framework. The code for our method is publicly available at https://github.com/Kainmueller-Lab/self_adapt.
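The abstract names two label-free ingredients: an augmentation-consistency loss between a student and a teacher model, and L2-SP regularization, which anchors the adapted weights to the pre-trained starting point. A minimal NumPy sketch of these ideas follows; the function names, shapes, and the EMA teacher update are illustrative assumptions, not the SelfAdapt or Cellpose API.

```python
import numpy as np

def l2_sp_penalty(weights, pretrained_weights, alpha=1e-3):
    """L2-SP: penalize squared drift from the pre-trained starting point (SP),
    so unsupervised adaptation cannot wander far from the generalist model."""
    return alpha * sum(np.sum((w - w0) ** 2)
                       for w, w0 in zip(weights, pretrained_weights))

def consistency_loss(student_pred, teacher_pred):
    """Mean squared disagreement between student and teacher predictions on
    two differently augmented views of the same unlabeled image."""
    return float(np.mean((student_pred - teacher_pred) ** 2))

def ema_update(teacher_weights, student_weights, momentum=0.99):
    """A common choice in student-teacher schemes: the teacher tracks the
    student as an exponential moving average of its weights."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_weights, student_weights)]
```

The total adaptation objective would then be roughly `consistency_loss(...) + l2_sp_penalty(...)`, minimized over the student's weights only, with training halted by the paper's label-free stopping criteria.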
Related papers
- Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert models [52.502867924372275]
Mixture-of-Experts (MoE) models achieve efficient scaling through sparse expert activation, but often suffer from suboptimal routing decisions due to distribution shifts in deployment. We propose a data-free, online test-time framework that continuously adapts MoE routing decisions during text generation without external supervision or data.
arXiv Detail & Related papers (2025-10-16T16:24:36Z) - CellViT++: Energy-Efficient and Adaptive Cell Segmentation and Classification Using Foundation Models [1.7674154313605157]
CellViT++ is a framework for generalized cell segmentation in digital pathology. It is open source and features a user-friendly, web-based interface for visualization and annotation.
arXiv Detail & Related papers (2025-01-09T14:26:50Z) - SelfReplay: Adapting Self-Supervised Sensory Models via Adaptive Meta-Task Replay [22.59061034805928]
Self-supervised learning has emerged as a method for utilizing massive unlabeled data for pre-training models. We investigate the performance degradation that occurs when self-supervised models are fine-tuned in heterogeneous domains. We propose SelfReplay, a few-shot domain adaptation framework for personalizing self-supervised models.
arXiv Detail & Related papers (2024-03-29T08:48:07Z) - Few-shot adaptation for morphology-independent cell instance segmentation [3.6064695344878093]
We show how to adapt a cell instance segmentation model to very challenging bacteria datasets.
Our results show a significant boost in accuracy after adaptation.
arXiv Detail & Related papers (2024-02-27T02:54:22Z) - Exploring Unsupervised Cell Recognition with Prior Self-activation Maps [5.746092401615179]
Prior self-activation maps (PSM) are proposed to generate pseudo masks as training targets.
We evaluated our method on two histological datasets: MoNuSeg (cell segmentation) and BCData (multi-class cell detection).
arXiv Detail & Related papers (2023-08-22T02:54:42Z) - Point-supervised Single-cell Segmentation via Collaborative Knowledge Sharing [0.0]
This paper focuses on a weakly-supervised training setting for single-cell segmentation models.
Of particular interest is a proposed self-learning method called collaborative knowledge sharing.
This strategy achieves self-learning by sharing knowledge between a principal model and a very lightweight collaborator model.
arXiv Detail & Related papers (2023-04-20T23:22:41Z) - Iterative Loop Learning Combining Self-Training and Active Learning for Domain Adaptive Semantic Segmentation [1.827510863075184]
Self-training and active learning have been proposed to alleviate the annotation burden in domain adaptive semantic segmentation.
This paper proposes an iterative loop learning method combining Self-Training and Active Learning.
arXiv Detail & Related papers (2023-01-31T01:31:43Z) - Self-Distillation for Further Pre-training of Transformers [83.84227016847096]
We propose self-distillation as a regularization for a further pre-training stage.
We empirically validate the efficacy of self-distillation on a variety of benchmark datasets for image and text classification tasks.
arXiv Detail & Related papers (2022-09-30T02:25:12Z) - Seamless Iterative Semi-Supervised Correction of Imperfect Labels in Microscopy Images [57.42492501915773]
In-vitro tests are an alternative to animal testing for the toxicity of medical devices.
Human fatigue contributes to annotation errors, which makes the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI).
Our method successfully provides an adaptive early learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
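The class-discriminative alignment described above can be illustrated with a small sketch: group test-batch features by (pseudo-)class and penalize the distance between each class's batch mean and class-wise statistics stored from the source domain. This is a hedged simplification in NumPy; the function name, the use of pseudo-labels, and the plain squared-distance form are assumptions, not the CAFA formulation.

```python
import numpy as np

def class_aware_alignment_loss(features, pseudo_labels, source_means):
    """Sum of squared distances between each class's test-batch feature
    mean and the corresponding source-domain class mean.

    features      : (N, D) array of test-time features
    pseudo_labels : (N,) array of predicted class ids
    source_means  : dict mapping class id -> (D,) source-domain mean
    """
    loss = 0.0
    for c, mu_src in source_means.items():
        mask = pseudo_labels == c
        if not np.any(mask):
            continue  # class absent from this batch; nothing to align
        mu_test = features[mask].mean(axis=0)
        loss += float(np.sum((mu_test - mu_src) ** 2))
    return loss
```

Aligning statistics per class, rather than globally, is what keeps the adapted representations class-discriminative: features of different classes are pulled toward different source anchors instead of a single batch-wide target.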
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns the general knowledge of instruments and the fast adaptation ability through the video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z) - Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation [71.50526869670716]
We propose a two-step post-processing procedure, Split and Expand, to improve the conversion of segmentation maps to instances.
In the Split step, we split clumps of cells from the segmentation map into individual cell instances with the guidance of cell-center predictions.
In the Expand step, we find missing small cells using the cell-center predictions.
arXiv Detail & Related papers (2020-07-21T14:05:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.