Cross-domain Microscopy Cell Counting by Disentangled Transfer Learning
- URL: http://arxiv.org/abs/2211.14638v2
- Date: Mon, 20 Mar 2023 03:08:09 GMT
- Title: Cross-domain Microscopy Cell Counting by Disentangled Transfer Learning
- Authors: Zuhui Wang
- Abstract summary: We propose a cross-domain cell counting approach that requires only weak human annotation effort.
We use a public dataset consisting of synthetic cells as the source domain.
We transfer only the domain-agnostic knowledge to a new target domain of real cell images.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Microscopy images from different imaging conditions, organs, and tissues
often have numerous cells with various shapes on a range of backgrounds. As a
result, a deep learning model trained to count cells in a source domain
becomes unreliable when transferred to a new target domain, and the usual
remedy is costly manual annotation of training images in every new domain. In
this paper, we propose a cross-domain cell counting approach that requires
only weak human annotation effort. First, we design a cell counting network
that disentangles domain-specific knowledge from domain-agnostic knowledge in
cell images; the former generates domain-style images and the latter cell
density maps. We then devise an image synthesis technique capable
of generating a large number of synthetic images from a few labeled
target-domain images. Finally, we use a public dataset consisting of
synthetic cells as the source domain, where no manual annotation cost is
present, to train our cell counting network; subsequently, we transfer only the
domain-agnostic knowledge to a new target domain of real cell images. By
progressively refining the trained model using synthesized target-domain images
and several real annotated ones, our proposed cross-domain cell counting method
achieves performance competitive with state-of-the-art techniques that rely on
fully annotated training images in the target domain. We evaluate the efficacy
of our cross-domain approach on two target-domain datasets of real microscopy
cells, demonstrating that annotating only a few images in a new domain is
sufficient.
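As a concrete illustration of the disentanglement described above, here is a minimal PyTorch sketch: a shared encoder feeds a domain-agnostic head that predicts a cell density map (whose spatial sum is the cell count) and a domain-specific head that reconstructs the domain's image style. All module names, layer sizes, and the freezing scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the disentangling idea (illustrative only; all names,
# layer sizes, and the freezing scheme are assumptions, not the paper's
# actual implementation).
import torch
import torch.nn as nn

class DisentangledCounter(nn.Module):
    def __init__(self, in_ch: int = 1, feat_ch: int = 64):
        super().__init__()
        # Shared encoder over cell images.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Domain-agnostic branch: a cell density map whose spatial sum
        # is the predicted cell count.
        self.density_head = nn.Conv2d(feat_ch, 1, 1)
        # Domain-specific branch: reconstructs the domain's image style.
        self.style_head = nn.Conv2d(feat_ch, in_ch, 1)

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)
        return self.density_head(feats), self.style_head(feats)

model = DisentangledCounter()

# Stage 1 (source domain): train both branches on the synthetic-cell
# dataset, e.g. MSE on density maps plus a reconstruction loss on style.
# Stage 2 (target domain): transfer the domain-agnostic knowledge and adapt
# to the new style using a few labeled real images and images synthesized
# from them. One plausible scheme is to freeze the counting branch while
# the style branch adapts:
for p in model.density_head.parameters():
    p.requires_grad = False

x = torch.randn(2, 1, 64, 64)        # dummy batch of cell images
density, style = model(x)
counts = density.sum(dim=(1, 2, 3))  # per-image predicted cell counts
print(counts.shape)                  # torch.Size([2])
```

In the paper itself the refinement is progressive, alternating synthesized and annotated real target-domain images; the sketch only fixes the two-branch structure that makes such a transfer possible.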
Related papers
- Phrase Grounding-based Style Transfer for Single-Domain Generalized
Object Detection [109.58348694132091]
Single-domain generalized object detection aims to enhance a model's generalizability to multiple unseen target domains.
This is a practical yet challenging task as it requires the model to address domain shift without incorporating target domain data into training.
We propose a novel phrase grounding-based style transfer approach for the task.
arXiv Detail & Related papers (2024-02-02T10:48:43Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Few-shot Semantic Image Synthesis with Class Affinity Transfer [23.471210664024067]
We propose a transfer method that leverages a model trained on a large source dataset to improve the learning ability on small target datasets.
The class affinity matrix is introduced as a first layer to the source model to make it compatible with the target label maps.
We apply our approach to GAN-based and diffusion-based architectures for semantic synthesis.
arXiv Detail & Related papers (2023-04-05T09:24:45Z) - Domain-invariant Prototypes for Semantic Segmentation [30.932130453313537]
We present an easy-to-train framework that learns domain-invariant prototypes for domain adaptive semantic segmentation.
Our method involves only one-stage training and does not require large-scale unannotated target images for training.
arXiv Detail & Related papers (2022-08-12T02:21:05Z) - Unsupervised Domain Adaptation with Contrastive Learning for OCT
Segmentation [49.59567529191423]
We propose a novel semi-supervised learning framework for segmentation of volumetric images from new unlabeled domains.
We jointly use supervised and contrastive learning, also introducing a contrastive pairing scheme that leverages similarity between nearby slices in 3D.
arXiv Detail & Related papers (2022-03-07T19:02:26Z) - First steps on Gamification of Lung Fluid Cells Annotations in the
Flower Domain [6.470549137572311]
We propose an approach to gamify the task of annotating lung fluid cells from pathological whole slide images.
As this domain is unknown to non-expert annotators, we transform images of cells detected with a RetinaNet architecture to the domain of flower images.
In this more assessable domain, non-expert annotators can be (t)asked to annotate different kinds of flowers in a playful setting.
arXiv Detail & Related papers (2021-11-05T14:11:38Z) - Few-shot Image Generation via Cross-domain Correspondence [98.2263458153041]
Training generative models, such as GANs, on a target domain containing limited examples can easily result in overfitting.
In this work, we seek to utilize a large source domain for pretraining and transfer the diversity information from source to target.
To further reduce overfitting, we present an anchor-based strategy to encourage different levels of realism over different regions in the latent space.
arXiv Detail & Related papers (2021-04-13T17:59:35Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Graph Neural Networks for Unsupervised Domain Adaptation of
Histopathological Image Analytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation for histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - Few-Shot Microscopy Image Cell Segmentation [15.510258960276083]
Automatic cell segmentation in microscopy images works well with the support of deep neural networks trained with full supervision.
We propose the combination of three objective functions to segment the cells and move the segmentation results away from the classification boundary.
Our experiments on five public databases show promising results from 1- to 10-shot meta-learning.
arXiv Detail & Related papers (2020-06-29T12:12:10Z) - Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings [76.85673049332428]
Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning.
We propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately.
We demonstrate the effectiveness of our model on diverse tasks, including image captioning and text-to-image synthesis.
arXiv Detail & Related papers (2020-02-16T19:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.