Classification Beats Regression: Counting of Cells from Greyscale
Microscopic Images based on Annotation-free Training Samples
- URL: http://arxiv.org/abs/2010.14782v2
- Date: Thu, 29 Oct 2020 20:35:45 GMT
- Title: Classification Beats Regression: Counting of Cells from Greyscale
Microscopic Images based on Annotation-free Training Samples
- Authors: Xin Ding, Qiong Zhang, William J. Welch
- Abstract summary: This work proposes a supervised learning framework to count cells from greyscale microscopic images without using annotated training images.
We formulate the cell counting task as an image classification problem, where the cell counts are taken as class labels.
To deal with the limitations of this formulation, we propose a simple but effective data augmentation (DA) method to synthesize images for the unseen cell counts.
- Score: 20.91256120719461
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern methods often formulate the counting of cells from microscopic images
as a regression problem and more or less rely on expensive, manually annotated
training images (e.g., dot annotations indicating the centroids of cells or
segmentation masks identifying the contours of cells). This work proposes a
supervised learning framework based on classification-oriented convolutional
neural networks (CNNs) to count cells from greyscale microscopic images without
using annotated training images. In this framework, we formulate the cell
counting task as an image classification problem, where the cell counts are
taken as class labels. This formulation has its limitation when some cell
counts in the test stage do not appear in the training data. Moreover, the
ordinal relation among cell counts is not utilized. To deal with these
limitations, we propose a simple but effective data augmentation (DA) method to
synthesize images for the unseen cell counts. We also introduce an ensemble
method, which can not only moderate the influence of unseen cell counts but
also utilize the ordinal information to improve the prediction accuracy. This
framework outperforms many modern cell counting methods and won the data
analysis competition (Case Study 1: Counting Cells From Microscopic Images
https://ssc.ca/en/case-study/case-study-1-counting-cells-microscopic-images) of
the 47th Annual Meeting of the Statistical Society of Canada (SSC). Our code is
available at https://github.com/anno2020/CellCount_TinyBBBC005.
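Below is a minimal sketch, assuming a PyTorch-style setup, of the counts-as-class-labels formulation described in the abstract: a CNN over greyscale images with one output logit per possible count, trained with cross-entropy, and the predicted count taken as the argmax class. The architecture, the count upper bound, and all names (e.g., CellCountClassifier, max_count) are illustrative assumptions, not taken from the authors' released code.

```python
# Minimal sketch (not the authors' implementation): counting as classification,
# where each possible cell count is one class and the prediction is the argmax class.
import torch
import torch.nn as nn

class CellCountClassifier(nn.Module):
    """Small CNN over greyscale images; the output layer has one logit per count."""
    def __init__(self, max_count: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, max_count + 1)  # counts 0..max_count as classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CellCountClassifier(max_count=100)      # assumed upper bound on cell counts
criterion = nn.CrossEntropyLoss()               # cell counts used directly as class labels

images = torch.randn(8, 1, 96, 96)              # dummy greyscale batch
counts = torch.randint(0, 101, (8,))            # per-image cell counts as labels
loss = criterion(model(images), counts)
predicted_counts = model(images).argmax(dim=1)  # predicted count = most likely class
```

In the paper's framework, the data augmentation step for unseen counts and an ensemble of such classifiers that exploits the ordinal relation among counts are added on top of this basic formulation; those components are described in the paper and not reproduced here.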
Related papers
- Cell as Point: One-Stage Framework for Efficient Cell Tracking [54.19259129722988]
This paper proposes the novel end-to-end CAP framework to achieve efficient and stable cell tracking in one stage.
CAP abandons detection or segmentation stages and simplifies the process by exploiting the correlation among the trajectories of cell points to track cells jointly.
CAP demonstrates strong cell tracking performance while also being 10 to 55 times more efficient than existing methods.
arXiv Detail & Related papers (2024-11-22T10:16:35Z)
- IDCIA: Immunocytochemistry Dataset for Cellular Image Analysis [0.5057850174013127]
We present a new annotated microscopic cellular image dataset to improve the effectiveness of machine learning methods for cellular image analysis.
Our dataset includes microscopic images of cells, and for each image, the cell count and the location of individual cells.
arXiv Detail & Related papers (2024-11-13T19:33:08Z)
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z)
- Multi-stream Cell Segmentation with Low-level Cues for Multi-modality Images [66.79688768141814]
We develop an automatic cell classification pipeline to label microscopy images.
We then train a classification model based on the category labels.
We deploy two types of segmentation models to segment cells with roundish and irregular shapes.
arXiv Detail & Related papers (2023-10-22T08:11:08Z)
- Seamless Iterative Semi-Supervised Correction of Imperfect Labels in Microscopy Images [57.42492501915773]
In-vitro tests are an alternative to animal testing for assessing the toxicity of medical devices.
Human fatigue contributes to annotation errors, which makes the use of deep learning appealing.
We propose Seamless Iterative Semi-Supervised correction of Imperfect labels (SISSI).
Our method provides an adaptive early-learning correction technique for object detection.
arXiv Detail & Related papers (2022-08-05T18:52:20Z)
- Edge-Based Self-Supervision for Semi-Supervised Few-Shot Microscopy Image Cell Segmentation [16.94384366469512]
We propose the prediction of edge-based maps for self-supervising the training of the unlabelled images.
In our experiments, we show that only a small number of annotated images, e.g. 10% of the original training set, is enough for our approach to reach similar performance as with the fully annotated databases on 1- to 10-shots.
arXiv Detail & Related papers (2022-08-03T14:35:00Z)
- EfficientCellSeg: Efficient Volumetric Cell Segmentation Using Context Aware Pseudocoloring [4.555723508665994]
We introduce a small convolutional neural network (CNN) for volumetric cell segmentation.
Our model is efficient and has an asymmetric encoder-decoder structure with very few parameters in the decoder.
Our method achieves top-ranking results, while our CNN model has up to 25x fewer parameters than other top-ranking methods.
arXiv Detail & Related papers (2022-04-06T18:02:15Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations among the following ones: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z)
- Split and Expand: An inference-time improvement for Weakly Supervised Cell Instance Segmentation [71.50526869670716]
We propose a two-step post-processing procedure, Split and Expand, to improve the conversion of segmentation maps to instances.
In the Split step, we split clumps of cells from the segmentation map into individual cell instances with the guidance of cell-center predictions.
In the Expand step, we find missing small cells using the cell-center predictions.
arXiv Detail & Related papers (2020-07-21T14:05:09Z)
- Learning to segment clustered amoeboid cells from brightfield microscopy via multi-task learning with adaptive weight selection [6.836162272841265]
We introduce a novel supervised technique for cell segmentation in a multi-task learning paradigm.
A combination of a multi-task loss, based on the region and cell boundary detection, is employed for an improved prediction efficiency of the network.
We observe an overall Dice score of 0.93 on the validation set, which is an improvement of over 15.9% over a recent unsupervised method and outperforms the popular supervised U-Net algorithm by at least 5.8% on average.
arXiv Detail & Related papers (2020-05-19T11:31:53Z)
- Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy [0.20999222360659608]
We present a method for the segmentation of touching cells in microscopy images.
By using a novel representation of cell borders, inspired by distance maps, our method can utilize not only touching cells but also close cells in the training process.
This representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data.
arXiv Detail & Related papers (2020-04-03T11:55:28Z)
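The last entry above represents cell borders with distance-map-inspired targets. As a rough illustration only (not that paper's exact representation), the sketch below derives a normalised per-cell distance map and an approximate touching-border map from an instance label mask using standard SciPy distance transforms; the function name distance_targets and the dilation radius are arbitrary choices.

```python
# Rough illustration (not the paper's exact representation): distance-map-style
# training targets derived from an instance label mask.
import numpy as np
from scipy import ndimage

def distance_targets(instance_mask: np.ndarray):
    """instance_mask: 2D array with 0 = background and 1..N = cell instance ids."""
    cell_dist = np.zeros(instance_mask.shape, dtype=np.float32)
    border = np.zeros(instance_mask.shape, dtype=bool)
    for cell_id in np.unique(instance_mask):
        if cell_id == 0:
            continue
        cell = instance_mask == cell_id
        # Distance to the cell border, normalised per cell (peaks at the centre).
        d = ndimage.distance_transform_edt(cell)
        if d.max() > 0:
            cell_dist[cell] = (d / d.max())[cell]
        # Pixels of neighbouring cells reached by a small dilation approximate
        # the touching/close borders that such representations emphasise.
        dilated = ndimage.binary_dilation(cell, iterations=2)
        border |= dilated & (instance_mask > 0) & ~cell
    return cell_dist, border.astype(np.float32)

# Example: two touching square "cells"
mask = np.zeros((32, 32), dtype=np.int32)
mask[4:16, 4:16] = 1
mask[16:28, 4:16] = 2
cell_dist, border_map = distance_targets(mask)
```

A segmentation CNN can then be trained to predict such continuous maps instead of binary masks, which is the general idea behind distance-based representations for separating touching cells.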
This list is automatically generated from the titles and abstracts of the papers listed on this site.