The Berkeley Single Cell Computational Microscopy (BSCCM) Dataset
- URL: http://arxiv.org/abs/2402.06191v1
- Date: Fri, 9 Feb 2024 05:10:53 GMT
- Title: The Berkeley Single Cell Computational Microscopy (BSCCM) Dataset
- Authors: Henry Pinkard, Cherry Liu, Fanice Nyatigo, Daniel A. Fletcher, Laura
Waller
- Abstract summary: We introduce the Berkeley Single Cell Computational Microscopy dataset.
This dataset contains over 12,000,000 images of 400,000 individual white blood cells.
- Score: 1.53744306569115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational microscopy, in which hardware and algorithms of an imaging
system are jointly designed, shows promise for making imaging systems that cost
less, perform more robustly, and collect new types of information. Often, the
performance of computational imaging systems, especially those that incorporate
machine learning, is sample-dependent. Thus, standardized datasets are an
essential tool for comparing the performance of different approaches. Here, we
introduce the Berkeley Single Cell Computational Microscopy (BSCCM) dataset,
which contains ~12,000,000 images of 400,000 individual white blood
cells. The dataset contains images captured with multiple illumination patterns
on an LED array microscope and fluorescent measurements of the abundance of
surface proteins that mark different cell types. We hope this dataset will
provide a valuable resource for the development and testing of new algorithms
in computational microscopy and computer vision with practical biomedical
applications.
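As a rough illustration of how a dataset organized this way might be accessed, the sketch below assumes a hypothetical HDF5 export in which each cell index maps to a stack of images (one per LED illumination pattern) plus a vector of per-cell fluorescence surface-marker intensities. The file name, dataset names, and shapes are illustrative assumptions, not the dataset's published API.
```python
# Minimal sketch of reading a hypothetical HDF5 export of a BSCCM-style dataset:
# per-cell image stacks (one image per LED illumination pattern) and a table of
# per-cell fluorescence surface-marker intensities. All names are assumptions.
import h5py
import numpy as np

def load_cell(path: str, cell_index: int):
    """Return (images, markers) for one cell from the assumed HDF5 layout."""
    with h5py.File(path, "r") as f:
        # Assumed shape: (num_cells, num_led_patterns, height, width)
        images = np.array(f["led_array_images"][cell_index])
        # Assumed shape: (num_cells, num_markers) of fluorescence intensities
        markers = np.array(f["fluorescence_markers"][cell_index])
    return images, markers

if __name__ == "__main__":
    imgs, marks = load_cell("bsccm_subset.h5", cell_index=0)  # hypothetical file
    print(imgs.shape, marks.shape)  # e.g. (num_patterns, H, W) and (num_markers,)
```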
Related papers
- Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology [2.7280901660033643]
This work explores the scaling properties of weakly supervised classifiers and self-supervised masked autoencoders (MAEs).
Our results show that ViT-based MAEs outperform weakly supervised classifiers on a variety of tasks, achieving as much as an 11.5% relative improvement when recalling known biological relationships curated from public databases.
We develop a new channel-agnostic MAE architecture (CA-MAE) that allows for inputting images of different numbers and orders of channels at inference time.
arXiv Detail & Related papers (2024-04-16T02:42:06Z) - Gravitational cell detection and tracking in fluorescence microscopy
data [0.18828620190012021]
We present a novel approach based on gravitational force fields that can compete with, and potentially outperform, modern machine learning models.
This method includes detection, segmentation, and tracking elements, with the results demonstrated on a Cell Tracking Challenge dataset.
arXiv Detail & Related papers (2023-12-06T14:08:05Z) - The TYC Dataset for Understanding Instance-Level Semantics and Motions
of Cells in Microstructures [29.29348484938194]
The trapped yeast cell (TYC) dataset is a novel dataset for understanding instance-level semantics and motions of cells in microstructures.
TYC offers ten times more instance annotations than the previous largest dataset, including both cells and microstructures.
arXiv Detail & Related papers (2023-08-23T13:10:33Z) - Optimizations of Autoencoders for Analysis and Classification of
Microscopic In Situ Hybridization Images [68.8204255655161]
We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model, for which we employ a type of Artificial Neural Network: Deep Learning Autoencoders.
arXiv Detail & Related papers (2023-04-19T13:45:28Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, to the extent that it can achieve the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - Learning multi-scale functional representations of proteins from
single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z) - Increasing a microscope's effective field of view via overlapped imaging
and machine learning [4.23935174235373]
This work demonstrates a multi-lens microscopic imaging system that overlaps multiple independent fields of view on a single sensor for high-efficiency automated specimen analysis.
arXiv Detail & Related papers (2021-10-10T22:52:36Z) - A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct these setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Learning Guided Electron Microscopy with Active Acquisition [8.181540928891913]
We show how to use deep learning to accelerate and optimize single-beam SEM acquisition of images.
Our algorithm rapidly collects an information-lossy image and then applies a novel learning method to identify a small subset of pixels to be collected at higher resolution.
We demonstrate the efficacy of this novel technique for active acquisition by speeding up the task of collecting connectomic datasets for neurobiology by up to an order of magnitude.
arXiv Detail & Related papers (2021-01-07T20:03:16Z) - Comparisons among different stochastic selection of activation layers
for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations from among the following: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign (a minimal sketch of this stochastic selection appears after this list).
arXiv Detail & Related papers (2020-11-24T01:53:39Z) - Towards an Automatic Analysis of CHO-K1 Suspension Growth in
Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
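To make the stochastic activation-selection idea from the ensemble paper above concrete, here is a minimal PyTorch sketch that builds each ensemble member with an activation drawn at random from a small pool. The pool is restricted to activations shipped with standard PyTorch; the network shape and prediction averaging are illustrative assumptions, not the paper's exact architecture.
```python
# Minimal sketch: an ensemble of small CNNs, each built with a randomly chosen
# activation function (stochastic selection of activation layers). Pool limited
# to activations available in PyTorch; the architecture itself is an assumption.
import random
import torch
import torch.nn as nn

ACTIVATION_POOL = [nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.ELU, nn.SiLU, nn.Mish]

def make_member(num_classes: int = 2) -> nn.Sequential:
    """One ensemble member with a randomly drawn activation after each conv layer."""
    act = random.choice(ACTIVATION_POOL)
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), act(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), act(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, num_classes),
    )

ensemble = [make_member() for _ in range(5)]

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    """Average softmax probabilities over all ensemble members."""
    with torch.no_grad():
        probs = [member(x).softmax(dim=1) for member in ensemble]
    return torch.stack(probs).mean(dim=0)
```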
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.