Fluorescent Neuronal Cells v2: Multi-Task, Multi-Format Annotations for
Deep Learning in Microscopy
- URL: http://arxiv.org/abs/2307.14243v1
- Date: Wed, 26 Jul 2023 15:14:10 GMT
- Title: Fluorescent Neuronal Cells v2: Multi-Task, Multi-Format Annotations for
Deep Learning in Microscopy
- Authors: Luca Clissa, Antonio Macaluso, Roberto Morelli, Alessandra Occhinegro,
Emiliana Piscitiello, Ludovico Taddei, Marco Luppi, Roberto Amici, Matteo
Cerri, Timna Hitrec, Lorenzo Rinaldi, Antonio Zoccoli
- Abstract summary: This dataset encompasses three image collections in which rodent neuronal cells' nuclei and cytoplasm are stained with diverse markers.
Alongside the images, we provide ground-truth annotations for several learning tasks, including semantic segmentation, object detection, and counting.
- Score: 44.62475518267084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fluorescent Neuronal Cells v2 is a collection of fluorescence microscopy
images and the corresponding ground-truth annotations, designed to foster
innovative research in the domains of Life Sciences and Deep Learning. This
dataset encompasses three image collections in which rodent neuronal cells'
nuclei and cytoplasm are stained with diverse markers to highlight their
anatomical or functional characteristics. Alongside the images, we provide
ground-truth annotations for several learning tasks, including semantic
segmentation, object detection, and counting. The contribution is two-fold.
First, given the variety of annotations and their accessible formats, we
envision our work facilitating methodological advancements in computer vision
approaches for segmentation, detection, feature learning, unsupervised and
self-supervised learning, transfer learning, and related areas. Second, by
enabling extensive exploration and benchmarking, we hope Fluorescent Neuronal
Cells v2 will catalyze breakthroughs in fluorescence microscopy analysis and
promote cutting-edge discoveries in life sciences. The data are available at:
https://amsacta.unibo.it/id/eprint/7347
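To illustrate the counting task the dataset supports, the sketch below counts foreground blobs ("cells") in a binary segmentation mask via 4-connected flood fill. This is a minimal, generic example, not the authors' pipeline: the hard-coded `mask` is a synthetic stand-in for a binarized ground-truth mask from the dataset.

```python
# Minimal sketch of the counting task: count connected foreground blobs
# ("cells") in a binary mask. The mask below is a synthetic stand-in for
# a binarized ground-truth segmentation mask from the dataset.
mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
]

def count_cells(mask):
    """Count 4-connected components of foreground (1) pixels via flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                # new blob found
                stack = [(i, j)]          # flood-fill the whole blob
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

print(count_cells(mask))  # -> 3
```

In practice one would derive such a mask from the provided semantic-segmentation annotations and use a library routine (e.g. connected-component labeling) rather than hand-rolled flood fill; the logic, however, is the same.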
Related papers
- DiffKillR: Killing and Recreating Diffeomorphisms for Cell Annotation in Dense Microscopy Images [105.46086313858062]
We introduce DiffKillR, a novel framework that reframes cell annotation as the combination of archetype matching and image registration tasks.
We will discuss the theoretical properties of DiffKillR and validate it on three microscopy tasks, demonstrating its advantages over existing supervised, semi-supervised, and unsupervised methods.
arXiv Detail & Related papers (2024-10-04T00:38:29Z)
- ViKL: A Mammography Interpretation Framework via Multimodal Aggregation of Visual-knowledge-linguistic Features [54.37042005469384]
We announce MVKL, the first multimodal mammography dataset encompassing multi-view images, detailed manifestations and reports.
Based on this dataset, we focus on the challenging task of unsupervised pretraining.
We propose ViKL, a framework that synergizes Visual, Knowledge, and Linguistic features.
arXiv Detail & Related papers (2024-09-24T05:01:23Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Gravitational cell detection and tracking in fluorescence microscopy data [0.18828620190012021]
We present a novel approach based on gravitational force fields that can compete with, and potentially outperform, modern machine learning models.
This method includes detection, segmentation, and tracking elements, with the results demonstrated on a Cell Tracking Challenge dataset.
arXiv Detail & Related papers (2023-12-06T14:08:05Z)
- BriFiSeg: a deep learning-based method for semantic and instance segmentation of nuclei in brightfield images [0.0]
Non-stained brightfield images can be acquired on any microscope from both live and fixed samples.
Semantic segmentation of nuclei from brightfield images was achieved on four distinct cell lines.
Two distinct and effective strategies were employed for instance segmentation.
arXiv Detail & Related papers (2022-11-06T10:03:04Z)
- Learning multi-scale functional representations of proteins from single-cell microscopy data [77.34726150561087]
We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
We also propose a robust evaluation strategy to assess the quality of protein representations across different scales of biological function.
arXiv Detail & Related papers (2022-05-24T00:00:07Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in Multivariate Image Data [0.0]
Scope2Screen is a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images.
Our approach scales to analyzing 100GB images of 10^9 or more pixels per channel, containing millions of cells.
We present interactive lensing techniques that operate at single-cell and tissue levels.
arXiv Detail & Related papers (2021-10-10T18:34:13Z)
- Microscopic fine-grained instance classification through deep attention [7.50282814989294]
Fine-grained classification of microscopic image data with limited samples is an open problem in computer vision and biomedical imaging.
We propose a simple yet effective deep network that performs two tasks simultaneously in an end-to-end manner.
The result is a robust but lightweight end-to-end trainable deep network that yields state-of-the-art results.
arXiv Detail & Related papers (2020-10-06T15:29:58Z)
- Modality Attention and Sampling Enables Deep Learning with Heterogeneous Marker Combinations in Fluorescence Microscopy [5.334932400937323]
Fluorescence microscopy allows for a detailed inspection of cells, cellular networks, and anatomical landmarks by staining with a variety of carefully-selected markers visualized as color channels.
Despite the success of deep learning methods in other vision applications, their potential for fluorescence image analysis remains underexploited.
We propose Marker Sampling and Excite, a neural network approach with a modality sampling strategy and a novel attention module.
arXiv Detail & Related papers (2020-08-27T21:57:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.