CytoCLIP: Learning Cytoarchitectural Characteristics in Developing Human Brain Using Contrastive Language Image Pre-Training
- URL: http://arxiv.org/abs/2601.12282v1
- Date: Sun, 18 Jan 2026 06:42:24 GMT
- Title: CytoCLIP: Learning Cytoarchitectural Characteristics in Developing Human Brain Using Contrastive Language Image Pre-Training
- Authors: Pralaypati Ta, Sriram Venkatesaperumal, Keerthi Ram, Mohanasankar Sivaprakasam,
- Abstract summary: CytoCLIP is a suite of vision-language models derived from pre-trained Contrastive Language-Image Pre-Training frameworks. It comprises two model variants: one is trained using low-resolution whole-region images to understand the overall cytoarchitectural pattern of an area, and the other on high-resolution image tiles. The training data include 86 distinct regions for low-resolution images and 384 brain regions for high-resolution tiles.
- Score: 2.802396222178503
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The functions of different regions of the human brain are closely linked to their distinct cytoarchitecture, which is defined by the spatial arrangement and morphology of the cells. Identifying brain regions by their cytoarchitecture enables various scientific analyses of the brain. However, delineating these areas manually in brain histological sections is time-consuming and requires specialized knowledge. An automated approach is necessary to minimize the effort needed from human experts. To address this, we propose CytoCLIP, a suite of vision-language models derived from pre-trained Contrastive Language-Image Pre-Training (CLIP) frameworks to learn joint visual-text representations of brain cytoarchitecture. CytoCLIP comprises two model variants: one is trained using low-resolution whole-region images to understand the overall cytoarchitectural pattern of an area, and the other is trained on high-resolution image tiles for detailed cellular-level representation. The training dataset is created from Nissl-stained histological sections of developing fetal brains of different gestational weeks. It includes 86 distinct regions for low-resolution images and 384 brain regions for high-resolution tiles. We evaluate the model's understanding of the cytoarchitecture and generalization ability using region classification and cross-modal retrieval tasks. Multiple experiments are performed under various data setups, including data from samples of different ages and sectioning planes. Experimental results demonstrate that CytoCLIP outperforms existing methods. It achieves an F1 score of 0.87 for whole-region classification and 0.91 for high-resolution image tile classification.
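The CLIP-style training the abstract describes pairs each region image with a text description and optimizes a symmetric contrastive objective, so that matched image-text pairs score higher than all other pairings in a batch. Below is a minimal NumPy sketch of that symmetric InfoNCE loss; the function name, batch shapes, and temperature value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss used in CLIP-style training.

    Matched image/text pairs (same row index) are positives;
    every other pairing in the batch serves as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # (N, N) similarity matrix
    labels = np.arange(len(logits))           # diagonal entries are the positives

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

rng = np.random.default_rng(0)
loss = clip_contrastive_loss(rng.normal(size=(8, 512)), rng.normal(size=(8, 512)))
```

In the low-resolution variant the image rows would come from whole-region crops, and in the high-resolution variant from tiles; the text side encodes the region description in both cases.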
Related papers
- SPATIA: Multimodal Model for Prediction and Generation of Spatial Cell Phenotypes [39.45743286683448]
We introduce SPATIA, a multi-scale generative and predictive model for spatial transcriptomics. SPATIA learns cell-level embeddings by fusing image-derived morphological tokens and transcriptomic vector tokens. We benchmark SPATIA against 13 existing models across 12 individual tasks.
arXiv Detail & Related papers (2025-07-07T06:54:02Z) - CSBrain: A Cross-scale Spatiotemporal Brain Foundation Model for EEG Decoding [57.90382885533593]
We propose CSBrain, a Cross-scale Spatiotemporal Brain foundation model for generalized EEG decoding. We show that CSBrain consistently outperforms task-specific and foundation model baselines. These results establish cross-scale modeling as a key inductive bias and position CSBrain as a robust backbone for future brain-AI research.
arXiv Detail & Related papers (2025-06-29T03:29:34Z) - CISCA and CytoDArk0: a Cell Instance Segmentation and Classification method for histo(patho)logical image Analyses and a new, open, Nissl-stained dataset for brain cytoarchitecture studies [0.19791587637442667]
We propose a new deep learning framework, CISCA, for automatic cell instance segmentation and classification in histological slices. At the core of CISCA is a network architecture featuring a lightweight U-Net with three heads in the decoder. We evaluate CISCA against other state-of-the-art methods, demonstrating its versatility, robustness, and accuracy in segmenting and classifying cells.
arXiv Detail & Related papers (2024-09-06T10:34:06Z) - Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z) - NCIS: Deep Color Gradient Maps Regression and Three-Class Pixel Classification for Enhanced Neuronal Cell Instance Segmentation in Nissl-Stained Histological Images [0.5273938705774914]
This paper presents an end-to-end framework to automatically segment single neuronal cells in Nissl-stained histological images of the brain.
A U-Net-like architecture with an EfficientNet as the encoder and two decoding branches is exploited to regress four gradient color maps and classify pixels into contours between touching cells, cell bodies, or background.
The method was tested on images of the cerebral cortex and cerebellum, outperforming other recent deep-learning-based approaches for the instance segmentation of cells.
arXiv Detail & Related papers (2023-06-27T20:22:04Z) - Region-based Contrastive Pretraining for Medical Image Retrieval with Anatomic Query [56.54255735943497]
We introduce RegionMIR, a novel region-based contrastive pretraining method for medical image retrieval with anatomic queries.
arXiv Detail & Related papers (2023-05-09T16:46:33Z) - Multiclass Semantic Segmentation to Identify Anatomical Sub-Regions of Brain and Measure Neuronal Health in Parkinson's Disease [2.288652563296735]
Currently, no machine learning model is available to analyze sub-anatomical regions of the brain in 2D histological images.
In this study, we trained our best-fit model on approximately one thousand annotated 2D brain images stained with Nissl/Haematoxylin and Tyrosine Hydroxylase enzyme (TH, an indicator of dopaminergic neuron viability).
The model effectively detects two sub-regions, compacta (SNCD) and reticulata (SNr), in all the images.
arXiv Detail & Related papers (2023-01-07T19:35:28Z) - Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z) - Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Contrastive Representation Learning for Whole Brain Cytoarchitectonic Mapping in Histological Human Brain Sections [0.4588028371034407]
We propose a contrastive learning objective for encoding microscopic image patches into robust microstructural features.
We show that a model pre-trained using this learning task outperforms a model trained from scratch, as well as a model pre-trained on a recently proposed auxiliary task.
arXiv Detail & Related papers (2020-11-25T16:44:23Z) - Convolutional Neural Networks for cytoarchitectonic brain mapping at large scale [0.33727511459109777]
We present a new workflow for mapping cytoarchitectonic areas in large series of cell-body stained histological sections of human postmortem brains.
It is based on a Deep Convolutional Neural Network (CNN), which is trained on pairs of annotated section images, with a large number of unannotated sections in between.
The new workflow does not require preceding 3D-reconstruction of sections, and is robust against histological artefacts.
arXiv Detail & Related papers (2020-11-25T16:25:13Z) - Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge [53.48285637256203]
The iSeg-2019 challenge provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods.
By the time of writing, there are 30 automatic segmentation methods participating in iSeg 2019.
We review the 8 top-ranked teams by detailing their pipelines/implementations, presenting experimental results and evaluating performance in terms of the whole brain, regions of interest, and gyral landmark curves.
arXiv Detail & Related papers (2020-07-04T13:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.