Self-Supervised Graph Representation Learning for Neuronal Morphologies
- URL: http://arxiv.org/abs/2112.12482v3
- Date: Wed, 21 Jun 2023 08:25:30 GMT
- Title: Self-Supervised Graph Representation Learning for Neuronal Morphologies
- Authors: Marissa A. Weis, Laura Hansel, Timo Lüddecke, Alexander S. Ecker
- Abstract summary: We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
- Score: 75.38832711445421
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised graph representation learning has recently gained interest in
several application domains such as neuroscience, where modeling the diverse
morphology of cell types in the brain is one of the key challenges. It is
currently unknown how many excitatory cortical cell types exist and what their
defining morphological features are. Here we present GraphDINO, a purely
data-driven approach to learn low-dimensional representations of 3D neuronal
morphologies from unlabeled large-scale datasets. GraphDINO is a novel
transformer-based representation learning method for spatially-embedded graphs.
To enable self-supervised learning on transformers, we (1) developed data
augmentation strategies for spatially-embedded graphs, (2) adapted the
positional encoding and (3) introduced a novel attention mechanism,
AC-Attention, which combines attention-based global interaction between nodes
and classic graph convolutional processing. We show, in two different species
and across multiple brain areas, that this method yields morphological cell
type clusterings that are on par with manual feature-based classification by
experts, but without using prior knowledge about the structural features of
neurons. Moreover, it outperforms previous approaches on quantitative
benchmarks predicting expert labels. Our method could potentially enable
data-driven discovery of novel morphological features and cell types in
large-scale datasets. It is applicable beyond neuroscience in settings where
samples in a dataset are graphs and graph-level embeddings are desired.
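The abstract describes AC-Attention only at a high level: attention-based global interaction between nodes combined with classic graph-convolutional processing. As a rough illustration of how those two ingredients can be mixed (this is not the authors' actual formulation; the function name, the `lam` parameter, and the adjacency-bias form are all assumptions for the sketch), one option is to bias transformer attention scores with the graph adjacency matrix:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adjacency_biased_attention(X, A, Wq, Wk, Wv, lam=1.0):
    """Single-head attention whose scores are biased by the graph
    adjacency matrix, mixing global (transformer-style) interaction
    with local (graph-convolution-style) neighborhood aggregation.
    X: (n, d) node features; A: (n, n) adjacency matrix."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d)   # global pairwise interactions
    scores = scores + lam * A         # extra weight on graph neighbors
    return softmax(scores, axis=-1) @ V

# Toy example: a 4-node path graph with 3-d node features.
rng = np.random.default_rng(0)
n, d = 4, 3
X = rng.normal(size=(n, d))
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = adjacency_biased_attention(X, A, Wq, Wk, Wv)
print(out.shape)  # (4, 3)
```

Here `lam` controls how strongly graph neighbors are favored over purely content-based attention; the paper's actual AC-Attention may combine the attention and convolution terms differently.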
Related papers
- Revealing Cortical Layers In Histological Brain Images With Self-Supervised Graph Convolutional Networks Applied To Cell-Graphs [0.20971479389679332]
We introduce a self-supervised approach to detect layers in 2D Nissl-stained histological slices of the cerebral cortex.
A self-supervised graph convolutional network generates cell embeddings that encode morphological and structural traits of the cellular environment.
arXiv Detail & Related papers (2023-11-26T10:33:36Z)
- NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics [9.803179588247252]
We introduce NeuroGraph, a collection of graph-based neuroimaging datasets.
We demonstrate its utility for predicting multiple categories of behavioral and cognitive traits.
arXiv Detail & Related papers (2023-06-09T19:10:16Z)
- Graph Neural Operators for Classification of Spatial Transcriptomics Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of neural operators for predicting brain regions in mouse brain tissue samples.
The graph neural operator approach achieved an F1 score of nearly 72%, outperforming all baselines and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z)
- State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural networks (HGNNs) are a popular technique for modeling and analyzing heterogeneous graphs.
We develop a novel and robust heterogeneous graph contrastive learning approach, HGCL, which introduces two views guided respectively by node attributes and graph topology.
In this approach, we adopt distinct, well-suited attribute and topology fusion mechanisms in the two views, which helps mine relevant information from attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Hyperbolic Graph Neural Networks: A Review of Methods and Applications [55.5502008501764]
Graph neural networks generalize conventional neural networks to graph-structured data.
The performance of Euclidean models in graph-related learning remains limited by the representational capacity of Euclidean geometry.
Recently, hyperbolic space has gained increasing popularity in processing graph data with tree-like structure and power-law distribution.
arXiv Detail & Related papers (2022-02-28T15:08:48Z)
- Neuroplastic graph attention networks for nuclei segmentation in histopathology images [17.30043617044508]
We propose a novel architecture for semantic segmentation of cell nuclei.
The architecture is comprised of a novel neuroplastic graph attention network.
In experimental evaluation, our framework outperforms ensembles of state-of-the-art neural networks.
arXiv Detail & Related papers (2022-01-10T22:19:14Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Whole Brain Vessel Graphs: A Dataset and Benchmark for Graph Learning and Neuroscience (VesselGraph) [3.846749674808336]
We present an extendable dataset of whole-brain vessel graphs based on specific imaging protocols.
We benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification.
Our work paves a path towards advancing graph learning research into the field of neuroscience.
arXiv Detail & Related papers (2021-08-30T13:40:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.