MTNeuro: A Benchmark for Evaluating Representations of Brain Structure
Across Multiple Levels of Abstraction
- URL: http://arxiv.org/abs/2301.00345v1
- Date: Sun, 1 Jan 2023 04:54:03 GMT
- Title: MTNeuro: A Benchmark for Evaluating Representations of Brain Structure
Across Multiple Levels of Abstraction
- Authors: Jorge Quesada (1), Lakshmi Sathidevi (1), Ran Liu (1), Nauman Ahad
(1), Joy M. Jackson (1), Mehdi Azabou (1), Jingyun Xiao (1), Christopher
Liding (1), Matthew Jin (1), Carolina Urzay (1), William Gray-Roncal (2),
Erik C. Johnson (2), Eva L. Dyer (1) ((1) Georgia Institute of Technology,
(2) Johns Hopkins University Applied Physics Laboratory)
- Abstract summary: In brain mapping, learning to automatically parse images to build representations of both small-scale features and global properties is a crucial and open challenge.
Our benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large section of mouse brain.
We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are multiple scales of abstraction from which we can describe the same
image, depending on whether we are focusing on fine-grained details or a more
global attribute of the image. In brain mapping, learning to automatically
parse images to build representations of both small-scale features (e.g., the
presence of cells or blood vessels) and global properties of an image (e.g.,
which brain region the image comes from) is a crucial and open challenge.
However, most existing datasets and benchmarks for neuroanatomy consider only a
single downstream task at a time. To bridge this gap, we introduce a new
dataset, annotations, and multiple downstream tasks that provide diverse ways
to readout information about brain structure and architecture from the same
image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric,
micrometer-resolution X-ray microtomography images spanning a large
thalamocortical section of mouse brain, encompassing multiple cortical and
subcortical regions. We generated a number of different prediction challenges
and evaluated several supervised and self-supervised models for brain-region
prediction and pixel-level semantic segmentation of microstructures. Our
experiments not only highlight the rich heterogeneity of this dataset, but also
provide insights into how self-supervised approaches can be used to learn
representations that capture multiple attributes of a single image and perform
well on a variety of downstream tasks. Datasets, code, and pre-trained baseline
models are provided at: https://mtneuro.github.io/ .
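Below is a minimal, illustrative sketch of the kind of multi-task readout the benchmark evaluates: a single frozen encoder (e.g., one pretrained with self-supervision) feeding both an image-level brain-region classifier and a pixel-level microstructure segmentation head. This is not the official MTNeuro API; the TwoReadoutProbe module, the toy encoder, the fake input shapes, and the default class counts are assumptions made for illustration, and the released loaders and baselines at https://mtneuro.github.io/ define the actual tasks and interfaces.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoReadoutProbe(nn.Module):
    """Frozen backbone + linear region-classification head + 1x1-conv segmentation head."""

    def __init__(self, backbone, feat_dim, n_regions=4, n_microstructures=4):
        super().__init__()
        self.backbone = backbone.eval()          # pretrained encoder; representation stays fixed
        for p in self.backbone.parameters():
            p.requires_grad_(False)              # only the two lightweight heads are trained
        self.region_head = nn.Linear(feat_dim, n_regions)
        self.seg_head = nn.Conv2d(feat_dim, n_microstructures, kernel_size=1)

    def forward(self, x):
        fmap = self.backbone(x)                  # expects a (B, C, h, w) feature map
        pooled = fmap.mean(dim=(2, 3))           # global average pooling for the image-level task
        region_logits = self.region_head(pooled)                 # (B, n_regions)
        seg_logits = F.interpolate(self.seg_head(fmap),
                                   size=x.shape[-2:], mode="bilinear",
                                   align_corners=False)          # (B, n_microstructures, H, W)
        return region_logits, seg_logits


if __name__ == "__main__":
    # Toy stand-in encoder: maps a grayscale slice to a 64-channel feature map.
    encoder = nn.Sequential(
        nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
    )
    probe = TwoReadoutProbe(encoder, feat_dim=64)
    slices = torch.randn(2, 1, 256, 256)         # stand-in for X-ray microCT image slices
    region_logits, seg_logits = probe(slices)
    print(region_logits.shape)                   # torch.Size([2, 4])
    print(seg_logits.shape)                      # torch.Size([2, 4, 256, 256])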
Related papers
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embeddings.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information, achieving the same performance with as little as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- Multiclass Semantic Segmentation to Identify Anatomical Sub-Regions of Brain and Measure Neuronal Health in Parkinson's Disease [2.288652563296735]
Currently, no machine learning model is available for analyzing sub-anatomical regions of the brain in 2D histological images.
In this study, we trained our best-fit model on approximately one thousand annotated 2D brain images stained with Nissl/Haematoxylin and Tyrosine Hydroxylase enzyme (TH, an indicator of dopaminergic neuron viability).
The model is able to effectively detect two sub-regions, compacta (SNCD) and reticulata (SNr), in all the images.
arXiv Detail & Related papers (2023-01-07T19:35:28Z)
- Semantic Brain Decoding: from fMRI to conceptually similar image reconstruction of visual stimuli [0.29005223064604074]
We propose a novel approach to brain decoding that also relies on semantic and contextual similarity.
We employ an fMRI dataset of natural image vision and create a deep learning decoding pipeline inspired by the existence of both bottom-up and top-down processes in human vision.
We produce reconstructions of visual stimuli that match the original content very well on a semantic level, surpassing the previous state of the art.
arXiv Detail & Related papers (2022-12-13T16:54:08Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Multi-level Second-order Few-shot Learning [111.0648869396828]
We propose a Multi-level Second-order (MlSo) few-shot learning network for supervised or unsupervised few-shot image classification and few-shot action recognition.
We leverage so-called power-normalized second-order base learner streams combined with features that express multiple levels of visual abstraction.
We demonstrate respectable results on standard datasets such as Omniglot, mini-ImageNet, tiered-ImageNet, Open MIC, fine-grained datasets such as CUB Birds, Stanford Dogs and Cars, and action recognition datasets such as HMDB51, UCF101, and mini-MIT.
arXiv Detail & Related papers (2022-01-15T19:49:00Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Microscopic fine-grained instance classification through deep attention [7.50282814989294]
Fine-grained classification of microscopic image data with limited samples is an open problem in computer vision and biomedical imaging.
We propose a simple yet effective deep network that performs two tasks simultaneously in an end-to-end manner.
The result is a robust but lightweight end-to-end trainable deep network that yields state-of-the-art results.
arXiv Detail & Related papers (2020-10-06T15:29:58Z)
- Evolution of Image Segmentation using Deep Convolutional Neural Network: A Survey [0.0]
We take a glance at the evolution of both semantic and instance segmentation based on CNNs.
We also give a glimpse of some state-of-the-art panoptic segmentation models.
arXiv Detail & Related papers (2020-01-13T06:07:27Z)