The Sensorium competition on predicting large-scale mouse primary visual
cortex activity
- URL: http://arxiv.org/abs/2206.08666v1
- Date: Fri, 17 Jun 2022 10:09:57 GMT
- Title: The Sensorium competition on predicting large-scale mouse primary visual
cortex activity
- Authors: Konstantin F. Willeke (1 and 2 and 3), Paul G. Fahey (4 and 5),
Mohammad Bashiri (1 and 2 and 3), Laura Pede (3), Max F. Burg (1 and 2 and 3
and 6), Christoph Blessing (3), Santiago A. Cadena (1 and 3 and 6), Zhiwei
Ding (4 and 5), Konstantin-Klemens Lurz (1 and 2 and 3), Kayla Ponder (4 and
5), Taliah Muhammad (4 and 5), Saumil S. Patel (4 and 5), Alexander S. Ecker
(3 and 7), Andreas S. Tolias (4 and 5 and 8), Fabian H. Sinz (2 and 3 and 4
and 5) ((1) International Max Planck Research School for Intelligent Systems,
University of Tuebingen, Germany, (2) Institute for Bioinformatics and
Medical Informatics, University of Tuebingen, Germany, (3) Institute of
Computer Science and Campus Institute Data Science, University of Goettingen,
Germany, (4) Department of Neuroscience, Baylor College of Medicine, Houston,
USA, (5) Center for Neuroscience and Artificial Intelligence, Baylor College
of Medicine, Houston, USA, (6) Institute for Theoretical Physics, University
of Tuebingen, Germany, (7) Max Planck Institute for Dynamics and
Self-Organization, Goettingen, Germany, (8) Electrical and Computer
Engineering, Rice University, Houston, USA)
- Abstract summary: We propose the Sensorium benchmark competition to identify state-of-the-art models of the mouse visual system.
We collected a large-scale dataset from mouse primary visual cortex containing the responses of more than 28,000 neurons.
The benchmark challenge will rank models based on predictive performance for neuronal responses on a held-out test set.
- Score: 28.272130531998936
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The neural underpinning of the biological visual system is challenging to
study experimentally, in particular as the neuronal activity becomes
increasingly nonlinear with respect to visual input. Artificial neural networks
(ANNs) can serve a variety of goals for improving our understanding of this
complex system, not only serving as predictive digital twins of sensory cortex
for novel hypothesis generation in silico, but also incorporating bio-inspired
architectural motifs to progressively bridge the gap between biological and
machine vision. The mouse has recently emerged as a popular model system to
study visual information processing, but no standardized large-scale benchmark
to identify state-of-the-art models of the mouse visual system has been
established. To fill this gap, we propose the Sensorium benchmark competition.
We collected a large-scale dataset from mouse primary visual cortex containing
the responses of more than 28,000 neurons across seven mice stimulated with
thousands of natural images, together with simultaneous behavioral measurements
that include running speed, pupil dilation, and eye movements. The benchmark
challenge will rank models based on predictive performance for neuronal
responses on a held-out test set, and includes two tracks for model input
limited to either stimulus only (Sensorium) or stimulus plus behavior
(Sensorium+). We provide a starting kit to lower the barrier for entry,
including tutorials, pre-trained baseline models, and APIs with one-line
commands for data loading and submission. We would like to see this as a
starting point for regular challenges and data releases, and as a standard tool
for measuring progress in large-scale neural system identification models of
the mouse visual system and beyond.
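The benchmark ranks models by how well they predict neuronal responses on a held-out test set. As a rough, unofficial illustration of that kind of scoring, the sketch below computes a per-neuron correlation between predicted and observed responses in NumPy; the function name, array shapes, and toy data are assumptions for illustration only, not the competition's actual evaluation code or submission API (the starting kit provides its own loaders and baselines).
```python
# Minimal sketch (not the official scoring code): per-neuron correlation between
# predicted and observed responses on a held-out test set, the kind of predictive
# performance a Sensorium-style benchmark ranks models by.
import numpy as np

def per_neuron_correlation(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Pearson correlation per neuron for arrays of shape (n_trials, n_neurons)."""
    pred = pred - pred.mean(axis=0, keepdims=True)
    target = target - target.mean(axis=0, keepdims=True)
    cov = (pred * target).mean(axis=0)
    return cov / (pred.std(axis=0) * target.std(axis=0) + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_trials, n_neurons = 500, 1000  # toy sizes, not the real dataset
    responses = rng.poisson(1.0, size=(n_trials, n_neurons)).astype(float)
    # Placeholder predictor: per-neuron mean response plus noise, for illustration.
    predictions = responses.mean(axis=0) + rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
    scores = per_neuron_correlation(predictions, responses)
    print(f"mean correlation across neurons: {scores.mean():.3f}")
```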
Related papers
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it both accurately predicted simulated neuronal circuit activity, and also intrinsically inferred the underlying neural circuit connectivity, including direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- WaLiN-GUI: a graphical and auditory tool for neuron-based encoding [73.88751967207419]
Neuromorphic computing relies on spike-based, energy-efficient communication.
We develop a tool to identify suitable configurations for neuron-based encoding of sample-based data into spike trains.
The WaLiN-GUI is provided open source and with documentation.
arXiv Detail & Related papers (2023-10-25T20:34:08Z)
- NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics [10.294767093317404]
We introduce NeuroGraph, a collection of graph-based neuroimaging datasets.
We demonstrate its utility for predicting multiple categories of behavioral and cognitive traits.
arXiv Detail & Related papers (2023-06-09T19:10:16Z)
- V1T: large-scale mouse V1 response prediction using a Vision Transformer [1.5703073293718952]
We introduce V1T, a novel Vision Transformer based architecture that learns a shared visual and behavioral representation across animals.
We evaluate our model on two large datasets recorded from mouse primary visual cortex and outperform previous convolution-based models by more than 12.7% in prediction performance.
arXiv Detail & Related papers (2023-02-06T18:58:38Z)
- Perceptual-Score: A Psychophysical Measure for Assessing the Biological Plausibility of Visual Recognition Models [9.902669518047714]
This article proposes a new metric, Perceptual-Score, which is grounded in visual psychophysics.
We perform the procedure on twelve models that vary in degree of biological inspiration and complexity.
Each model's Perceptual-Score is compared against the state-of-the-art neural activity-based metric, Brain-Score.
arXiv Detail & Related papers (2022-10-16T20:34:26Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models called CycleGAN to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.