End-to-end topographic networks as models of cortical map formation and
human visual behaviour: moving beyond convolutions
- URL: http://arxiv.org/abs/2308.09431v1
- Date: Fri, 18 Aug 2023 10:03:51 GMT
- Title: see above
- Authors: Zejin Lu, Adrien Doerig, Victoria Bosch, Bas Krahmer, Daniel Kaiser,
Radoslaw M Cichy, Tim C Kietzmann
- Abstract summary: We develop All-Topographic Neural Networks (All-TNNs) to model the organisation of the primate visual system.
We show that All-TNNs align significantly better with human behaviour than previous state-of-the-art convolutional models due to their topographic nature.
All-TNNs thereby mark an important step forward in understanding the spatial organisation of the visual brain and how it mediates visual behaviour.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational models are an essential tool for understanding the origin and
functions of the topographic organisation of the primate visual system. Yet,
vision is most commonly modelled by convolutional neural networks that ignore
topography by learning identical features across space. Here, we overcome this
limitation by developing All-Topographic Neural Networks (All-TNNs). Trained on
visual input, several features of primate topography emerge in All-TNNs: smooth
orientation maps and cortical magnification in their first layer, and
category-selective areas in their final layer. In addition, we introduce a
novel dataset of human spatial biases in object recognition, which enables us
to directly link models to behaviour. We demonstrate that All-TNNs align
significantly better with human behaviour than previous state-of-the-art
convolutional models due to their topographic nature. All-TNNs thereby mark an
important step forward in understanding the spatial organisation of the visual
brain and how it mediates visual behaviour.
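The abstract's core architectural point is that convolutions share one kernel across all spatial positions, whereas a topographic network learns an independent kernel at every position (the paper additionally encourages smoothness between neighbouring kernels, which is not shown here). A minimal numpy sketch of that contrast, written by us as an illustration and not taken from the authors' code:

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): a convolutional
# layer shares one kernel across space; a topographic, locally connected
# layer learns a separate kernel at every output position.

def conv_layer(x, kernel):
    """Standard 'valid' convolution: one shared kernel for all positions."""
    h, w = x.shape
    k = kernel.shape[0]
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * kernel)
    return out

def locally_connected_layer(x, kernels):
    """Topographic variant: `kernels` has shape (out_h, out_w, k, k),
    i.e. an independent kernel per output position (no weight sharing)."""
    out_h, out_w, k, _ = kernels.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * kernels[i, j])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
shared = rng.standard_normal((3, 3))

# Tying every local kernel to the same weights recovers the convolution:
# weight sharing is the special case that topographic layers relax.
tied = np.broadcast_to(shared, (6, 6, 3, 3))
assert np.allclose(conv_layer(x, shared), locally_connected_layer(x, tied))
```

Because the kernels are untied, features can vary smoothly across the layer's surface, which is what allows orientation maps and category-selective areas to emerge spatially.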
Related papers
- TDSNNs: Competitive Topographic Deep Spiking Neural Networks for Visual Cortex Modeling [1.732019193517103]
We propose a novel Spatio-Temporal Constraints (STC) loss function for topographic deep spiking neural networks (TDSNNs).
Our results show that STC effectively generates representative topographic features across simulated visual cortical areas.
We also reveal that topographic organisation facilitates efficient and stable temporal information processing via the spike mechanism in TDSNNs.
arXiv Detail & Related papers (2025-08-06T09:53:39Z)
- Language Knowledge-Assisted Representation Learning for Skeleton-Based Action Recognition [71.35205097460124]
How humans understand and recognize the actions of others is a complex neuroscientific problem.
LA-GCN proposes a graph convolutional network that incorporates knowledge from large-scale language models (LLMs).
arXiv Detail & Related papers (2023-05-21T08:29:16Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We study GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs; here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
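The building block behind a VNN is a polynomial graph filter in which the sample covariance matrix plays the role of the graph shift operator, z = Σₖ hₖ Cᵏ x. A hedged numpy sketch of that filter; the coefficients and data below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Illustrative sketch of the coVariance filter underlying a VNN:
# a polynomial graph filter z = sum_k h[k] * C^k @ x, where the sample
# covariance matrix C serves as the graph shift operator.

def covariance_filter(C, x, h):
    """Apply the polynomial filter sum_k h[k] * C^k @ x."""
    z = np.zeros_like(x)
    Ck_x = x.copy()            # holds C^k @ x, starting with C^0 @ x = x
    for hk in h:
        z += hk * Ck_x
        Ck_x = C @ Ck_x        # advance to the next power of C
    return z

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))    # 200 samples, 5 anatomical features
C = np.cov(X, rowvar=False)          # sample covariance as the "graph"
x = rng.standard_normal(5)
z = covariance_filter(C, x, h=[0.5, 0.3, 0.2])
```

Because the filter is a fixed-degree polynomial in C, its behaviour depends on C only through its spectrum, which is what makes transferability across datasets with converging covariance matrices plausible.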
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Graph Neural Operators for Classification of Spatial Transcriptomics Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators to the prediction of brain regions in mouse brain tissue samples.
The graph neural operator approach achieved an F1 score of nearly 72%, outperforming all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z)
- Introducing topography in convolutional neural networks [11.595591429581546]
We propose a new topographic inductive bias for Convolutional Neural Networks (CNNs).
We benchmarked our new method on 4 datasets and 3 models across vision and audio tasks.
Our approach provides a new avenue for obtaining models that are more memory-efficient while achieving better accuracy.
arXiv Detail & Related papers (2022-10-28T13:20:31Z)
- Guiding Visual Attention in Deep Convolutional Neural Networks Based on Human Eye Movements [0.0]
Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision.
Recent advances in deep learning seem to decrease this similarity.
We investigate a purely data-driven approach to obtain useful models.
arXiv Detail & Related papers (2022-06-21T17:59:23Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Self-Supervised Graph Representation Learning for Neuronal Morphologies [75.38832711445421]
We present GraphDINO, a data-driven approach to learn low-dimensional representations of 3D neuronal morphologies from unlabeled datasets.
We show, in two different species and across multiple brain areas, that this method yields morphological cell type clusterings on par with manual feature-based classification by experts.
Our method could potentially enable data-driven discovery of novel morphological features and cell types in large-scale datasets.
arXiv Detail & Related papers (2021-12-23T12:17:47Z)
- Deep Reinforcement Learning Models Predict Visual Responses in the Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models achieve the highest neural response prediction accuracy in the early visual areas.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
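Contrastive pre-training frameworks of the GCC kind score an encoded query against one positive and many negative examples. A generic InfoNCE-style objective, sketched here by us in numpy as an assumption about the family of loss GCC belongs to, not its exact formulation:

```python
import numpy as np

# Generic InfoNCE-style contrastive loss (our illustrative sketch, not
# GCC's exact objective): the query should score high against its positive
# and low against negatives, via a temperature-scaled softmax.

def info_nce_loss(q, pos, negs, tau=0.07):
    """-log softmax similarity of the positive among all candidates."""
    logits = np.concatenate(([q @ pos], negs @ q)) / tau
    logits -= logits.max()              # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
q = rng.standard_normal(16)
q /= np.linalg.norm(q)
pos = q + 0.1 * rng.standard_normal(16)      # a perturbed view of q
pos /= np.linalg.norm(pos)
negs = rng.standard_normal((8, 16))          # unrelated embeddings
negs /= np.linalg.norm(negs, axis=1, keepdims=True)
loss = info_nce_loss(q, pos, negs)
```

Minimising this loss pulls embeddings of related graph views together and pushes unrelated ones apart, which is what makes the pre-trained encoder transferable across downstream graph tasks.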
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.