Coupling Visual Semantics of Artificial Neural Networks and Human Brain
Function via Synchronized Activations
- URL: http://arxiv.org/abs/2206.10821v1
- Date: Wed, 22 Jun 2022 03:32:17 GMT
- Title: Coupling Visual Semantics of Artificial Neural Networks and Human Brain
Function via Synchronized Activations
- Authors: Lin Zhao, Haixing Dai, Zihao Wu, Zhenxiang Xiao, Lu Zhang, David
Weizhong Liu, Xintao Hu, Xi Jiang, Sheng Li, Dajiang Zhu, Tianming Liu
- Abstract summary: We propose a novel computational framework, Synchronized Activations (Sync-ACT) to couple the visual representation spaces and semantics between ANNs and BNNs.
With this approach, we are able to semantically annotate the neurons in ANNs with biologically meaningful descriptions derived from human brain imaging.
- Score: 13.956089436100106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial neural networks (ANNs), originally inspired by biological neural
networks (BNNs), have achieved remarkable successes in many tasks such as
visual representation learning. However, whether there exist semantic
correlations or connections between the visual representations in ANNs and those
in BNNs remains largely unexplored due to both the lack of an effective tool to
link and couple two different domains, and the lack of a general and effective
framework of representing the visual semantics in BNNs such as human functional
brain networks (FBNs). To answer this question, we propose a novel
computational framework, Synchronized Activations (Sync-ACT), to couple the
visual representation spaces and semantics between ANNs and BNNs in human brain
based on naturalistic functional magnetic resonance imaging (nfMRI) data. With
this approach, we are able to semantically annotate the neurons in ANNs with
biologically meaningful descriptions derived from human brain imaging for the
first time. We evaluated the Sync-ACT framework on two publicly available
movie-watching nfMRI datasets. The experiments demonstrate a) the significant
correlation and similarity of the semantics between the visual representations
in FBNs and those in a variety of convolutional neural network (CNN) models;
b) the close relationship between a CNN's visual representation similarity to
BNNs and its performance in image classification tasks. Overall, our study
introduces a general and effective paradigm to couple ANNs and BNNs and
provides novel insights for future studies such as brain-inspired artificial
intelligence.
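The coupling idea can be illustrated with a minimal sketch: correlate each ANN unit's activation time series with each functional brain network's fMRI time course over the same naturalistic stimulus, then annotate each unit with its most synchronized FBN. The function and variable names below are illustrative, not taken from the paper's code.

```python
import numpy as np

def sync_score(ann_acts, fbn_signals):
    """Correlate each ANN unit's activation time series (T x U) with each
    functional brain network's fMRI time course (T x F) over the same
    naturalistic stimulus; returns a U x F Pearson correlation matrix."""
    a = (ann_acts - ann_acts.mean(0)) / ann_acts.std(0)
    b = (fbn_signals - fbn_signals.mean(0)) / fbn_signals.std(0)
    return a.T @ b / len(a)

rng = np.random.default_rng(0)
fbn = rng.standard_normal((200, 3))     # 3 FBN time courses over 200 time points
# 2 synthetic ANN units that track FBNs 0 and 1, plus noise
ann = fbn[:, [0, 1]] + 0.1 * rng.standard_normal((200, 2))
r = sync_score(ann, fbn)
# each unit is annotated with the FBN it is most synchronized with
assert r[0].argmax() == 0 and r[1].argmax() == 1
```

The actual framework operates on learned representations rather than synthetic signals, but the synchronization measure reduces to this kind of temporal correlation between the two activation spaces.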
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Modelling Multimodal Integration in Human Concept Processing with Vision-and-Language Models [7.511284868070148]
There is growing evidence that human meaning representations integrate linguistic and sensory-motor information.
We investigate whether the integration of multimodal information leads to representations that are more aligned with human brain activity.
Our results reveal that VLM representations correlate more strongly than language- and vision-only DNNs with activations in brain areas functionally related to language processing.
arXiv Detail & Related papers (2024-07-25T10:08:37Z)
- Co-learning synaptic delays, weights and adaptation in spiking neural networks [0.0]
Spiking neural networks (SNN) distinguish themselves from artificial neural networks (ANN) because of their inherent temporal processing and spike-based computations.
We show that data processing with spiking neurons can be enhanced by co-learning the connection weights with two other biologically inspired neuronal features.
arXiv Detail & Related papers (2023-09-12T09:13:26Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
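The coVariance filter at the heart of a VNN can be sketched as a polynomial in the sample covariance matrix, which plays the role of the graph shift operator in an ordinary GCN. The names and coefficients below are illustrative, assuming the standard polynomial-filter formulation.

```python
import numpy as np

def covariance_filter(X, x, h):
    """Apply a coVariance filter z = sum_k h[k] * C^k x, where C is the
    sample covariance matrix of the dataset X (n samples x m features)."""
    C = np.cov(X, rowvar=False)      # m x m sample covariance: the "graph"
    z, Ck_x = np.zeros_like(x), x.copy()
    for hk in h:
        z += hk * Ck_x               # accumulate h[k] * C^k x
        Ck_x = C @ Ck_x              # advance to C^(k+1) x
    return z

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))    # dataset defining the covariance graph
x = rng.standard_normal(4)           # one input signal over the 4 features
z = covariance_filter(X, x, h=[1.0, 0.5, 0.25])
assert z.shape == (4,)
```

Because the filter depends on the data only through C, performance transfers across datasets whose covariance matrices converge to a common limit, which is the transferability property the paper studies.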
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Coupling Artificial Neurons in BERT and Biological Neurons in the Human Brain [9.916033214833407]
This study introduces a novel, general, and effective framework to link transformer-based NLP models and neural activities in response to language.
Our experimental results demonstrate 1) the activations of ANs and BNs are significantly synchronized; 2) the ANs carry meaningful linguistic/semantic information and anchor to their BN signatures; 3) the anchored BNs are interpretable in a neurolinguistic context.
arXiv Detail & Related papers (2023-03-27T01:41:48Z)
- Predicting Brain Age using Transferable coVariance Neural Networks [119.45320143101381]
We have recently studied covariance neural networks (VNNs) that operate on sample covariance matrices.
In this paper, we demonstrate the utility of VNNs in inferring brain age using cortical thickness data.
Our results show that VNNs exhibit multi-scale and multi-site transferability for inferring brain age.
In the context of brain age in Alzheimer's disease (AD), our experiments show that VNN outputs are interpretable: brain age predicted using VNNs is significantly elevated for AD relative to healthy subjects.
arXiv Detail & Related papers (2022-10-28T18:58:34Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Deep Auto-encoder with Neural Response [8.797970797884023]
We propose a hybrid model, called deep auto-encoder with the neural response (DAE-NR)
The DAE-NR incorporates the information from the visual cortex into ANNs to achieve better image reconstruction and higher neural representation similarity between biological and artificial neurons.
Our experiments demonstrate that, only with joint learning, DAE-NRs can (i) improve the performance of image reconstruction and (ii) increase the representational similarity between biological and artificial neurons.
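The joint objective behind this kind of incorporation can be sketched as a weighted sum of a pixel reconstruction loss and a dissimilarity penalty between biological and artificial responses. The weighting λ and all names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dae_nr_loss(x, x_hat, bio_resp, art_resp, lam=0.5):
    """Joint objective: pixel reconstruction error plus a penalty for low
    correlation between biological and artificial neural responses."""
    recon = np.mean((x - x_hat) ** 2)
    sim = np.corrcoef(bio_resp, art_resp)[0, 1]  # representational similarity
    return recon + lam * (1.0 - sim)

x = np.array([0.2, 0.8, 0.5])
# perfect reconstruction and perfectly correlated responses give zero loss
perfect = dae_nr_loss(x, x, np.array([1., 2., 3.]), np.array([2., 4., 6.]))
assert np.isclose(perfect, 0.0)
```

Joint learning means both terms are optimized together, so the encoder is pressured to reconstruct images with representations that also track the recorded visual-cortex responses.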
arXiv Detail & Related papers (2021-11-30T11:44:17Z)
- Joint Embedding of Structural and Functional Brain Networks with Graph Neural Networks for Mental Illness Diagnosis [17.48272758284748]
Graph Neural Networks (GNNs) have become a de facto model for analyzing graph-structured data.
We develop a novel multiview GNN for multimodal brain networks.
In particular, we regard each modality as a view for brain networks and employ contrastive learning for multimodal fusion.
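Contrastive multimodal fusion of this kind is commonly implemented with an InfoNCE-style loss that pulls together embeddings of the same subject's two views and pushes apart mismatched pairs. The sketch below assumes that standard formulation; the embedding names are illustrative.

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Contrastive (InfoNCE) loss aligning paired embeddings of two
    modality views; matching rows are positives, the rest negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                    # scaled cosine similarities
    log_probs = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.mean(np.diag(log_probs))         # pull i-th pairs together

rng = np.random.default_rng(2)
func_emb = rng.standard_normal((8, 16))         # functional-view embeddings
aligned = info_nce(func_emb, func_emb)          # identical views: easy positives
shuffled = info_nce(func_emb, func_emb[::-1])   # mismatched pairs: hard
assert aligned < shuffled
```

In the multiview GNN setting, `z1` and `z2` would be GNN embeddings of the same subject's brain network under two modalities (e.g., structural and functional), and the loss drives the fusion.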
arXiv Detail & Related papers (2021-07-07T13:49:57Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
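The additive structure described above can be sketched directly: the prediction is a sum of outputs of independent per-feature networks, so each feature's contribution is readable on its own. The toy lambdas below stand in for small trained MLPs and are purely illustrative.

```python
import numpy as np

def nam_predict(x, feature_nets, bias=0.0):
    """Neural Additive Model: the prediction is a sum of the outputs of
    independent per-feature networks, so each feature's contribution
    can be read off directly."""
    contributions = [net(x[:, j]) for j, net in enumerate(feature_nets)]
    return bias + np.sum(contributions, axis=0), contributions

# toy per-feature "networks" standing in for small trained MLPs
feature_nets = [lambda f: 2.0 * f,      # learned shape function for feature 0
                lambda f: np.sin(f)]    # learned shape function for feature 1
x = np.array([[1.0, 0.0],
              [0.0, np.pi / 2]])
y, contribs = nam_predict(x, feature_nets)
assert np.allclose(y, [2.0, 1.0])       # 2*1 + sin(0) = 2 ; 2*0 + sin(pi/2) = 1
```

Because the model never mixes features before the final sum, plotting each shape function against its input gives an exact, not approximate, explanation of the prediction.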
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.