Coupling Artificial Neurons in BERT and Biological Neurons in the Human
Brain
- URL: http://arxiv.org/abs/2303.14871v1
- Date: Mon, 27 Mar 2023 01:41:48 GMT
- Title: Coupling Artificial Neurons in BERT and Biological Neurons in the Human
Brain
- Authors: Xu Liu, Mengyue Zhou, Gaosheng Shi, Yu Du, Lin Zhao, Zihao Wu, David
Liu, Tianming Liu, Xintao Hu
- Abstract summary: This study introduces a novel, general, and effective framework to link transformer-based NLP models and neural activities in response to language.
Our experimental results demonstrate that 1) the activations of ANs and BNs are significantly synchronized; 2) the ANs carry meaningful linguistic/semantic information and anchor to their BN signatures; and 3) the anchored BNs are interpretable in a neurolinguistic context.
- Score: 9.916033214833407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Linking computational natural language processing (NLP) models to neural
responses to language in the human brain, on the one hand, facilitates the effort
to disentangle the neural representations underpinning language perception; on
the other hand, it provides neurolinguistic evidence for evaluating and
improving NLP models. Mappings between an NLP model's representations of
linguistic input and the brain activities that input evokes are typically
deployed to reveal this symbiosis. However, two critical problems limit its
advancement: 1) the model's representations (artificial neurons, ANs) rely on
layer-level embeddings and thus lack fine granularity; 2) the brain activities
(biological neurons, BNs) are limited to neural recordings of isolated cortical
units (i.e., voxels/regions) and thus lack integration and interaction among
brain functions. To address these problems, in this study we 1) define
fine-grained ANs in transformer-based NLP models (BERT in this study) and
measure their temporal activations in response to input text sequences; 2) define
BNs as functional brain networks (FBNs) extracted from functional magnetic
resonance imaging (fMRI) data to capture functional interactions in the brain;
and 3) couple ANs and BNs by maximizing the synchronization of their temporal
activations. Our experimental results demonstrate that 1) the activations of ANs
and BNs are significantly synchronized; 2) the ANs carry meaningful
linguistic/semantic information and anchor to their BN signatures; and 3) the
anchored BNs are interpretable in a neurolinguistic context. Overall, our study
introduces a novel, general, and effective framework to link transformer-based
NLP models and neural activities in response to language, and it may provide
novel insights for future studies such as brain-inspired evaluation and
development of NLP models.
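As a rough sketch of step 3 above (not the paper's actual implementation), the AN-BN coupling can be illustrated with Pearson correlation as a stand-in for the synchronization measure: each AN's activation time course is compared against every FBN time course, and the AN is anchored to the best-matching BN. All names, shapes, and the random toy data below are hypothetical.

```python
import numpy as np

def couple_ans_bns(an_acts, bn_acts):
    """Anchor each artificial neuron (AN) to the functional brain
    network (BN) whose temporal activation it synchronizes with best.

    an_acts : (T, n_an) array of AN activations over T time points
    bn_acts : (T, n_bn) array of FBN time courses over the same T points
    Returns the (n_an, n_bn) correlation matrix and, for each AN, the
    index of its best-matching BN.
    """
    # z-score each column so the dot product below yields Pearson r
    az = (an_acts - an_acts.mean(0)) / an_acts.std(0)
    bz = (bn_acts - bn_acts.mean(0)) / bn_acts.std(0)
    corr = az.T @ bz / an_acts.shape[0]   # (n_an, n_bn) Pearson correlations
    anchors = corr.argmax(axis=1)         # BN "signature" index for each AN
    return corr, anchors

# Toy example: 200 time points, 12 ANs, 5 BNs
rng = np.random.default_rng(0)
an = rng.standard_normal((200, 12))
bn = rng.standard_normal((200, 5))
corr, anchors = couple_ans_bns(an, bn)
```

In practice the AN activations would come from BERT units responding to the stimulus text and the BN time courses from FBNs extracted from fMRI, aligned to the same time axis; significance of the synchronization would then be assessed statistically rather than by a raw argmax.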
Related papers
- Large Language Model-based FMRI Encoding of Language Functions for Subjects with Neurocognitive Disorder [53.575426835313536]
This paper explores language-related functional changes in older NCD adults using LLM-based fMRI encoding and brain scores.
We analyze the correlation between brain scores and cognitive scores at both whole-brain and language-related ROI levels.
Our findings reveal that higher cognitive abilities correspond to better brain scores, with correlations peaking in the middle temporal gyrus.
arXiv Detail & Related papers (2024-07-15T01:09:08Z)
- Enhancing learning in artificial neural networks through cellular heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Neural Erosion: Emulating Controlled Neurodegeneration and Aging in AI Systems [5.720259826430462]
We use IQ tests performed by Large Language Models (LLMs) to introduce the concept of "neural erosion".
This deliberate erosion involves ablating synapses or neurons, or adding Gaussian noise during or after training, resulting in a controlled progressive decline in the LLMs' performance.
To the best of our knowledge, this is the first work that models neurodegeneration with text data, compared to other works that operate in the computer vision domain.
arXiv Detail & Related papers (2024-03-15T18:00:00Z)
- Towards a Foundation Model for Brain Age Prediction using coVariance
Neural Networks [102.75954614946258]
Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline.
NeuroVNN is pre-trained as a regression model on a healthy population to predict chronological age.
NeuroVNN adds anatomical interpretability to brain age and has a "scale-free" characteristic that allows its transference to datasets curated according to any arbitrary brain atlas.
arXiv Detail & Related papers (2024-02-12T14:46:31Z)
- Deep Learning Models to Study Sentence Comprehension in the Human Brain [0.1503974529275767]
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding.
We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension.
arXiv Detail & Related papers (2023-01-16T10:31:25Z)
- Constraints on the design of neuromorphic circuits set by the properties
of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but
Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Coupling Visual Semantics of Artificial Neural Networks and Human Brain
Function via Synchronized Activations [13.956089436100106]
We propose a novel computational framework, Synchronized Activations (Sync-ACT) to couple the visual representation spaces and semantics between ANNs and BNNs.
With this approach, we are able to semantically annotate the neurons in ANNs with biologically meaningful descriptions derived from human brain imaging.
arXiv Detail & Related papers (2022-06-22T03:32:17Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
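The "compositional logical concepts" idea in the last entry can be sketched in miniature: binarize a neuron's activations, then score candidate concepts and simple logical compositions (AND, OR, AND NOT) of them by intersection-over-union with the neuron's activation mask. This is an illustrative toy, not the paper's procedure; all names and thresholds are hypothetical.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def explain_neuron(neuron_acts, concepts, threshold):
    """Score candidate concepts and length-2 logical compositions
    against a neuron's thresholded activation mask.

    neuron_acts : (n_inputs,) activation per input
    concepts    : dict mapping concept name -> (n_inputs,) boolean mask
    Returns the (name, mask) pair with the highest IoU.
    """
    neuron_mask = neuron_acts > threshold
    candidates = dict(concepts)
    names = list(concepts)
    # Enumerate simple compositions of concept pairs
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            candidates[f"{a} AND {b}"] = concepts[a] & concepts[b]
            candidates[f"{a} OR {b}"] = concepts[a] | concepts[b]
            candidates[f"{a} AND NOT {b}"] = concepts[a] & ~concepts[b]
    return max(candidates.items(), key=lambda kv: iou(neuron_mask, kv[1]))
```

A neuron that fires only on inputs that are both "water" and "blue" would then be explained by the composition "water AND blue" rather than by either concept alone.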
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.