Tuning In to Neural Encoding: Linking Human Brain and Artificial
Supervised Representations of Language
- URL: http://arxiv.org/abs/2310.04460v1
- Date: Thu, 5 Oct 2023 06:31:01 GMT
- Title: Tuning In to Neural Encoding: Linking Human Brain and Artificial
Supervised Representations of Language
- Authors: Jingyuan Sun, Xiaohan Zhang and Marie-Francine Moens
- Abstract summary: We generate supervised representations on eight Natural Language Understanding (NLU) tasks using prompt-tuning.
We demonstrate that prompt-tuning yields representations that better predict neural responses to Chinese stimuli than traditional fine-tuning.
- Score: 31.636016502455693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To understand the algorithm that supports the human brain's language
representation, previous research has attempted to predict neural responses to
linguistic stimuli using embeddings generated by artificial neural networks
(ANNs), a process known as neural encoding. However, most of these studies have
focused on probing neural representations of Germanic languages, such as
English, with unsupervised ANNs. In this paper, we propose to bridge the gap
between human brain and supervised ANN representations of the Chinese language.
Specifically, we investigate how task tuning influences a pretrained Transformer
for neural encoding and which tasks lead to the best encoding performance. We
generate supervised representations on eight Natural Language Understanding
(NLU) tasks using prompt-tuning, a technique that is seldom explored in neural
encoding for language. We demonstrate that prompt-tuning yields representations
that better predict neural responses to Chinese stimuli than traditional
fine-tuning on four tasks. Furthermore, we discover that tasks that require a
fine-grained processing of concepts and entities lead to representations that
are most predictive of brain activation patterns. Additionally, we reveal that
the proportion of tuned parameters highly influences the neural encoding
performance of fine-tuned models. Overall, our experimental findings could help
us better understand the relationship between supervised artificial and brain
language representations.
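As a concrete illustration of the neural encoding setup described above, the sketch below maps stimulus representations to voxel-wise brain responses with a ridge regression and scores held-out predictions with per-voxel Pearson correlation. It is a minimal sketch, not the authors' pipeline: the arrays are synthetic stand-ins for the prompt-tuned Transformer representations and the fMRI responses to Chinese stimuli, and the single regularization strength and the scoring function are illustrative assumptions.

```python
# Minimal neural-encoding sketch (synthetic data, not the paper's pipeline):
# a ridge regression maps ANN-derived stimulus representations (e.g. from a
# prompt-tuned Transformer) to voxel-wise fMRI responses; held-out predictions
# are scored with per-voxel Pearson correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 768, 500   # sentences x hidden size x voxels

X = rng.standard_normal((n_stimuli, n_features))  # stand-in for Transformer representations
W_true = 0.1 * rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + rng.standard_normal((n_stimuli, n_voxels))  # stand-in for fMRI responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# A single alpha for brevity; in practice the regularization strength is
# usually tuned per voxel with (nested) cross-validation.
encoder = Ridge(alpha=10.0)
encoder.fit(X_tr, Y_tr)
Y_pred = encoder.predict(X_te)

def pearson_per_voxel(a, b):
    """Pearson correlation between matching columns of a and b."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

r = pearson_per_voxel(Y_te, Y_pred)
print(f"mean encoding correlation across held-out stimuli: {r.mean():.3f}")
```

Ridge regularization is a common choice in encoding studies because the representation dimensionality often rivals or exceeds the number of stimuli; the paper's actual regression and evaluation details are given in the full text.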
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons (ANs) with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Decoding Linguistic Representations of Human Brain [21.090956290947275]
We present a taxonomy of brain-to-language decoding of both textual and speech formats.
This work integrates two types of research: neuroscience focusing on language understanding and deep learning-based brain decoding.
arXiv Detail & Related papers (2024-07-30T07:55:44Z)
- Language Reconstruction with Brain Predictive Coding from fMRI Data [28.217967547268216]
The theory of predictive coding suggests that the human brain naturally engages in continuously predicting future word representations.
PredFT achieves current state-of-the-art decoding performance with a maximum BLEU-1 score of 27.8% (a short BLEU-1 scoring sketch appears after this list).
arXiv Detail & Related papers (2024-05-19T16:06:02Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Fine-tuned vs. Prompt-tuned Supervised Representations: Which Better Account for Brain Language Representations? [30.495681024162835]
We compare prompt-tuned and fine-tuned representations in neural decoding.
We find that a more brain-consistent tuning method yields representations that better correlate with brain data.
This indicates that our brain encodes more fine-grained concept information than shallow syntactic information.
arXiv Detail & Related papers (2023-10-03T07:34:30Z)
- BrainBERT: Self-supervised representation learning for intracranial recordings [18.52962864519609]
We create a reusable Transformer, BrainBERT, for intracranial recordings, bringing modern representation learning approaches to neuroscience.
Much like in NLP and speech recognition, this Transformer enables classifying complex concepts, with higher accuracy and with much less data.
In the future, far more concepts will be decodable from neural recordings by using representation learning, potentially unlocking the brain like language models unlocked language.
arXiv Detail & Related papers (2023-02-28T07:40:37Z)
- Deep Learning Models to Study Sentence Comprehension in the Human Brain [0.1503974529275767]
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding.
We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension.
arXiv Detail & Related papers (2023-01-16T10:31:25Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Toward a realistic model of speech processing in the brain with self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform constitute a promising candidate.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
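The BLEU-1 figure cited in the "Language Reconstruction with Brain Predictive Coding from fMRI Data" entry above is unigram BLEU: modified unigram precision combined with a brevity penalty. The sketch below shows one way such a score could be computed with NLTK; the sentences are invented examples, not outputs from that paper.

```python
# BLEU-1 scoring sketch for a decoded sentence against a reference transcript.
# Weights (1, 0, 0, 0) restrict BLEU to unigram precision; a brevity penalty
# still applies. The sentences are made-up examples.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the quick brown fox jumps over the lazy dog".split()
decoded = "a quick brown fox jumped over the dog".split()

bleu1 = sentence_bleu(
    [reference],                   # list of tokenized references
    decoded,                       # tokenized hypothesis
    weights=(1.0, 0.0, 0.0, 0.0),  # unigram-only weighting = BLEU-1
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-1: {bleu1:.3f}")
```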
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.