Data-driven models and computational tools for neurolinguistics: a
language technology perspective
- URL: http://arxiv.org/abs/2003.10540v1
- Date: Mon, 23 Mar 2020 20:41:51 GMT
- Title: Data-driven models and computational tools for neurolinguistics: a
language technology perspective
- Authors: Ekaterina Artemova and Amir Bakarov and Aleksey Artemov and Evgeny
Burnaev and Maxim Sharaev
- Abstract summary: We present a review of brain imaging-based neurolinguistic studies with a focus on natural language representations.
Mutual enrichment of neurolinguistics and language technologies leads to the development of brain-aware natural language representations.
- Score: 12.082438928980087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, our focus is the connection and influence of language
technologies on research in neurolinguistics. We present a review of brain
imaging-based neurolinguistic studies with a focus on natural language
representations, such as word embeddings and pre-trained language models.
Mutual enrichment of neurolinguistics and language technologies leads to the
development of brain-aware natural language representations. The importance of
this research area is emphasized by medical applications.
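A common thread across the studies reviewed here is the voxel-wise encoding model: a regularized linear map is fit from stimulus embeddings (e.g. word embeddings or language-model activations) to recorded brain responses, and evaluated by the held-out correlation per voxel, often called a "brain score". The sketch below is a minimal, generic illustration of that pipeline, not the method of any specific paper; the function names, the fixed ridge penalty `alpha`, and the array shapes are assumptions for the example.

```python
import numpy as np

def fit_encoding_model(X_train, Y_train, alpha=1.0):
    """Ridge regression from stimulus embeddings X (n_samples, n_dims)
    to brain responses Y (n_samples, n_voxels)."""
    n_dims = X_train.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha * I)^-1 X^T Y
    W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(n_dims),
                        X_train.T @ Y_train)
    return W

def brain_scores(W, X_test, Y_test):
    """Per-voxel Pearson correlation between predicted and observed
    held-out responses (one 'brain score' per voxel)."""
    Y_pred = X_test @ W
    Yp = (Y_pred - Y_pred.mean(0)) / (Y_pred.std(0) + 1e-8)
    Yo = (Y_test - Y_test.mean(0)) / (Y_test.std(0) + 1e-8)
    return (Yp * Yo).mean(0)
```

In practice the ridge penalty is chosen by cross-validation per voxel and responses are delayed to account for the hemodynamic lag, but the fit-then-correlate structure above is the shared skeleton.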
Related papers
- Large Language Model-based FMRI Encoding of Language Functions for Subjects with Neurocognitive Disorder [53.575426835313536]
This paper explores language-related functional changes in older adults with neurocognitive disorder (NCD) using LLM-based fMRI encoding and brain scores.
We analyze the correlation between brain scores and cognitive scores at both whole-brain and language-related ROI levels.
Our findings reveal that higher cognitive abilities correspond to better brain scores, with correlations peaking in the middle temporal gyrus.
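The brain-score/cognition relationship described above is typically a plain correlation computed across subjects. As a minimal illustration (the function name and inputs are hypothetical, not taken from the paper):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two 1-D arrays, e.g. per-subject
    brain scores vs. per-subject cognitive test scores."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Repeating this per region of interest (as in the whole-brain vs. language-ROI analysis above) yields a correlation profile over regions.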
arXiv Detail & Related papers (2024-07-15T01:09:08Z)
- Navigating Brain Language Representations: A Comparative Analysis of Neural Language Models and Psychologically Plausible Models [29.50162863143141]
We compare encoding performance of various neural language models and psychologically plausible models.
Surprisingly, our findings reveal that psychologically plausible models outperform neural language models across diverse contexts.
arXiv Detail & Related papers (2024-04-30T08:48:07Z)
- Tuning In to Neural Encoding: Linking Human Brain and Artificial Supervised Representations of Language [31.636016502455693]
We generate supervised representations on eight Natural Language Understanding (NLU) tasks using prompt-tuning.
We demonstrate that prompt-tuning yields representations that better predict neural responses to Chinese stimuli than traditional fine-tuning.
arXiv Detail & Related papers (2023-10-05T06:31:01Z)
- Deep Learning Models to Study Sentence Comprehension in the Human Brain [0.1503974529275767]
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding.
We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension.
arXiv Detail & Related papers (2023-01-16T10:31:25Z)
- Language Cognition and Language Computation -- Human and Machine Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- Connecting Neural Response measurements & Computational Models of language: a non-comprehensive guide [5.523143941738335]
Recent advances in language modelling and in neuroimaging promise improvements in the investigation of the neurobiology of language.
This survey traces a line from early research linking Event Related Potentials and complexity measures derived from simple language models to contemporary studies employing Artificial Neural Network models trained on large corpora.
arXiv Detail & Related papers (2022-03-10T11:24:54Z)
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects [82.81964713263483]
A popular approach to decomposing the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
arXiv Detail & Related papers (2021-10-12T15:30:21Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Does injecting linguistic structure into language models lead to better alignment with brain recordings? [13.880819301385854]
We evaluate whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.
Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain.
arXiv Detail & Related papers (2021-01-29T14:42:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.