Divergences between Language Models and Human Brains
- URL: http://arxiv.org/abs/2311.09308v2
- Date: Mon, 5 Feb 2024 02:21:59 GMT
- Title: Divergences between Language Models and Human Brains
- Authors: Yuchen Zhou, Emmy Liu, Graham Neubig, Michael J. Tarr, Leila Wehbe
- Abstract summary: Recent research has hinted that brain signals can be effectively predicted using the internal representations of language models (LMs).
We show that there are clear differences in how LMs and humans represent and use language.
We identify two domains that are not captured well by LMs: social/emotional intelligence and physical commonsense.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Do machines and humans process language in similar ways? Recent research has
hinted in the affirmative, finding that brain signals can be effectively
predicted using the internal representations of language models (LMs). Although
such results are thought to reflect shared computational principles between LMs
and human brains, there are also clear differences in how LMs and humans
represent and use language. In this work, we systematically explore the
divergences between human and machine language processing by examining the
differences between LM representations and human brain responses to language as
measured by magnetoencephalography (MEG) across two datasets in which subjects
read and listened to narrative stories. Using a data-driven approach, we
identify two domains that are not captured well by LMs: social/emotional
intelligence and physical commonsense. We then validate these domains with
human behavioral experiments and show that fine-tuning LMs on these domains can
improve their alignment with human brain responses.
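
The prediction setup the abstract refers to is a standard encoding-model analysis: regularized regression from LM hidden states to recorded brain responses, scored by held-out correlation. Below is a minimal sketch of that pipeline; the feature and MEG arrays are random placeholders, not the paper's data.

```python
# Minimal sketch of an encoding-model analysis: ridge regression from
# LM hidden states to brain responses, scored by held-out correlation.
# The arrays below are hypothetical placeholders, not the paper's data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
lm_features = rng.standard_normal((1000, 768))    # one LM embedding per word/time point
meg_responses = rng.standard_normal((1000, 102))  # time-aligned MEG sensor responses

X_tr, X_te, y_tr, y_te = train_test_split(
    lm_features, meg_responses, test_size=0.2, random_state=0
)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-sensor Pearson correlation between predicted and observed responses:
r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(y_te.shape[1])]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```

On naturalistic story data, contiguous (blocked) train/test splits are preferable to the random split used here, since neighboring time points are strongly autocorrelated.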
Related papers
- Lost in Translation: The Algorithmic Gap Between LMs and the Brain (arXiv, 2024-07-05)
Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear.
This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis.
We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models.
- Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network (arXiv, 2024-06-21)
Large Language Models (LLMs) have been shown to be effective models of the human language system.
In this work, we investigate the key architectural components driving the surprising alignment of untrained models.
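
As a rough illustration of the architecture in question, the sketch below extracts features from a single randomly initialized (untrained) multihead attention layer. The dimensions, vocabulary size, and random token inputs are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: features from a single untrained (randomly initialized)
# multihead attention layer over token embeddings, usable as a
# brain-encoding feature space. All sizes are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_heads, seq_len, vocab = 256, 8, 20, 50_000

embed = nn.Embedding(vocab, d_model)  # random, untrained token embeddings
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)  # untrained weights

token_ids = torch.randint(0, vocab, (1, seq_len))  # stand-in for a tokenized sentence
with torch.no_grad():
    x = embed(token_ids)
    features, _ = attn(x, x, x)  # self-attention with no training at all

print(features.shape)  # (1, seq_len, d_model): one feature vector per token
```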
- Do Large Language Models Mirror Cognitive Language Processing? (arXiv, 2024-02-28)
Large Language Models (LLMs) have demonstrated remarkable abilities in text comprehension and logical reasoning.
In cognitive science, brain cognitive processing signals are typically utilized to study human language processing.
We employ Representational Similarity Analysis (RSA) to measure the alignment between 23 mainstream LLMs and fMRI signals of the brain.
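
In outline, RSA compares representational geometries rather than raw activations: compute a dissimilarity matrix over stimuli for the model and for the brain, then rank-correlate the two. A minimal sketch, with random placeholder data:

```python
# Sketch of Representational Similarity Analysis (RSA): compare a model's
# representational geometry with the brain's by correlating their
# dissimilarity matrices. Inputs here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
llm_reps = rng.standard_normal((50, 768))     # 50 stimuli x LLM embedding dim
brain_reps = rng.standard_normal((50, 2000))  # 50 stimuli x fMRI voxels

# Condensed pairwise dissimilarity vectors (upper triangle of each RDM):
rdm_model = pdist(llm_reps, metric="correlation")
rdm_brain = pdist(brain_reps, metric="correlation")

rho, _ = spearmanr(rdm_model, rdm_brain)  # rank correlation of the two RDMs
print(f"RSA alignment (Spearman rho): {rho:.3f}")
```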
- Instruction-tuning Aligns LLMs to the Human Brain (arXiv, 2023-12-01)
We investigate the effect of instruction-tuning on aligning large language models and human language processing mechanisms.
We find that instruction-tuning generally enhances brain alignment, but has no similar effect on behavioral alignment.
Our results suggest that the mechanisms that encode world knowledge in LLMs also improve representational alignment to the human brain.
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs (arXiv, 2023-06-25)
We study how language drives and influences social reasoning in a probabilistic goal inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
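
The full neuro-symbolic model is beyond a short sketch, but its core inverse-planning step is Bayes' rule over candidate goals given observed actions. The toy example below illustrates only that step; the goals and likelihood values are invented for illustration, not taken from the paper.

```python
# Toy illustration of probabilistic goal inference (inverse planning):
# P(goal | observed action) is proportional to P(action | goal) * P(goal).
# Goals and likelihoods are invented; NIPE itself derives them from an
# LLM-translated symbolic program rather than a hand-written table.
goals = ["get water", "get snack", "leave room"]
prior = {g: 1 / len(goals) for g in goals}

# Hypothetical likelihood of the observed action "walks toward the kitchen"
# under each candidate goal:
likelihood = {"get water": 0.6, "get snack": 0.35, "leave room": 0.05}

unnorm = {g: likelihood[g] * prior[g] for g in goals}
z = sum(unnorm.values())
posterior = {g: p / z for g, p in unnorm.items()}

for g, p in posterior.items():
    print(f"P({g!r} | action) = {p:.2f}")
```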
- Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context (arXiv, 2023-02-28)
We trained a lexical language model, GloVe, and a supra-lexical language model, GPT-2, on a text corpus.
We then assessed to what extent these information-restricted models were able to predict the time-courses of fMRI signal of humans listening to naturalistic text.
Our analyses show that, while most brain regions involved in language are sensitive to both syntactic and semantic variables, the relative magnitudes of these effects vary considerably across regions.
- Toward a realistic model of speech processing in the brain with self-supervised learning (arXiv, 2022-06-03)
Self-supervised algorithms trained on the raw waveform are a promising candidate for a realistic model of speech processing in the brain.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
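
For context, the layer-wise Wav2Vec 2.0 representations that such studies compare against brain recordings can be extracted as sketched below. This assumes the public facebook/wav2vec2-base checkpoint from the transformers library, with a random placeholder standing in for real 16 kHz speech.

```python
# Sketch: extract layer-wise Wav2Vec 2.0 representations of an audio clip,
# the kind of features such studies regress against brain recordings.
# Uses the public "facebook/wav2vec2-base" checkpoint; the waveform is a
# random placeholder standing in for real speech sampled at 16 kHz.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

waveform = torch.randn(16_000)  # 1 second of fake audio at 16 kHz
inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# One (batch, time, dim) tensor per transformer layer (plus the input):
for i, h in enumerate(out.hidden_states):
    print(f"layer {i}: {tuple(h.shape)}")
```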
- Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects (arXiv, 2021-10-12)
A popular approach to decomposing the neural bases of language is to correlate, across individuals, the brain responses to different stimuli.
Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli.
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses (arXiv, 2021-06-09)
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
- Presentation and Analysis of a Multimodal Dataset for Grounded Language Learning (arXiv, 2020-07-29)
Grounded language acquisition involves learning how language-based interactions refer to the world around the speaker.
In practice, the data used for learning tends to be cleaner, clearer, and more grammatical than actual human interactions.
We present a dataset of common household objects described by people using either spoken or written language.
This list is automatically generated from the titles and abstracts of the papers in this site.