Context Limitations Make Neural Language Models More Human-Like
- URL: http://arxiv.org/abs/2205.11463v1
- Date: Mon, 23 May 2022 17:01:13 GMT
- Title: Context Limitations Make Neural Language Models More Human-Like
- Authors: Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
- Abstract summary: We show discrepancies in context access between modern neural language models (LMs) and humans in incremental sentence processing.
An additional context limitation was needed to make LMs better simulate human reading behavior.
Our analyses also showed that human-LM gaps in memory access are associated with specific syntactic constructions.
- Score: 32.488137777336036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Do modern natural language processing (NLP) models exhibit human-like
language processing? How can they be made more human-like? These questions are
motivated by psycholinguistic studies for understanding human language
processing as well as engineering efforts. In this study, we demonstrate the
discrepancies in context access between modern neural language models (LMs) and
humans in incremental sentence processing. An additional context limitation was
needed to make LMs better simulate human reading behavior. Our analyses also
showed that human-LM gaps in memory access are associated with specific
syntactic constructions; incorporating additional syntactic factors into LMs'
context access could enhance their cognitive plausibility.
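The central manipulation described in the abstract, estimating each word's surprisal while the LM may condition on only a limited span of preceding context, can be sketched in a few lines. The snippet below is a minimal illustration under assumptions of my own (a HuggingFace GPT-2 model, a simple token-count window `context_window`, and bits as the surprisal unit); it is not the authors' released code or their exact context-limitation scheme.

```python
# Minimal sketch (not the paper's implementation): per-token surprisal from GPT-2
# with the visible context truncated to the most recent `context_window` tokens.
# Assumes the HuggingFace `transformers` and `torch` packages; the model choice
# ("gpt2") and the window size are illustrative placeholders.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def limited_context_surprisals(sentence: str, context_window: int = 4):
    """Return (token, surprisal in bits) pairs under a truncated context."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    results = []
    for t in range(1, len(ids)):
        # Keep only the most recent `context_window` tokens as conditioning context.
        start = max(0, t - context_window)
        context = ids[start:t].unsqueeze(0)
        with torch.no_grad():
            logits = model(context).logits[0, -1]
        log_probs = torch.log_softmax(logits, dim=-1)
        surprisal_bits = -log_probs[ids[t]].item() / math.log(2)
        results.append((tokenizer.decode(ids[t].item()), surprisal_bits))
    return results


if __name__ == "__main__":
    for tok, s in limited_context_surprisals("The horse raced past the barn fell ."):
        print(f"{tok!r}\t{s:.2f} bits")
```

Sweeping `context_window` and checking which setting best predicts human reading times would mirror, in spirit, the comparison between full-context and context-limited LMs that the abstract describes.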
Related papers
- From Babbling to Fluency: Evaluating the Evolution of Language Models in Terms of Human Language Acquisition [6.617999710257379]
We propose a three-stage framework to assess the abilities of LMs.
We evaluate the generative capacities of LMs using methods from linguistic research.
arXiv Detail & Related papers (2024-10-17T06:31:49Z)
- Lost in Translation: The Algorithmic Gap Between LMs and the Brain [8.799971499357499]
Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear.
This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis.
We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models.
arXiv Detail & Related papers (2024-07-05T17:43:16Z)
- Divergences between Language Models and Human Brains [63.405788999891335]
Recent research has hinted that brain signals can be effectively predicted using internal representations of language models (LMs).
We show that there are clear differences in how LMs and humans represent and use language.
We identify two domains that are not captured well by LMs: social/emotional intelligence and physical commonsense.
arXiv Detail & Related papers (2023-11-15T19:02:40Z)
- Visual Grounding Helps Learn Word Meanings in Low-Data Regimes [47.7950860342515]
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension.
But to achieve these results, LMs must be trained in distinctly un-human-like ways.
Do models trained more naturalistically -- with grounded supervision -- exhibit more humanlike language learning?
We investigate this question in the context of word learning, a key sub-task in language acquisition.
arXiv Detail & Related papers (2023-10-20T03:33:36Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Does Vision Accelerate Hierarchical Generalization in Neural Language Learners? [32.9355090864485]
This study explores the advantage of grounded language acquisition, specifically the impact of visual information on syntactic generalization in neural language models (LMs).
Our experiments show that if the alignments between the linguistic and visual components are clear in the input, access to vision data does help with the syntactic generalization of LMs, but if not, visual input does not help.
This highlights the need for additional biases or signals, such as mutual gaze, to enhance cross-modal alignment and enable efficient syntactic generalization in multimodal LMs.
arXiv Detail & Related papers (2023-02-01T18:53:42Z)
- Towards a neural architecture of language: Deep learning versus logistics of access in neural architectures for compositional processing [0.0]
GPT and brain language processing mechanisms are fundamentally different.
GPT-style models do not possess the logistics of access needed for compositional and productive human language processing.
Investigating learning methods could reveal how 'learned cognition' as found in deep learning could develop in the brain.
arXiv Detail & Related papers (2022-10-19T13:31:26Z) - Transparency Helps Reveal When Language Models Learn Meaning [71.96920839263457]
Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations, both autoregressive and masked language models learn to emulate semantic relations between expressions.
Turning to natural language, our experiments with a specific phenomenon -- referential opacity -- add to the growing body of evidence that current language models do not well-represent natural language semantics.
arXiv Detail & Related papers (2022-10-14T02:35:19Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Towards Abstract Relational Learning in Human Robot Interaction [73.67226556788498]
Humans have a rich representation of the entities in their environment.
If robots are to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
arXiv Detail & Related papers (2020-11-20T12:06:46Z)
- Discourse structure interacts with reference but not syntax in neural language models [17.995905582226463]
We study the ability of language models (LMs) to learn interactions between different linguistic representations.
We find that, contrary to humans, implicit causality only influences LM behavior for reference, not syntax.
Our results suggest that LM behavior can contradict not only learned representations of discourse but also syntactic agreement.
arXiv Detail & Related papers (2020-10-10T03:14:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.