Large language models as linguistic simulators and cognitive models in human research
- URL: http://arxiv.org/abs/2402.04470v4
- Date: Sun, 20 Oct 2024 16:13:29 GMT
- Title: Large language models as linguistic simulators and cognitive models in human research
- Authors: Zhicheng Lin
- Abstract summary: The rise of large language models (LLMs) that generate human-like text has sparked debates over their potential to replace human participants in behavioral and cognitive research.
We critically evaluate this replacement perspective to appraise the fundamental utility of language models in psychology and social science.
This perspective reframes language models in behavioral and cognitive science as linguistic simulators and cognitive models that shed light on the similarities and differences between machine intelligence and human cognition and thought.
- Abstract: The rise of large language models (LLMs) that generate human-like text has sparked debates over their potential to replace human participants in behavioral and cognitive research. We critically evaluate this replacement perspective to appraise the fundamental utility of language models in psychology and social science. Through a five-dimension framework (characterization, representation, interpretation, implication, and utility), we identify six fallacies that undermine the replacement perspective: (1) equating token prediction with human intelligence, (2) assuming LLMs represent the average human, (3) interpreting alignment as explanation, (4) anthropomorphizing AI, (5) essentializing identities, and (6) treating LLMs as primary tools that directly reveal the human mind. Rather than replacement, the evidence and arguments are consistent with a simulation perspective, in which LLMs offer a new paradigm for simulating roles and modeling cognitive processes. We highlight limitations and considerations regarding internal, external, construct, and statistical validity, and provide methodological guidelines for effectively integrating LLMs into psychological research, with a focus on model selection, prompt design, interpretation, and ethical considerations. This perspective reframes language models in behavioral and cognitive science as linguistic simulators and cognitive models that shed light on the similarities and differences between machine intelligence and human cognition and thought.
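As a concrete illustration of the simulation perspective and the prompt-design guidance the abstract mentions, here is a minimal sketch of prompting a model to role-play a specified participant persona. The model name, persona, and survey item are hypothetical placeholders, not materials from the paper.

```python
# Hypothetical sketch: using an LLM as a role simulator rather than a stand-in
# for the "average human". Model, persona, and item are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # any causal LM works here

persona = ("You are simulating one study participant: a 34-year-old nurse "
           "who is risk-averse in financial decisions.")
item = "Would you prefer a guaranteed $50 or a 50% chance of $120? Explain briefly."

prompt = f"{persona}\n\nQuestion: {item}\nAnswer:"
response = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)
print(response[0]["generated_text"])
```

Varying the persona across runs, rather than averaging one model's outputs, is what distinguishes simulating roles from treating the model as a replacement for the average human (fallacy 2 above).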
Related papers
- Thinking beyond the anthropomorphic paradigm benefits LLM research [1.7392902719515677]
We analyze hundreds of thousands of computer science research articles from the past decade.
We present empirical evidence of the prevalence and growth of anthropomorphic terminology in research on large language models (LLMs).
We argue these conceptualizations may be limiting, and that challenging them opens up new pathways for understanding and improving LLMs beyond human analogies.
arXiv Detail & Related papers (2025-02-13T11:32:09Z)
- Human-like conceptual representations emerge from language prediction [72.5875173689788]
We investigated the emergence of human-like conceptual representations within large language models (LLMs).
We found that LLMs were able to infer concepts from definitional descriptions and construct representation spaces that converge towards a shared, context-independent structure.
Our work supports the view that LLMs serve as valuable tools for understanding complex human cognition and paves the way for better alignment between artificial and human intelligence.
arXiv Detail & Related papers (2025-01-21T23:54:17Z)
- Humanlike Cognitive Patterns as Emergent Phenomena in Large Language Models [2.9312156642007294]
We systematically review Large Language Models' capabilities across three important cognitive domains: decision-making biases, reasoning, and creativity.
On decision-making, our synthesis reveals that while LLMs demonstrate several human-like biases, some biases observed in humans are absent.
On reasoning, advanced LLMs like GPT-4 exhibit deliberative reasoning akin to human System-2 thinking, while smaller models fall short of human-level performance.
A distinct dichotomy emerges in creativity: while LLMs excel in language-based creative tasks, such as storytelling, they struggle with divergent thinking tasks that require real-world context.
arXiv Detail & Related papers (2024-12-20T02:26:56Z)
- Large Language Models as Neurolinguistic Subjects: Identifying Internal Representations for Form and Meaning [49.60849499134362]
This study investigates the linguistic understanding of Large Language Models (LLMs) with respect to the signifier (form) and the signified (meaning).
Traditional psycholinguistic evaluations often reflect statistical biases that may misrepresent LLMs' true linguistic capabilities.
We introduce a neurolinguistic approach that combines minimal-pair and diagnostic probing to analyze activation patterns across model layers.
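The summary describes the method only at a high level; the sketch below shows the generic minimal-pair idea (compare layer-wise activations for a grammatical vs. ungrammatical sentence), with the model, sentence pair, and distance metric chosen for illustration rather than taken from the paper.

```python
# Generic minimal-pair probing sketch (illustrative, not the paper's code):
# compare per-layer activations for a grammatical/ungrammatical sentence pair.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model choice
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

pair = ["The keys to the cabinet are on the table.",  # grammatical
        "The keys to the cabinet is on the table."]   # agreement violation

with torch.no_grad():
    hidden = [model(**tok(s, return_tensors="pt")).hidden_states for s in pair]

# hidden[i] is a tuple of (1, seq_len, dim) tensors: embeddings plus each layer.
for layer, (h0, h1) in enumerate(zip(*hidden)):
    # Mean-pool over tokens so sequences of different lengths stay comparable.
    dist = torch.dist(h0.mean(dim=1), h1.mean(dim=1)).item()
    print(f"layer {layer:2d}: L2 distance between pair = {dist:.3f}")
```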
arXiv Detail & Related papers (2024-11-12T04:16:44Z)
- Cross-lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models [16.0617753653454]
This study presents a comparative analysis of human performance and self-supervised learning (SSL) models on speech emotion recognition (SER).
We also compare the SER ability of models and humans at both the utterance and segment levels.
Our findings reveal that models, with appropriate knowledge transfer, can adapt to the target language and achieve performance comparable to native speakers.
arXiv Detail & Related papers (2024-09-25T13:27:17Z)
- Theoretical and Methodological Framework for Studying Texts Produced by Large Language Models [0.0]
This paper addresses the conceptual, methodological, and technical challenges in studying large language models (LLMs).
It builds on a theoretical framework that distinguishes between the LLM as a substrate and the entities the model simulates.
arXiv Detail & Related papers (2024-08-29T17:34:10Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, which integrates psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Human Simulacra: Benchmarking the Personification of Large Language Models [38.21708264569801]
Large language models (LLMs) are recognized as systems that closely mimic aspects of human intelligence.
This paper introduces a framework for constructing virtual characters' life stories from the ground up.
Experimental results demonstrate that our constructed simulacra can produce personified responses that align with their target characters.
arXiv Detail & Related papers (2024-02-28T09:11:14Z)
- Divergences between Language Models and Human Brains [59.100552839650774]
We systematically explore the divergences between human and machine language processing.
We identify two domains that LMs do not capture well: social/emotional intelligence and physical commonsense.
Our results show that fine-tuning LMs on these domains can improve their alignment with human brain responses.
arXiv Detail & Related papers (2023-11-15T19:02:40Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable to humans.
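As a generic illustration (assumed, not the paper's implementation), a concept bottleneck routes a PLM's representation through a small set of human-readable concept scores before the label, so predictions can be inspected concept by concept; the concept names and dimensions below are invented for the example.

```python
# Generic concept-bottleneck head (illustrative): embedding -> concept scores
# -> label, so each prediction can be read through the named concepts.
import torch
import torch.nn as nn

CONCEPTS = ["sentiment", "formality", "toxicity"]  # invented example concepts

class ConceptBottleneckHead(nn.Module):
    def __init__(self, hidden_dim: int = 768, n_classes: int = 2):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_dim, len(CONCEPTS))  # bottleneck
        self.to_label = nn.Linear(len(CONCEPTS), n_classes)      # label head

    def forward(self, embedding: torch.Tensor):
        concepts = torch.sigmoid(self.to_concepts(embedding))  # per-concept scores
        return self.to_label(concepts), concepts

head = ConceptBottleneckHead()
logits, concepts = head(torch.randn(1, 768))  # stand-in for a PLM embedding
print(dict(zip(CONCEPTS, concepts.squeeze(0).tolist())))
```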
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study how language drives and influences social reasoning in a probabilistic goal-inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
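For intuition, here is a toy sketch of the inverse-planning computation the summary alludes to (not NIPE itself): infer an agent's goal from an observed action via Bayes' rule. In the neuro-symbolic setup, an LLM would first translate the linguistic scenario into symbolic conditions like these; the goals and likelihood values below are invented.

```python
# Toy Bayesian goal inference (illustrative, hand-specified numbers).
goals = {"get_coffee": 0.5, "get_tea": 0.5}       # uniform prior over goals
likelihood = {"get_coffee": 0.9, "get_tea": 0.2}  # P(walk toward kitchen | goal)

posterior = {g: goals[g] * likelihood[g] for g in goals}
z = sum(posterior.values())
posterior = {g: p / z for g, p in posterior.items()}
print(posterior)  # ~{'get_coffee': 0.82, 'get_tea': 0.18}
```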
arXiv Detail & Related papers (2023-06-25T19:38:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.