Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences, and Challenges
- URL: http://arxiv.org/abs/2409.02387v5
- Date: Mon, 18 Nov 2024 03:17:32 GMT
- Title: Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences, and Challenges
- Authors: Qian Niu, Junyu Liu, Ziqian Bi, Pohsun Feng, Benji Peng, Keyu Chen, Ming Li, Lawrence KQ Yan, Yichao Zhang, Caitlyn Heqi Yin, Cheng Fei, Tianyang Wang, Yunze Wang, Silin Chen,
- Abstract summary: This comprehensive review explores the intersection of Large Language Models (LLMs) and cognitive science.
We analyze methods for evaluating LLMs' cognitive abilities and discuss their potential as cognitive models.
We assess cognitive biases and limitations of LLMs, along with proposed methods for improving their performance.
- Score: 12.390859712280324
- License:
- Abstract: This comprehensive review explores the intersection of Large Language Models (LLMs) and cognitive science, examining similarities and differences between LLMs and human cognitive processes. We analyze methods for evaluating LLMs' cognitive abilities and discuss their potential as cognitive models. The review covers applications of LLMs in various cognitive fields, highlighting insights gained for cognitive science research. We assess cognitive biases and limitations of LLMs, along with proposed methods for improving their performance. The integration of LLMs with cognitive architectures is examined, revealing promising avenues for enhancing artificial intelligence (AI) capabilities. Key challenges and future research directions are identified, emphasizing the need for continued refinement of LLMs to better align with human cognition. This review provides a balanced perspective on the current state and future potential of LLMs in advancing our understanding of both artificial and human intelligence.
Related papers
- A Survey on Human-Centric LLMs [11.49752599240738]
Large language models (LLMs) can simulate human cognition and behavior.
This survey focuses on their performance in both individual tasks and collective tasks.
arXiv Detail & Related papers (2024-11-20T12:34:44Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
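To make the idea of embedding a symbolic decision process as latent vectors concrete, here is a minimal, hypothetical sketch: it hashes each step of an ACT-R-style trace to a pseudo-random vector and mean-pools the result. The trace format, the dimensionality, and the `encode_trace` helper are illustrative assumptions, not the LLM-ACTR implementation.
```python
# Minimal sketch: turning symbolic ACT-R-style decision traces into fixed-length
# vectors that a neural model could consume. The trace format and hashing scheme
# are illustrative assumptions, not the LLM-ACTR implementation.
import hashlib
import numpy as np

DIM = 64  # latent dimensionality (assumed for illustration)

def step_vector(step: str) -> np.ndarray:
    """Map one symbolic step (e.g. a fired production rule) to a pseudo-random unit vector."""
    seed = int(hashlib.sha256(step.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

def encode_trace(trace: list[str]) -> np.ndarray:
    """Mean-pool step vectors into one latent representation of the decision trace."""
    return np.mean([step_vector(s) for s in trace], axis=0)

# Hypothetical trace of production firings for a manufacturing decision.
trace = ["retrieve-spec", "check-tolerance", "select-process:milling", "commit-decision"]
latent = encode_trace(trace)
print(latent.shape)  # (64,) -- could be injected into an LLM as a conditioning vector
```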
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- CogLM: Tracking Cognitive Development of Large Language Models [20.138831477848615]
We construct a benchmark CogLM based on Piaget's Theory of Cognitive Development.
CogLM comprises 1,220 questions spanning 10 cognitive abilities crafted by more than 20 human experts.
We find that human-like cognitive abilities have emerged in advanced LLMs (GPT-4), comparable to those of a 20-year-old human.
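As a rough illustration of how such a benchmark can be scored, the sketch below groups questions by cognitive ability and reports per-ability accuracy. The question schema and the `ask_model` stub are assumptions for the example, not the released CogLM code.
```python
# Sketch of a per-ability benchmark scorer in the spirit of CogLM. The question
# schema and the ask_model() stub are placeholders, not the benchmark's own code.
from collections import defaultdict

questions = [
    {"ability": "conservation", "prompt": "…", "answer": "B"},
    {"ability": "perspective-taking", "prompt": "…", "answer": "A"},
]

def ask_model(prompt: str) -> str:
    """Placeholder for an LLM call; would return the model's chosen option."""
    return "A"

def score(questions):
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        total[q["ability"]] += 1
        if ask_model(q["prompt"]).strip() == q["answer"]:
            correct[q["ability"]] += 1
    return {a: correct[a] / total[a] for a in total}

print(score(questions))  # per-ability accuracy, e.g. {'conservation': 0.0, 'perspective-taking': 1.0}
```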
arXiv Detail & Related papers (2024-08-17T09:49:40Z)
- Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning [0.0]
Large Language Models (LLMs) have demonstrated their capabilities across various tasks.
This paper exploits the reasoning and generative capabilities of LLMs to predict human behavior in two sequential decision-making tasks.
We compare the performance of LLMs with a cognitive instance-based learning model, which imitates human experiential decision-making.
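For readers unfamiliar with instance-based learning (IBL), the toy sketch below implements the core loop of ACT-R-style activation, retrieval weighting, and blending for a binary choice task. The parameter values and the simplified per-observation activation are illustrative assumptions rather than the paper's exact model.
```python
# Minimal instance-based learning (IBL) chooser with ACT-R-style activation and
# blending. Parameters and the binary-choice setup are illustrative assumptions.
import math, random

DECAY, NOISE = 0.5, 0.25
TAU = NOISE * math.sqrt(2)

memory = {"A": [], "B": []}   # option -> list of (timestamp, observed outcome)

def activation(timestamps, now):
    """Base-level activation from recency plus Gaussian noise."""
    base = math.log(sum((now - tj) ** -DECAY for tj in timestamps))
    return base + NOISE * random.gauss(0, 1)

def blended_value(option, now):
    # Weight each stored instance's outcome by its retrieval probability.
    instances = memory[option]
    acts = [activation([ts], now) for ts, _ in instances]
    weights = [math.exp(a / TAU) for a in acts]
    z = sum(weights)
    return sum(w / z * outcome for w, (_, outcome) in zip(weights, instances))

def choose(now):
    # Pick the option with the higher blended value; unexplored options get a prior of 1.0.
    return max(memory, key=lambda o: blended_value(o, now) if memory[o] else 1.0)

for t in range(1, 51):
    c = choose(t)
    reward = random.random() < (0.7 if c == "A" else 0.3)   # hypothetical payoffs
    memory[c].append((t, float(reward)))
```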
arXiv Detail & Related papers (2024-07-12T14:13:06Z)
- The Potential and Challenges of Evaluating Attitudes, Opinions, and Values in Large Language Models [28.743404185915697]
This paper provides a comprehensive overview of recent works on the evaluation of Attitudes, Opinions, and Values (AOVs) in Large Language Models (LLMs).
By doing so, we address the potential and challenges with respect to understanding the model, human-AI alignment, and downstream application in social sciences.
arXiv Detail & Related papers (2024-06-16T22:59:18Z)
- Evaluating the External and Parametric Knowledge Fusion of Large Language Models [72.40026897037814]
We develop a systematic pipeline for data construction and knowledge infusion to simulate knowledge fusion scenarios.
Our investigation reveals that enhancing parametric knowledge within LLMs can significantly bolster their capability for knowledge integration.
Our findings aim to steer future explorations on harmonizing external and parametric knowledge within LLMs.
arXiv Detail & Related papers (2024-05-29T11:48:27Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach [50.125704610228254]
Large Language Models (LLMs) have not only exhibited exceptional performance across various tasks, but also demonstrated sparks of intelligence.
Recent studies have focused on assessing their capabilities on human exams and revealed their impressive competence in different domains.
We conduct an evaluation using MoocRadar, a meticulously annotated human test dataset based on Bloom's taxonomy.
arXiv Detail & Related papers (2023-10-12T09:55:45Z)
- Investigating the Factual Knowledge Boundary of Large Language Models with Retrieval Augmentation [109.8527403904657]
We show that large language models (LLMs) display unwavering confidence in their own knowledge and struggle to resolve conflicts between internal and external knowledge.
Retrieval augmentation proves to be an effective approach in enhancing LLMs' awareness of knowledge boundaries.
We propose a simple method to dynamically utilize supporting documents with our judgement strategy.
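A minimal sketch of such a judge-then-answer loop is given below, assuming a hypothetical `llm()` helper; the paper's actual judgement strategy may use different prompts and decision rules.
```python
# Hedged sketch of a "judge, then answer" use of retrieved documents. The prompts
# and the llm() helper are hypothetical placeholders, not the paper's exact method.
def llm(prompt: str) -> str:
    """Placeholder for a call to any chat/completions API."""
    raise NotImplementedError

def answer_with_retrieval(question: str, documents: list[str]) -> str:
    context = "\n\n".join(documents)
    verdict = llm(
        f"Question: {question}\n\nDocuments:\n{context}\n\n"
        "Do these documents contain enough information to answer the question? Reply yes or no."
    )
    if verdict.strip().lower().startswith("yes"):
        # Ground the answer in the supporting documents.
        return llm(f"Answer using only the documents below.\n\n{context}\n\nQuestion: {question}")
    # Otherwise fall back to parametric knowledge, flagging the uncertainty.
    return llm(f"Answer from your own knowledge and state if you are unsure.\nQuestion: {question}")
```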
arXiv Detail & Related papers (2023-07-20T16:46:10Z)
- Do Large Language Models Know What They Don't Know? [74.65014158544011]
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend.
This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
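One simple way to operationalize this kind of evaluation is to measure how often a model declines unanswerable questions, as in the hedged sketch below; the refusal markers and the `get_answer` stub are assumptions, not the study's protocol.
```python
# Sketch of scoring "knows what it doesn't know": count how often a model declines
# unanswerable questions. The refusal phrases and get_answer() stub are assumptions.
REFUSAL_MARKERS = ("i don't know", "cannot be answered", "unknown", "no one knows", "not sure")

def get_answer(question: str) -> str:
    """Placeholder for an LLM call."""
    return "I don't know; this question has no established answer."

def self_knowledge_rate(unanswerable_questions: list[str]) -> float:
    refused = sum(
        any(m in get_answer(q).lower() for m in REFUSAL_MARKERS)
        for q in unanswerable_questions
    )
    return refused / len(unanswerable_questions)

print(self_knowledge_rate(["What will the winning lottery numbers be next week?"]))  # 1.0
```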
arXiv Detail & Related papers (2023-05-29T15:30:13Z)