OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities
- URL: http://arxiv.org/abs/2305.16334v1
- Date: Tue, 23 May 2023 09:36:51 GMT
- Title: OlaGPT: Empowering LLMs With Human-like Problem-Solving Abilities
- Authors: Yuanzhen Xie, Tao Xie, Mingxiong Lin, WenTao Wei, Chenglin Li, Beibei
Kong, Lei Chen, Chengxiang Zhuo, Bo Hu, Zang Li
- Abstract summary: This paper introduces a novel intelligent framework, referred to as OlaGPT.
OlaGPT studies a cognitive architecture framework and proposes to simulate certain aspects of human cognition.
The framework involves approximating different cognitive modules, including attention, memory, reasoning, learning, and corresponding scheduling and decision-making mechanisms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In most current research, large language models (LLMs) perform
reasoning tasks by generating chains of thought under the guidance of specific
prompts. However, a significant gap remains between their ability to solve
complex reasoning problems and that of humans. Most existing approaches focus
on chain-of-thought (CoT) prompting and tool use, without considering the
adoption and application of human cognitive frameworks. It is well known that
when confronting complex reasoning challenges, humans typically employ various
cognitive abilities and interact with tools, knowledge, and information from
the external environment to accomplish intricate tasks. This paper introduces
a novel intelligent framework, referred to as OlaGPT. OlaGPT studies a
cognitive architecture framework and proposes to simulate certain aspects of
human cognition. The framework approximates different cognitive modules,
including attention, memory, reasoning, and learning, along with corresponding
scheduling and decision-making mechanisms. Inspired by the active learning
mechanism of human beings, it introduces a learning unit that records previous
mistakes and expert opinions, and dynamically refers to them to strengthen the
model's ability to solve similar problems. The paper also outlines common
effective reasoning frameworks for human problem-solving and designs CoT
templates accordingly. A comprehensive decision-making mechanism is also
proposed to maximize model accuracy. The efficacy of OlaGPT has been
rigorously evaluated on multiple reasoning datasets, and the experimental
results show that OlaGPT surpasses state-of-the-art baselines. Our
implementation of OlaGPT is available on GitHub:
https://github.com/oladata-team/OlaGPT
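To make the described architecture concrete, the sketch below shows how such a
pipeline could be wired together in Python. This is a minimal illustration, not
the authors' implementation (see the linked repository for that): the `llm`
callable, the string-similarity retrieval in `LearningUnit`, the three example
CoT templates, and the majority-vote decision step are all illustrative
assumptions standing in for the paper's richer modules.

```python
from collections import Counter
from difflib import SequenceMatcher

# Illustrative OlaGPT-style pipeline sketch; not the authors' code.
# `llm` stands in for any completion function: prompt (str) -> answer (str).

COT_TEMPLATES = [  # stand-ins for the paper's reasoning-framework templates
    "Solve step by step:\n{question}",
    "Break the problem into sub-problems, solve each, then combine:\n{question}",
    "List the relevant facts first, then reason to the answer:\n{question}",
]

class LearningUnit:
    """Records previous mistakes/expert notes and retrieves similar ones."""

    def __init__(self):
        self.notes = []  # list of (question, correction) pairs

    def record(self, question, correction):
        self.notes.append((question, correction))

    def retrieve(self, question, k=2):
        # Assumption: plain string similarity; a real system would likely
        # use embeddings to find genuinely similar past problems.
        ranked = sorted(
            self.notes,
            key=lambda note: SequenceMatcher(None, note[0], question).ratio(),
            reverse=True,
        )
        return ranked[:k]

def solve(llm, question, learning_unit):
    """Run every reasoning template with retrieved notes, then vote."""
    hints = learning_unit.retrieve(question)
    hint_text = "".join(f"\nNote from a similar problem: {c}" for _, c in hints)
    answers = [llm(t.format(question=question) + hint_text) for t in COT_TEMPLATES]
    answer, _ = Counter(answers).most_common(1)[0]  # majority vote (assumption)
    return answer
```

A caller would supply its own `llm` function and seed the `LearningUnit` with
corrections gathered from earlier runs; the vote over template outputs is one
simple way a decision-making mechanism could combine the candidate answers.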
Related papers
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity
  Extrapolation (arXiv, 2024-10-11)
  Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework
  that integrates parametric and non-parametric memories. The method
  facilitates a more logical, step-wise reasoning approach akin to experts'
  problem-solving, rather than gold-answer retrieval.
- Unlocking Structured Thinking in Language Models with Cognitive Prompting
  (arXiv, 2024-10-03)
  We propose cognitive prompting as a novel approach to guide problem-solving
  in large language models and evaluate its effectiveness on Meta's LLaMA
  models.
- Mimicking Human Intuition: Cognitive Belief-Driven Q-Learning
  (arXiv, 2024-10-02)
  We propose Cognitive Belief-Driven Q-Learning (CBDQ), which integrates
  subjective belief modeling into the Q-learning framework. CBDQ enhances
  decision-making accuracy by endowing agents with human-like learning and
  reasoning capabilities. We evaluate the proposed method on discrete control
  benchmark tasks in various complex environments.
- Predicting and Understanding Human Action Decisions: Insights from Large
  Language Models and Cognitive Instance-Based Learning (arXiv, 2024-07-12)
  Large language models (LLMs) have demonstrated their capabilities across
  various tasks. This paper exploits the reasoning and generative capabilities
  of LLMs to predict human behavior in two sequential decision-making tasks,
  comparing their performance with a cognitive instance-based learning model
  that imitates human experiential decision-making.
- Coding for Intelligence from the Perspective of Category (arXiv, 2024-07-01)
  Coding targets compressing and reconstructing data; recent trends
  demonstrate the potential homogeneity of coding and intelligence. We propose
  the novel problem of Coding for Intelligence from the category theory view.
- MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs
  (arXiv, 2024-06-20)
  Large language models (LLMs) have shown increasing capability in
  problem-solving and decision-making. We present MR-Ben, a process-based
  benchmark that demands meta-reasoning skill; this paradigm is especially
  suited to evaluating system-2 slow thinking.
- Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought
  Reasoning to Language Agents (arXiv, 2023-11-20)
  Large language models (LLMs) have dramatically enhanced the field of
  language intelligence. LLMs leverage chain-of-thought (CoT) reasoning
  techniques, which oblige them to formulate intermediate steps en route to an
  answer. Recent research extends CoT reasoning methodologies to nurture the
  development of autonomous language agents.
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent
  Physical Commonsense Reasoning (arXiv, 2023-10-24)
  Heuristic-Analytic Reasoning (HAR) strategies drastically improve the
  coherence of rationalizations for model decisions. The findings suggest that
  human-like reasoning strategies can effectively improve the coherence and
  reliability of pretrained language model (PLM) reasoning.
- Confounder Identification-free Causal Visual Feature Learning
  (arXiv, 2021-11-26)
  We propose a novel Confounder Identification-free Causal Visual Feature
  Learning (CICF) method, which obviates the need for identifying confounders.
  CICF models the interventions among different samples based on the
  front-door criterion and then approximates the global-scope intervening
  effect from the instance-level interventions. We uncover the relation
  between CICF and the popular meta-learning strategy MAML, and provide a
  theoretical interpretation of why MAML works.
- Interpretable Reinforcement Learning Inspired by Piaget's Theory of
  Cognitive Development (arXiv, 2021-02-01)
  This paper entertains the idea that theories such as the language of thought
  hypothesis (LOTH), script theory, and Piaget's theory of cognitive
  development provide complementary approaches. The proposed framework can be
  viewed as a step towards achieving human-like cognition in artificial
  intelligence systems.