One Model, Two Minds: A Context-Gated Graph Learner that Recreates Human Biases
- URL: http://arxiv.org/abs/2509.08705v1
- Date: Wed, 10 Sep 2025 15:55:14 GMT
- Title: One Model, Two Minds: A Context-Gated Graph Learner that Recreates Human Biases
- Authors: Shalima Binta Manir, Tim Oates
- Abstract summary: We introduce a novel Theory of Mind (ToM) framework inspired by dual-process theories from cognitive science. Our model balances intuitive and deliberative reasoning through a learned context gate mechanism. This work bridges artificial intelligence and cognitive theory, paving the way for AI systems exhibiting nuanced, human-like social cognition and adaptive decision-making capabilities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel Theory of Mind (ToM) framework inspired by dual-process theories from cognitive science, integrating a fast, habitual graph-based reasoning system (System 1), implemented via graph convolutional networks (GCNs), and a slower, context-sensitive meta-adaptive learning system (System 2), driven by meta-learning techniques. Our model dynamically balances intuitive and deliberative reasoning through a learned context gate mechanism. We validate our architecture on canonical false-belief tasks and systematically explore its capacity to replicate hallmark cognitive biases associated with dual-process theory, including anchoring, cognitive-load fatigue, framing effects, and priming effects. Experimental results demonstrate that our dual-process approach closely mirrors human adaptive behavior, achieves robust generalization to unseen contexts, and elucidates cognitive mechanisms underlying reasoning biases. This work bridges artificial intelligence and cognitive theory, paving the way for AI systems exhibiting nuanced, human-like social cognition and adaptive decision-making capabilities.
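The learned context gate described in the abstract can be illustrated with a minimal sketch: a scalar gate, computed from context features, blends the fast System 1 (GCN) prediction with the slow System 2 (meta-adapted) prediction. All names, dimensions, and values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_gate(context, W_g, b_g):
    # Scalar gate in (0, 1): how much weight the fast system receives
    return sigmoid(context @ W_g + b_g)

def dual_process_forward(context, fast_out, slow_out, W_g, b_g):
    # Convex combination of System 1 (fast) and System 2 (slow) outputs
    g = context_gate(context, W_g, b_g)
    return g * fast_out + (1.0 - g) * slow_out

rng = np.random.default_rng(0)
d = 8                              # context feature dimension (illustrative)
context = rng.normal(size=d)
W_g, b_g = rng.normal(size=d), 0.0
fast_out = np.array([0.9, 0.1])    # stand-in for a habitual GCN belief prediction
slow_out = np.array([0.2, 0.8])    # stand-in for a deliberative meta-adapted prediction
blended = dual_process_forward(context, fast_out, slow_out, W_g, b_g)
print(blended)
```

In a trained model the gate parameters `W_g`, `b_g` would be learned end-to-end, so that familiar contexts push the gate toward the habitual system and novel contexts toward the deliberative one.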
Related papers
- Visual Categorization Across Minds and Models: Cognitive Analysis of Human Labeling and Neuro-Symbolic Integration
This paper examines image labeling performance across human participants and deep neural networks. We contrast human strategies such as analogical reasoning, shape-based recognition, and confidence modulation with AI's feature-based processing. Our findings highlight key parallels and divergences between biological and artificial systems in representation, inference, and confidence calibration.
arXiv Detail & Related papers (2025-12-10T05:58:12Z)
- The Universal Landscape of Human Reasoning
We introduce Information Flow Tracking (IF-Track) to quantify information entropy and gain at each reasoning step. We show that IF-Track captures essential reasoning features, identifies systematic error patterns, and characterizes individual differences. This approach establishes a quantitative bridge between theory and measurement, offering mechanistic insights into the architecture of reasoning.
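The per-step entropy bookkeeping that IF-Track describes can be sketched in a few lines: compute the Shannon entropy of the model's belief distribution at each reasoning step, and read the entropy drop between consecutive steps as the information gained there. The function names and the toy trace are our own illustration, not the paper's implementation.

```python
import math

def entropy(probs):
    # Shannon entropy in bits of a discrete distribution
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(trace):
    # Entropy drop between consecutive belief states along a reasoning trace
    H = [entropy(p) for p in trace]
    return [H[i] - H[i + 1] for i in range(len(H) - 1)]

# Toy trace: the answer distribution sharpens over three reasoning steps
trace = [
    [0.25, 0.25, 0.25, 0.25],  # maximally uncertain (2.0 bits)
    [0.60, 0.20, 0.10, 0.10],
    [0.90, 0.05, 0.03, 0.02],
]
gains = information_gain(trace)
print(gains)  # positive values mean uncertainty was reduced at that step
```

A negative gain at some step would flag it as one where the model became more uncertain, which is one way such tracking can surface systematic error patterns.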
arXiv Detail & Related papers (2025-10-24T16:26:36Z)
- Think Socially via Cognitive Reasoning
We introduce Cognitive Reasoning, a paradigm modeled on human social cognition. CogFlow is a complete framework that instills this capability in LLMs.
arXiv Detail & Related papers (2025-09-26T16:27:29Z)
- Incentivizing Dual Process Thinking for Efficient Large Language Model Reasoning
Large reasoning models (LRMs) have demonstrated strong performance on complex reasoning tasks, but often suffer from overthinking. Inspired by the dual process theory in cognitive science, we propose Adaptive Cognition Policy Optimization (ACPO). ACPO enables LRMs to achieve efficient reasoning through adaptive cognitive allocation and dynamic system switching.
arXiv Detail & Related papers (2025-05-22T07:15:08Z)
- System 0/1/2/3: Quad-process theory for multi-timescale embodied collective cognitive systems
This paper introduces the System 0/1/2/3 framework as an extension of dual-process theory, employing a quad-process model of cognition. We contextualize this model within Bergson's philosophy by adopting multi-scale time theory to unify the diverse temporal dynamics of cognition.
arXiv Detail & Related papers (2025-03-08T09:31:53Z) - CogniDual Framework: Self-Training Large Language Models within a Dual-System Theoretical Framework for Improving Cognitive Tasks [39.43278448546028]
Kahneman's dual-system theory elucidates the human decision-making process, distinguishing between the rapid, intuitive System 1 and the deliberative, rational System 2.
Recent advancements have positioned large language models (LLMs) as formidable tools nearing human-level proficiency in various cognitive tasks.
This study introduces the CogniDual Framework for LLMs (CFLLMs), designed to assess whether LLMs can, through self-training, evolve from deliberate deduction to intuitive responses.
arXiv Detail & Related papers (2024-09-05T09:33:24Z) - Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption [0.2209921757303168]
We propose a novel program of reasoning for artificial intelligence (AI).
We show that AIs manifest an adaptive balancing of precision and efficiency, consistent with principles of resource-rational human cognition.
Our findings reveal a nuanced picture of AI cognition, where trade-offs between resources and objectives lead to the emulation of biological systems.
arXiv Detail & Related papers (2024-03-14T13:53:05Z) - A Novel Neural-symbolic System under Statistical Relational Learning [47.30190559449236]
We propose a neural-symbolic framework based on statistical relational learning, referred to as NSF-SRL. Results of symbolic reasoning are utilized to refine and correct the predictions made by deep learning models, while deep learning models enhance the efficiency of the symbolic reasoning process. We believe that this approach sets a new standard for neural-symbolic systems and will drive future research in the field of general artificial intelligence.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Interpretable Reinforcement Learning Inspired by Piaget's Theory of
Cognitive Development [1.7778609937758327]
This paper entertains the idea that theories such as the language of thought hypothesis (LOTH), script theory, and Piaget's cognitive development theory provide complementary approaches.
The proposed framework can be viewed as a step towards achieving human-like cognition in artificial intelligent systems.
arXiv Detail & Related papers (2021-02-01T00:29:01Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.