The Universal Landscape of Human Reasoning
- URL: http://arxiv.org/abs/2510.21623v1
- Date: Fri, 24 Oct 2025 16:26:36 GMT
- Title: The Universal Landscape of Human Reasoning
- Authors: Qiguang Chen, Jinhao Liu, Libo Qin, Yimeng Zhang, Yihao Liang, Shangxu Ren, Chengyu Luan, Dengyun Peng, Hanjing Li, Jiannan Guan, Zheng Yan, Jiaqi Wang, Mengkang Hu, Yantao Du, Zhi Chen, Xie Chen, Wanxiang Che
- Abstract summary: We introduce Information Flow Tracking (IF-Track) to quantify information entropy and gain at each reasoning step. We show that IF-Track captures essential reasoning features, identifies systematic error patterns, and characterizes individual differences. This approach establishes a quantitative bridge between theory and measurement, offering mechanistic insights into the architecture of reasoning.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding how information is dynamically accumulated and transformed in human reasoning has long challenged cognitive psychology, philosophy, and artificial intelligence. Existing accounts, from classical logic to probabilistic models, illuminate aspects of output or individual modelling, but do not offer a unified, quantitative description of general human reasoning dynamics. To address this, we introduce Information Flow Tracking (IF-Track), which uses large language models (LLMs) as probabilistic encoders to quantify information entropy and gain at each reasoning step. Through fine-grained analyses across diverse tasks, our method is the first to successfully model the universal landscape of human reasoning behaviors within a single metric space. We show that IF-Track captures essential reasoning features, identifies systematic error patterns, and characterizes individual differences. Applying IF-Track to advanced psychological theory, we reconcile single- versus dual-process theories, uncover alignments between artificial and human cognition, and examine how LLMs are reshaping the human reasoning process. This approach establishes a quantitative bridge between theory and measurement, offering mechanistic insights into the architecture of reasoning.
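The two quantities the abstract names, per-step entropy and information gain, can be illustrated with a minimal sketch. The step distributions below are invented for illustration and are not taken from the paper; in IF-Track itself, each distribution would come from an LLM acting as a probabilistic encoder over candidate conclusions.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example: after each reasoning step, the encoder assigns a
# probability distribution over four candidate conclusions.
step_distributions = [
    [0.25, 0.25, 0.25, 0.25],  # before reasoning: maximal uncertainty
    [0.60, 0.20, 0.10, 0.10],  # after step 1: evidence narrows the options
    [0.90, 0.05, 0.03, 0.02],  # after step 2: near-certain conclusion
]

entropies = [entropy(d) for d in step_distributions]
# Information gain of a step = reduction in entropy it produces.
gains = [entropies[i] - entropies[i + 1] for i in range(len(entropies) - 1)]

for i, (h, g) in enumerate(zip(entropies[1:], gains), start=1):
    print(f"step {i}: entropy={h:.3f} bits, gain={g:.3f} bits")
```

Tracking these two numbers across a full reasoning trace is what places each trace as a trajectory in a single metric space.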
Related papers
- HumanLLM: Towards Personalized Understanding and Simulation of Human Nature [72.55730315685837]
HumanLLM is a foundation model designed for personalized understanding and simulation of individuals. We first construct the Cognitive Genome, a large-scale corpus curated from real-world user data on platforms like Reddit, Twitter, Blogger, and Amazon. We then formulate diverse learning tasks and perform supervised fine-tuning to empower the model to predict a wide range of individualized human behaviors, thoughts, and experiences.
arXiv Detail & Related papers (2026-01-22T09:27:27Z) - Modeling Open-World Cognition as On-Demand Synthesis of Probabilistic Models [93.1043186636177]
We explore the hypothesis that people use a combination of distributed and symbolic representations to construct bespoke mental models tailored to novel situations. We propose a computational implementation of this idea, a "Model Synthesis Architecture" (MSA). We evaluate our MSA as a model of human judgments on a novel reasoning dataset.
arXiv Detail & Related papers (2025-07-16T18:01:03Z) - Evaluating AI Alignment in Eleven LLMs through Output-Based Analysis and Human Benchmarking [0.0]
Large language models (LLMs) are increasingly used in psychological research and practice, yet traditional benchmarks reveal little about the values they express in real interaction. We introduce PAPERS, an output-based evaluation of the values LLMs express.
arXiv Detail & Related papers (2025-06-14T20:14:02Z) - From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning [63.25540801694765]
Large Language Models (LLMs) demonstrate striking linguistic abilities, yet whether they balance compression and meaning the way humans do remains unclear. We apply the Information Bottleneck principle to quantitatively compare how LLMs and humans navigate this compression-meaning trade-off.
arXiv Detail & Related papers (2025-05-21T16:29:00Z) - Adaptive Token Boundaries: Integrating Human Chunking Mechanisms into Multimodal LLMs [0.0]
This research presents a systematic investigation into the parallels between human cross-modal chunking mechanisms and token representation methodologies. We propose a novel framework for dynamic cross-modal tokenization that incorporates adaptive boundaries, hierarchical representations, and alignment mechanisms grounded in cognitive science principles.
arXiv Detail & Related papers (2025-05-03T09:14:24Z) - Giving AI Personalities Leads to More Human-Like Reasoning [7.124736158080938]
We investigate the potential of AI to mimic diverse reasoning behaviors across a human population. We designed reasoning tasks using a novel generalization of the Natural Language Inference (NLI) format. We used personality-based prompting inspired by the Big Five personality model to elicit AI responses reflecting specific personality traits.
arXiv Detail & Related papers (2025-02-19T23:51:23Z) - Learning Discrete Concepts in Latent Hierarchical Models [73.01229236386148]
Learning concepts from natural high-dimensional data holds potential in building human-aligned and interpretable machine learning models. We formalize concepts as discrete latent causal variables that are related via a hierarchical causal model. We substantiate our theoretical claims with synthetic data experiments.
arXiv Detail & Related papers (2024-06-01T18:01:03Z) - Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption [0.2209921757303168]
We propose a novel program of reasoning for artificial intelligence (AI).
We show that AIs manifest an adaptive balancing of precision and efficiency, consistent with principles of resource-rational human cognition.
Our findings reveal a nuanced picture of AI cognition, where trade-offs between resources and objectives lead to the emulation of biological systems.
arXiv Detail & Related papers (2024-03-14T13:53:05Z) - Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI). This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Uncovering the Data-Related Limits of Human Reasoning Research: An Analysis based on Recommender Systems [1.7478203318226309]
Cognitive science pursues the goal of modeling human-like intelligence from a theory-driven perspective.
Syllogistic reasoning is one of the core domains of human reasoning research.
Recent analyses of models' predictive performances revealed a stagnation in improvement.
arXiv Detail & Related papers (2020-03-11T10:12:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.