Meta-learning ecological priors from large language models explains human learning and decision making
- URL: http://arxiv.org/abs/2509.00116v2
- Date: Wed, 03 Sep 2025 03:16:09 GMT
- Title: Meta-learning ecological priors from large language models explains human learning and decision making
- Authors: Akshay K. Jagadish, Mirko Thalmann, Julian Coda-Forno, Marcel Binz, Eric Schulz
- Abstract summary: We introduce ecologically rational analysis, a computational framework that unifies the normative foundations of rational analysis with ecological grounding. We develop a new class of learning algorithms: Ecologically Rational Meta-learned Inference (ERMI). ERMI internalizes the statistical regularities of naturalistic problem spaces and adapts flexibly to novel situations. Our results suggest that much of human cognition may reflect adaptive alignment to the ecological structure of the problems we encounter in everyday life.
- Score: 24.65158566183862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human cognition is profoundly shaped by the environments in which it unfolds. Yet, it remains an open question whether learning and decision making can be explained as a principled adaptation to the statistical structure of real-world tasks. We introduce ecologically rational analysis, a computational framework that unifies the normative foundations of rational analysis with ecological grounding. Leveraging large language models to generate ecologically valid cognitive tasks at scale, and using meta-learning to derive rational models optimized for these environments, we develop a new class of learning algorithms: Ecologically Rational Meta-learned Inference (ERMI). ERMI internalizes the statistical regularities of naturalistic problem spaces and adapts flexibly to novel situations, without requiring hand-crafted heuristics or explicit parameter updates. We show that ERMI captures human behavior across 15 experiments spanning function learning, category learning, and decision making, outperforming several established cognitive models in trial-by-trial prediction. Our results suggest that much of human cognition may reflect adaptive alignment to the ecological structure of the problems we encounter in everyday life.
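The abstract's core claim, that an inference model whose prior is matched to the statistics of its task environment outperforms one with a generic prior, can be illustrated with a toy stand-in. The sketch below is not the paper's ERMI pipeline (which uses LLM-generated tasks and meta-learned neural networks); it replaces both with a conjugate Bayesian slope estimator on synthetic linear tasks, and all names and parameters are hypothetical choices for illustration only.

```python
import random
import statistics

random.seed(0)

# Stand-in for an "ecological" task distribution (in the paper, tasks are
# LLM-generated; here, each task is a noisy linear function y = w*x).
PRIOR_MEAN, PRIOR_SD, NOISE_SD = 2.0, 0.5, 0.3

def sample_task(n_points=3):
    """Draw a slope from the ecological prior and a few noisy observations."""
    w = random.gauss(PRIOR_MEAN, PRIOR_SD)
    xs = [random.uniform(-1.0, 1.0) for _ in range(n_points)]
    ys = [w * x + random.gauss(0.0, NOISE_SD) for x in xs]
    return w, xs, ys

def posterior_mean_slope(xs, ys, prior_mean, prior_var, noise_var):
    """Conjugate Gaussian update for y = w*x with known noise variance."""
    precision = 1.0 / prior_var + sum(x * x for x in xs) / noise_var
    weighted = prior_mean / prior_var + sum(x * y for x, y in zip(xs, ys)) / noise_var
    return weighted / precision

def eval_prior(prior_mean, prior_var, n_tasks=2000):
    """Mean squared error of the posterior-mean estimate across many tasks."""
    errs = []
    for _ in range(n_tasks):
        w, xs, ys = sample_task()
        w_hat = posterior_mean_slope(xs, ys, prior_mean, prior_var, NOISE_SD ** 2)
        errs.append((w_hat - w) ** 2)
    return statistics.mean(errs)

# A prior matched to the task distribution vs. a weak, generic prior.
ecological = eval_prior(PRIOR_MEAN, PRIOR_SD ** 2)
flat = eval_prior(0.0, 100.0)
print(f"ecological prior MSE: {ecological:.3f}")
print(f"flat prior MSE:       {flat:.3f}")
```

Because the "ecological" estimator uses the true task-generating prior, it is Bayes-optimal for this toy environment and achieves lower error than the flat-prior estimator, mirroring the paper's argument that internalizing environmental statistics improves inference.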
Related papers
- Human Simulation Computation: A Human-Inspired Framework for Adaptive AI Systems [0.11844977816228043]
Human Simulation Computation (HSC) models intelligence as a continuous, closed-loop process involving thinking, action, learning, reflection, and activity scheduling. HSC incorporates commonly used human thinking strategies across all stages of the internal reasoning process. Through theoretical analysis, we argue that human simulation strategies cannot be fully learned from language material alone.
arXiv Detail & Related papers (2026-01-20T12:00:04Z) - Automatic Adaptation to Concept Complexity and Subjective Natural Concepts: A Cognitive Model based on Chunking [45.88028371034407]
We show how the CogAct computational model grounds concept learning in cognitive processes and structures. We offer novel ways of designing human benchmarks for concept learning experiments and simulations. Our approach may also be used in psychological applications that move away from modelling the average participant.
arXiv Detail & Related papers (2025-12-21T09:43:20Z) - The Imperfect Learner: Incorporating Developmental Trajectories in Memory-based Student Simulation [55.722188569369656]
This paper introduces a novel framework for memory-based student simulation. It incorporates developmental trajectories through a hierarchical memory mechanism with structured knowledge representation. In practice, we implement a curriculum-aligned simulator grounded on the Next Generation Science Standards.
arXiv Detail & Related papers (2025-11-08T08:05:43Z) - The Universal Landscape of Human Reasoning [60.72403709545137]
We introduce Information Flow Tracking (IF-Track) to quantify information entropy and gain at each reasoning step. We show that IF-Track captures essential reasoning features, identifies systematic error patterns, and characterizes individual differences. This approach establishes a quantitative bridge between theory and measurement, offering mechanistic insights into the architecture of reasoning.
arXiv Detail & Related papers (2025-10-24T16:26:36Z) - The Physical Basis of Prediction: World Model Formation in Neural Organoids via an LLM-Generated Curriculum [0.0]
We present a curriculum of three scalable, closed-loop virtual environments designed to train human neural organoids. We detail the design of three distinct task environments that demand progressively more sophisticated world models for successful decision-making. This work bridges the gap between model-based reinforcement learning and computational neuroscience, offering a unique platform for studying embodiment, decision-making, and the physical basis of intelligence.
arXiv Detail & Related papers (2025-09-04T19:51:00Z) - LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [74.0242521818214]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning. We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines. We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z) - Predictive Learning in Energy-based Models with Attractor Structures [5.542697199599134]
We introduce a framework that employs an energy-based model (EBM) to capture the nuanced processes of predicting observations after actions within the neural system. In experimental evaluations, our model demonstrates efficacy across diverse scenarios.
arXiv Detail & Related papers (2025-01-23T11:04:25Z) - Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z) - "What's my model inside of?": Exploring the role of environments for grounded natural language understanding [1.8829370712240063]
In this thesis we adopt an ecological approach to grounded natural language understanding (NLU) research.
We develop novel training and annotation approaches for procedural text understanding based on text-based game environments.
We propose a design for AI-augmented "social thinking environments" for knowledge workers like scientists.
arXiv Detail & Related papers (2024-02-04T15:52:46Z) - Human-like Category Learning by Injecting Ecological Priors from Large Language Models into Neural Networks [8.213829427624407]
We develop a class of models called ecologically rational meta-learned inference (ERMI).
ERMI quantitatively explains human data better than seven other cognitive models in two different experiments.
We show that ERMI's ecologically valid priors allow it to achieve state-of-the-art performance on the OpenML-CC18 classification benchmark.
arXiv Detail & Related papers (2024-02-02T16:32:04Z) - FREE: The Foundational Semantic Recognition for Modeling Environmental Ecosystems [56.0640340392818]
We introduce a framework, FREE, that enables the use of varying features and available information to train a universal model. The core idea is to map available environmental data into a text space and then convert the traditional predictive modeling task in environmental science to a semantic recognition problem. Our evaluation on two societally important real-world applications, stream water temperature prediction and crop yield prediction, demonstrates the superiority of FREE over multiple baselines.
arXiv Detail & Related papers (2023-11-17T00:53:09Z) - A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to address the demands of high-performance machine learning models with environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
arXiv Detail & Related papers (2023-07-01T15:18:00Z) - CogReact: A Reinforced Framework to Model Human Cognitive Reaction Modulated by Dynamic Intervention [11.149593958041937]
We propose CogReact, integrating drift-diffusion modelling with deep reinforcement learning to simulate the granular effects of dynamic environmental stimuli on human cognitive processes. It improves cognitive modelling by considering the temporal effect of environmental stimuli on cognitive processes, and captures both subject-specific and stimuli-specific behavioural differences. Overall, it demonstrates a powerful, data-driven methodology to simulate, align with, and understand the vagaries of human cognitive response in dynamic contexts.
arXiv Detail & Related papers (2023-01-15T23:46:37Z) - Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
arXiv Detail & Related papers (2021-01-26T17:11:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.