Goals and the Structure of Experience
- URL: http://arxiv.org/abs/2508.15013v1
- Date: Wed, 20 Aug 2025 19:05:24 GMT
- Title: Goals and the Structure of Experience
- Authors: Nadav Amir, Stas Tiomkin, Angela Langdon,
- Abstract summary: We describe a computational framework of goal-directed state representation in cognitive agents. We introduce a construct of goal-directed, or telic, states, defined as classes of goal-equivalent experience distributions.
- Score: 3.072340427031969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purposeful behavior is a hallmark of natural and artificial intelligence. Its acquisition is often believed to rely on world models, comprising both descriptive (what is) and prescriptive (what is desirable) aspects that identify and evaluate state of affairs in the world, respectively. Canonical computational accounts of purposeful behavior, such as reinforcement learning, posit distinct components of a world model comprising a state representation (descriptive aspect) and a reward function (prescriptive aspect). However, an alternative possibility, which has not yet been computationally formulated, is that these two aspects instead co-emerge interdependently from an agent's goal. Here, we describe a computational framework of goal-directed state representation in cognitive agents, in which the descriptive and prescriptive aspects of a world model co-emerge from agent-environment interaction sequences, or experiences. Drawing on Buddhist epistemology, we introduce a construct of goal-directed, or telic, states, defined as classes of goal-equivalent experience distributions. Telic states provide a parsimonious account of goal-directed learning in terms of the statistical divergence between behavioral policies and desirable experience features. We review empirical and theoretical literature supporting this novel perspective and discuss its potential to provide a unified account of behavioral, phenomenological and neural dimensions of purposeful behaviors across diverse substrates.
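The abstract's core construct can be made concrete with a small sketch: if a goal assigns a value to each experience distribution, then a telic state is an equivalence class of distributions sharing that value, and learning can be framed as reducing the divergence between a policy's experience distribution and a desirable one. This is a minimal illustration under simplifying assumptions (discrete experiences, goal value = mass on desirable experiences); the names `goal_value`, `telic_classes`, and `kl_divergence` are illustrative, not from the paper.

```python
import math
from collections import defaultdict

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete experience distributions."""
    return sum(pi * math.log((pi + eps) / (q.get(x, 0.0) + eps))
               for x, pi in p.items() if pi > 0)

def goal_value(dist, desirable):
    """Probability mass a distribution assigns to desirable experiences."""
    return round(sum(p for x, p in dist.items() if x in desirable), 6)

def telic_classes(distributions, desirable):
    """Group experience distributions into telic states: classes of
    goal-equivalent distributions (same mass on desirable experiences)."""
    classes = defaultdict(list)
    for name, dist in distributions.items():
        classes[goal_value(dist, desirable)].append(name)
    return dict(classes)

# Three policies inducing experience distributions over outcomes a, b, c;
# the goal deems only {a} desirable.
dists = {
    "policy_1": {"a": 0.5, "b": 0.3, "c": 0.2},
    "policy_2": {"a": 0.5, "b": 0.1, "c": 0.4},
    "policy_3": {"a": 0.2, "b": 0.4, "c": 0.4},
}
desirable = {"a"}
print(telic_classes(dists, desirable))  # policy_1 and policy_2 are goal-equivalent
```

Here `policy_1` and `policy_2` induce different experience distributions yet occupy the same telic state, since the goal cannot distinguish them; the KL divergence between a policy's distribution and a target distribution plays the role of the statistical divergence the abstract invokes.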
Related papers
- Agentic Reasoning for Large Language Models [122.81018455095999]
Reasoning is a fundamental cognitive process underlying inference, problem-solving, and decision-making. Large language models (LLMs) demonstrate strong reasoning capabilities in closed-world settings, but struggle in open-ended and dynamic environments. Agentic reasoning marks a paradigm shift by reframing LLMs as autonomous agents that plan, act, and learn through continual interaction.
arXiv Detail & Related papers (2026-01-18T18:58:23Z) - Goal-Directedness is in the Eye of the Beholder [48.937781898861815]
Probing for goal-directed behavior comes in two flavors: behavioral and mechanistic. We identify technical and conceptual problems that arise from formalizing goals in agent systems. We outline new directions for modeling goal-directedness as an emergent property of dynamic, multi-agent systems.
arXiv Detail & Related papers (2025-08-18T11:04:18Z) - Cognitive Science-Inspired Evaluation of Core Capabilities for Object Understanding in AI [12.186516430861882]
We present a comprehensive overview of the main theoretical frameworks in objecthood research. We evaluate how current AI paradigms approach and test objecthood capabilities compared to those in cognitive science. We find that, whilst benchmarks can detect that AI systems model isolated aspects of objecthood, the benchmarks cannot detect when AI systems lack functional integration across these capabilities.
arXiv Detail & Related papers (2025-03-27T16:35:02Z) - Revealing emergent human-like conceptual representations from language prediction [90.73285317321312]
Large language models (LLMs) trained solely through next-token prediction on text exhibit strikingly human-like behaviors. Are these models developing concepts akin to those of humans? We found that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts.
arXiv Detail & Related papers (2025-01-21T23:54:17Z) - Disentangling Representations through Multi-task Learning [0.0]
We provide experimental and theoretical results guaranteeing the emergence of disentangled representations in agents that optimally solve classification tasks. We experimentally validate these predictions in RNNs trained to multi-task, which learn disentangled representations in the form of continuous attractors. We find that transformers are particularly suited for disentangling representations, which might explain their unique world understanding abilities.
arXiv Detail & Related papers (2024-07-15T21:32:58Z) - Learning telic-controllable state representations [3.4530027457862]
We present a computational framework for state representation learning in bounded agents. We introduce the concept of telic-controllability to characterize the tradeoff between the granularity of a telic state representation and the policy complexity required to reach all telic states. Our framework highlights the role of deliberate ignorance -- knowing what to ignore -- for learning state representations that balance goal flexibility and cognitive complexity.
arXiv Detail & Related papers (2024-06-20T16:38:25Z) - On the Role of Entity and Event Level Conceptualization in Generalizable Reasoning: A Survey of Tasks, Methods, Applications, and Future Directions [62.06913340614293]
This paper proposes a categorization of different types of conceptualizations into four levels based on the types of instances being conceptualized. We present the first comprehensive survey of over 150 papers, surveying various definitions, resources, methods, and downstream applications related to conceptualization.
arXiv Detail & Related papers (2024-06-16T10:32:41Z) - A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning [48.59516337905877]
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents.
Recent work has developed theoretical insights into these algorithms.
We take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective.
arXiv Detail & Related papers (2024-06-04T07:22:12Z) - Active Inference as a Model of Agency [1.9019250262578857]
We show that any behaviour complying with physically sound assumptions about how biological agents interact with the world integrates exploration and exploitation.
This description, known as active inference, refines the free energy principle, a popular descriptive framework for action and perception originating in neuroscience.
arXiv Detail & Related papers (2024-01-23T17:09:25Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z) - Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning means to learn composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve these two difficulties in a unified framework.
arXiv Detail & Related papers (2021-12-20T21:27:51Z) - Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Models we Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation [0.0]
We advocate the development of a discipline of interacting with and extracting information from models.
We outline some directions for the development of a such a discipline.
arXiv Detail & Related papers (2021-02-23T10:52:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.