On Computational Mechanisms for Shared Intentionality, and Speculation
on Rationality and Consciousness
- URL: http://arxiv.org/abs/2306.13657v2
- Date: Thu, 29 Jun 2023 17:54:06 GMT
- Title: On Computational Mechanisms for Shared Intentionality, and Speculation
on Rationality and Consciousness
- Authors: John Rushby
- Abstract summary: A singular attribute of humankind is our ability to undertake novel, cooperative behavior, or teamwork.
This requires that we can communicate goals, plans, and ideas between the brains of individuals to create shared intentionality.
I derive necessary characteristics of basic mechanisms to enable shared intentionality between prelinguistic computational agents.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A singular attribute of humankind is our ability to undertake novel,
cooperative behavior, or teamwork. This requires that we can communicate goals,
plans, and ideas between the brains of individuals to create shared
intentionality. Using the information processing model of David Marr, I derive
necessary characteristics of basic mechanisms to enable shared intentionality
between prelinguistic computational agents and indicate how these could be
implemented in present-day AI-based robots.
More speculatively, I suggest the mechanisms derived by this thought
experiment apply to humans and extend to provide explanations for human
rationality and aspects of intentional and phenomenal consciousness that accord
with observation. This yields what I call the Shared Intentionality First
Theory (SIFT) for rationality and consciousness.
The significance of shared intentionality has been recognized and advocated
previously, but typically from a sociological or behavioral point of view. SIFT
complements prior work by applying a computer science perspective to the
underlying mechanisms.
Related papers
- The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z)
- Heuristic Reasoning in AI: Instrumental Use and Mimetic Absorption [0.2209921757303168]
We propose a novel program of reasoning for artificial intelligence (AI).
We show that AIs manifest an adaptive balancing of precision and efficiency, consistent with principles of resource-rational human cognition.
Our findings reveal a nuanced picture of AI cognition, where trade-offs between resources and objectives lead to the emulation of biological systems.
arXiv Detail & Related papers (2024-03-14T13:53:05Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
Next generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
arXiv Detail & Related papers (2022-05-13T07:33:49Z)
- Learning Theory of Mind via Dynamic Traits Attribution [59.9781556714202]
We propose a new neural ToM architecture that learns to generate a latent trait vector of an actor from the past trajectories.
This trait vector then multiplicatively modulates the prediction mechanism via a fast weights' scheme in the prediction neural network.
We empirically show that the fast weights provide a good inductive bias to model the character traits of agents and hence improves mindreading ability.
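The trait-modulated prediction described above can be sketched as follows. The encoder, the dimensions, and the single gated layer are illustrative assumptions for this summary, not the paper's actual architecture: the point is only that a per-actor trait vector multiplicatively gates the activations of a prediction network, acting as "fast weights" on top of the slowly learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_trait(past_trajectory):
    """Illustrative trait encoder: summarize an actor's past states
    into a fixed-size latent trait vector (here, a simple mean)."""
    return past_trajectory.mean(axis=0)

def predict_action(observation, trait, W_slow):
    """Prediction network whose hidden activations are multiplicatively
    modulated by the trait vector (a fast-weights-style gating)."""
    hidden = np.tanh(W_slow @ observation)  # slow (learned) weights
    gated = hidden * trait                  # fast, per-actor modulation
    return gated.sum()                      # scalar action score

# Toy dimensions: 4-d observations, 4-d hidden/trait space.
trajectory = rng.normal(size=(10, 4))       # actor's past states
trait = encode_trait(trajectory)            # latent trait vector
W_slow = rng.normal(size=(4, 4))
score = predict_action(rng.normal(size=4), trait, W_slow)
```

Because the trait enters multiplicatively, two actors with different histories produce different predictions from the same slow weights, which is the inductive bias the paper credits for improved mindreading.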
arXiv Detail & Related papers (2022-04-17T11:21:18Z)
- Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
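The state-estimation step mentioned above can be illustrated with a minimal two-state discrete model; the likelihood matrix and priors here are invented for illustration, and a full active-inference implementation would minimize variational free energy rather than perform only this exact Bayesian update.

```python
import numpy as np

# P(obs | state): rows index observations, columns index hidden states.
likelihood = np.array([[0.9, 0.1],
                       [0.1, 0.9]])
prior = np.array([0.5, 0.5])  # initial belief over 2 hidden states

def update_belief(belief, obs):
    """Bayesian posterior over hidden states given one observation."""
    posterior = likelihood[obs] * belief
    return posterior / posterior.sum()

belief = update_belief(prior, obs=1)  # observing outcome 1 shifts belief
```

Under this toy model, observing outcome 1 moves the belief in state 1 from 0.5 to 0.9; action selection in active inference would then choose actions expected to keep such beliefs close to preferred outcomes.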
arXiv Detail & Related papers (2021-12-03T12:10:26Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- A Neurocomputational Account of Flexible Goal-directed Cognition and Consciousness: The Goal-Aligning Representation Internal Manipulation Theory (GARIM) [0.9669369645900444]
Goal-directed manipulation of representations is a key element of human flexible behaviour.
The GARIM theory integrates key aspects of the main theories of consciousness into a functional neuro-computational framework of goal-directed behaviour.
The proposal has implications for experimental studies of consciousness and for clinical aspects of conscious goal-directed behaviour.
arXiv Detail & Related papers (2019-12-31T18:45:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.