Agent Assessment of Others Through the Lens of Self
- URL: http://arxiv.org/abs/2312.11357v1
- Date: Mon, 18 Dec 2023 17:15:04 GMT
- Title: Agent Assessment of Others Through the Lens of Self
- Authors: Jasmine A. Berry
- Abstract summary: The paper argues that the quality of an autonomous agent's introspective capabilities of self is crucial to mirroring quality human-like understanding of other agents.
Ultimately, the vision set forth is not merely of machines that compute but of entities that introspect, empathize, and understand.
- Score: 1.223779595809275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The maturation of cognition, from introspection to understanding others, has
long been a hallmark of human development. This position paper posits that for
AI systems to truly emulate or approach human-like interactions, especially
within multifaceted environments populated with diverse agents, they must first
achieve an in-depth and nuanced understanding of self. Drawing parallels with
the human developmental trajectory from self-awareness to mentalizing (also
called theory of mind), the paper argues that the quality of an autonomous
agent's introspective capabilities of self is crucial to mirroring quality
human-like understanding of other agents. While counterarguments emphasize
practicality, computational efficiency, and ethical concerns, this position
proposes a development approach, blending algorithmic considerations of
self-referential processing. Ultimately, the vision set forth is not merely of
machines that compute but of entities that introspect, empathize, and
understand, harmonizing with the complex compositions of human cognition.
Related papers
- From Human to Machine Psychology: A Conceptual Framework for Understanding Well-Being in Large Language Models [0.0]
This paper introduces the concept of machine flourishing and proposes the PAPERS framework. Our findings underscore the importance of developing AI-specific models of flourishing that account for both human-aligned and system-specific priorities.
arXiv Detail & Related papers (2025-06-14T20:14:02Z) - Truly Self-Improving Agents Require Intrinsic Metacognitive Learning [59.60803539959191]
Self-improving agents aim to continuously acquire new capabilities with minimal supervision. Current approaches face key limitations: their self-improvement processes are often rigid, fail to generalize across task domains, and struggle to scale with increasing agent capabilities. We argue that effective self-improvement requires intrinsic metacognitive learning, defined as an agent's intrinsic ability to actively evaluate, reflect on, and adapt its own learning processes.
arXiv Detail & Related papers (2025-06-05T14:53:35Z) - Exploring Societal Concerns and Perceptions of AI: A Thematic Analysis through the Lens of Problem-Seeking [0.0]
This study introduces a novel conceptual framework distinguishing problem-seeking from problem-solving to clarify the unique features of human intelligence in contrast to AI. The framework emphasizes that while AI excels at efficiency and optimization, it lacks the orientation derived from grounding and the embodiment flexibility intrinsic to human cognition.
arXiv Detail & Related papers (2025-05-29T18:24:34Z) - Sensorimotor features of self-awareness in multimodal large language models [0.18415777204665024]
Self-awareness underpins intelligent, autonomous behavior. Recent advances in AI achieve human-like performance in tasks integrating multimodal information. We explore whether multimodal LLMs can develop self-awareness solely through sensorimotor experiences.
arXiv Detail & Related papers (2025-05-25T17:26:28Z) - Measurement of LLM's Philosophies of Human Nature [113.47929131143766]
We design a standardized psychological scale specifically targeting large language models (LLMs).
We show that current LLMs exhibit a systemic lack of trust in humans.
We propose a mental loop learning framework, which enables LLMs to continuously optimize their value system.
arXiv Detail & Related papers (2025-04-03T06:22:19Z) - Teleology-Driven Affective Computing: A Causal Framework for Sustained Well-Being [0.1636303041090359]
We propose a teleology-driven affective computing framework that unifies major emotion theories.
We advocate for creating a "dataverse" of personal affective events.
We introduce a meta-reinforcement learning paradigm to train agents in simulated environments.
arXiv Detail & Related papers (2025-02-24T14:07:53Z) - Autonomous Alignment with Human Value on Altruism through Considerate Self-imagination and Theory of Mind [7.19351244815121]
Altruistic behavior in human society originates from humans' capacity for empathizing with others, known as Theory of Mind (ToM).
We are committed to endowing agents with considerate self-imagination and ToM capabilities, driving them through implicit intrinsic motivations to autonomously align with human altruistic values.
arXiv Detail & Related papers (2024-12-31T07:31:46Z) - An AI Theory of Mind Will Enhance Our Collective Intelligence [1.8434042562191815]
We show that flexible collective intelligence in human social settings is improved by a particular cognitive tool: our Theory of Mind. To make this case, we consider the large-scale impact AI can have as agential actors in a 'social ecology' rather than as mere technological tools.
arXiv Detail & Related papers (2024-11-14T03:58:50Z) - Learning mental states estimation through self-observation: a developmental synergy between intentions and beliefs representations in a deep-learning model of Theory of Mind [0.35154948148425685]
Theory of Mind (ToM) is the ability to attribute beliefs, intentions, or mental states to others.
We show a developmental synergy between learning to predict low-level mental states and attributing high-level ones.
We propose that our computational approach can inform the understanding of human social cognitive development.
arXiv Detail & Related papers (2024-07-25T13:15:25Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory [8.439724621886779]
The development of Large Language Models (LLMs) provides human-centered Artificial General Intelligence (AGI) with a glimmer of hope.
Empathy serves as a key emotional attribute of humanity, playing an irreplaceable role in human-centered AGI.
In this paper, we design an innovative encoder module inspired by self-presentation theory in sociology, which specifically processes sensibility and rationality sentences in dialogues.
arXiv Detail & Related papers (2023-12-14T07:38:12Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - Conscious AI [6.061244362532694]
Recent advances in artificial intelligence have achieved human-scale speed and accuracy for classification tasks.
Current systems do not need to be conscious to recognize patterns and classify them.
For AI to progress to more complicated tasks requiring intuition and empathy, it must develop capabilities such as metathinking, creativity, and empathy akin to human self-awareness or consciousness.
arXiv Detail & Related papers (2021-05-12T15:53:44Z) - Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z) - AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
The research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.