Agent Assessment of Others Through the Lens of Self
- URL: http://arxiv.org/abs/2312.11357v1
- Date: Mon, 18 Dec 2023 17:15:04 GMT
- Title: Agent Assessment of Others Through the Lens of Self
- Authors: Jasmine A. Berry
- Abstract summary: The paper argues that the quality of an autonomous agent's introspective capabilities of self is crucial in mirroring quality human-like understandings of other agents.
Ultimately, the vision set forth is not merely of machines that compute but of entities that introspect, empathize, and understand.
- Score: 1.223779595809275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The maturation of cognition, from introspection to understanding others, has
long been a hallmark of human development. This position paper posits that for
AI systems to truly emulate or approach human-like interactions, especially
within multifaceted environments populated with diverse agents, they must first
achieve an in-depth and nuanced understanding of self. Drawing parallels with
the human developmental trajectory from self-awareness to mentalizing (also
called theory of mind), the paper argues that the quality of an autonomous
agent's introspective capabilities of self is crucial in mirroring quality
human-like understandings of other agents. While counterarguments emphasize
practicality, computational efficiency, and ethical concerns, this position
proposes a development approach, blending algorithmic considerations of
self-referential processing. Ultimately, the vision set forth is not merely of
machines that compute but of entities that introspect, empathize, and
understand, harmonizing with the complex compositions of human cognition.
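
The position is stated at the level of argument rather than mechanism. As a purely illustrative sketch of the "assessment of others through the lens of self" idea, the toy Python below has an agent log its own state-action tendencies (a self-model) and then reuse that model to predict another agent's action, a simulation-theory reading of the abstract; all class and method names (SelfModel, IntrospectiveAgent, predict_other) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a simulation-theory-style agent that reuses its own
# self-model to assess another agent. Names are hypothetical, not from the paper.
from dataclasses import dataclass, field
from typing import Dict, Tuple

State = Tuple[int, int]   # toy grid position
Action = str              # "up", "down", "left", "right"

@dataclass
class SelfModel:
    """The agent's introspective record of what it would do in each state."""
    preferences: Dict[State, Action] = field(default_factory=dict)

    def observe_self(self, state: State, action: Action) -> None:
        # Introspection step: log the agent's own state-action tendencies.
        self.preferences[state] = action

    def simulate(self, state: State) -> Action:
        # "What would I do if I were in that state?"
        return self.preferences.get(state, "up")

@dataclass
class IntrospectiveAgent:
    self_model: SelfModel = field(default_factory=SelfModel)

    def act(self, state: State) -> Action:
        action = self.policy(state)
        self.self_model.observe_self(state, action)  # build the self-model online
        return action

    def policy(self, state: State) -> Action:
        # Toy policy: move toward the origin, one axis at a time.
        x, y = state
        if x != 0:
            return "left" if x > 0 else "right"
        return "down" if y > 0 else "up"

    def predict_other(self, other_state: State) -> Action:
        # Mentalizing through the lens of self: project the self-model onto the
        # other agent's observed state instead of learning a separate model.
        return self.self_model.simulate(other_state)

if __name__ == "__main__":
    me = IntrospectiveAgent()
    for s in [(2, 0), (0, 3), (1, 1)]:
        me.act(s)
    print(me.predict_other((2, 0)))  # -> "left", the action the agent itself took there
```

The point of the sketch is the design choice: the other agent is assessed with the agent's own introspective record rather than with a separately learned model of the other, which is the self-first developmental ordering the abstract argues for.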
Related papers
- Learning mental states estimation through self-observation: a developmental synergy between intentions and beliefs representations in a deep-learning model of Theory of Mind [0.35154948148425685]
Theory of Mind (ToM) is the ability to attribute beliefs, intentions, or mental states to others.
We show a developmental synergy between learning to predict low-level mental states and attributing high-level ones.
We propose that our computational approach can inform the understanding of human social cognitive development (a minimal illustrative sketch of this shared-representation idea appears after this list).
arXiv Detail & Related papers (2024-07-25T13:15:25Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory [8.439724621886779]
The development of Large Language Models (LLMs) provides human-centered Artificial General Intelligence (AGI) with a glimmer of hope.
Empathy serves as a key emotional attribute of humanity, playing an irreplaceable role in human-centered AGI.
In this paper, we design an innovative encoder module inspired by self-presentation theory in sociology, which specifically processes sensibility and rationality sentences in dialogues.
arXiv Detail & Related papers (2023-12-14T07:38:12Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Conscious AI [6.061244362532694]
Recent advances in artificial intelligence have achieved human-scale speed and accuracy for classification tasks.
Current systems do not need to be conscious to recognize patterns and classify them.
For AI to progress to more complicated tasks requiring intuition and empathy, it must develop capabilities akin to human self-awareness or consciousness, such as metathinking, creativity, and empathy.
arXiv Detail & Related papers (2021-05-12T15:53:44Z)
- Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
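
The first related paper above describes a developmental synergy between predicting low-level mental states (intentions) and attributing high-level ones (beliefs). As a minimal sketch under assumptions not stated in that summary (PyTorch, a shared encoder with two classification heads, toy dimensions), the snippet below shows one way joint training could let the intention task shape the representation used for belief attribution; it is illustrative, not the authors' architecture.

```python
# Minimal sketch (not the authors' architecture): a shared encoder with two heads,
# one for low-level mental states (intentions) and one for high-level ones (beliefs),
# so training either head also shapes the representation the other relies on.
import torch
import torch.nn as nn

class ToyTheoryOfMindNet(nn.Module):
    def __init__(self, obs_dim: int = 16, hidden: int = 32,
                 n_intentions: int = 4, n_beliefs: int = 3):
        super().__init__()
        # Shared encoder: the common representation both tasks depend on.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.intention_head = nn.Linear(hidden, n_intentions)  # low-level mental state
        self.belief_head = nn.Linear(hidden, n_beliefs)        # high-level mental state

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        return self.intention_head(h), self.belief_head(h)

# Joint training on self-observation data backpropagates both losses through the
# shared encoder, which is one way such a synergy could be realized in practice.
model = ToyTheoryOfMindNet()
obs = torch.randn(8, 16)  # a batch of observed-behavior features (toy data)
intent_logits, belief_logits = model(obs)
loss = nn.functional.cross_entropy(intent_logits, torch.randint(0, 4, (8,))) \
     + nn.functional.cross_entropy(belief_logits, torch.randint(0, 3, (8,)))
loss.backward()
```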
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.