Self-Explanation in Social AI Agents
- URL: http://arxiv.org/abs/2501.13945v1
- Date: Sun, 19 Jan 2025 03:03:15 GMT
- Title: Self-Explanation in Social AI Agents
- Authors: Rhea Basappa, Mustafa Tekman, Hong Lu, Benjamin Faught, Sandeep Kakar, Ashok K. Goel
- Abstract summary: We present a method of self-explanation that uses introspection over a self-model of an AI social assistant.
The self-model is captured as a functional model that specifies how the methods of the agent use knowledge to achieve its tasks.
We evaluate the self-explanation of the AI social assistant for completeness and correctness.
- Abstract: Social AI agents interact with members of a community, thereby changing the behavior of the community. For example, in online learning, an AI social assistant may connect learners and thereby enhance social interaction. These social AI assistants too need to explain themselves in order to enhance transparency and trust with the learners. We present a method of self-explanation that uses introspection over a self-model of an AI social assistant. The self-model is captured as a functional model that specifies how the methods of the agent use knowledge to achieve its tasks. The process of generating self-explanations uses Chain of Thought to reflect on the self-model and ChatGPT to provide explanations about its functioning. We evaluate the self-explanation of the AI social assistant for completeness and correctness. We also report on its deployment in a live class.
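The abstract describes a pipeline: introspect over a functional (Task-Method-Knowledge style) self-model, reflect on it with Chain of Thought, and have ChatGPT produce the explanation. The following is a minimal, hypothetical sketch of that idea; the self-model contents, field names, and `explain` function are all assumptions for illustration, not the paper's actual implementation, and the assembled prompt stands in for what would be sent to an LLM.

```python
# Hypothetical TMK-style self-model of a social assistant, represented as
# plain dictionaries (structure assumed, not taken from the paper).
SELF_MODEL = {
    "task": "connect learners in an online class",
    "methods": [
        {
            "name": "match-by-interest",
            "knowledge": ["learner profiles", "forum posts"],
            "purpose": "pair learners with shared interests",
        },
        {
            "name": "suggest-introduction",
            "knowledge": ["matching results", "message templates"],
            "purpose": "draft an introductory message",
        },
    ],
}

def explain(question: str, model: dict) -> str:
    """Introspect over the self-model and assemble a step-by-step
    (chain-of-thought style) explanation prompt for a question."""
    steps = [f"My task is to {model['task']}."]
    for method in model["methods"]:
        steps.append(
            f"To do this I use the method '{method['name']}', which draws on "
            f"{' and '.join(method['knowledge'])} in order to {method['purpose']}."
        )
    steps.append(f"Question: {question}")
    # In a deployed system this prompt would be sent to an LLM such as
    # ChatGPT; here we simply return the assembled chain of reasoning.
    return "\n".join(steps)

print(explain("How do you decide whom to connect?", SELF_MODEL))
```

The key design point mirrored here is that the explanation is grounded in an explicit model of the agent's own tasks, methods, and knowledge, rather than generated freely by the LLM.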
Related papers
- Combining Cognitive and Generative AI for Self-explanation in Interactive AI Agents [1.1259354267881174]
This study investigates the convergence of cognitive AI and generative AI for self-explanation in interactive AI agents such as VERA.
From a cognitive AI viewpoint, we endow VERA with a functional model of its own design, knowledge, and reasoning represented in the Task--Method--Knowledge (TMK) language.
From the perspective of generative AI, we use ChatGPT, LangChain, and Chain-of-Thought to answer user questions based on the VERA TMK model.
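The VERA summary above describes answering user questions over a TMK model with ChatGPT, LangChain, and Chain-of-Thought. A minimal sketch of that pattern, under stated assumptions: the `TMK_ENTRIES` contents, the keyword-overlap `retrieve` step, and the prompt wording are all hypothetical stand-ins for the actual VERA pipeline, which would pass the prompt to an LLM chain (e.g. via LangChain).

```python
# Hypothetical fragments of a TMK model for an ecology-modeling agent
# like VERA (contents assumed for illustration).
TMK_ENTRIES = [
    {"kind": "task", "text": "simulate an ecological system"},
    {"kind": "method", "text": "run agent-based simulation over species interactions"},
    {"kind": "knowledge", "text": "population parameters such as birth and death rates"},
]

def retrieve(question: str, entries: list) -> dict:
    """Pick the entry whose words overlap most with the question
    (a toy stand-in for real retrieval over the TMK model)."""
    q_words = set(question.lower().split())
    return max(entries, key=lambda e: len(q_words & set(e["text"].lower().split())))

def build_prompt(question: str) -> str:
    """Wrap the retrieved TMK fragment in a chain-of-thought style prompt."""
    entry = retrieve(question, TMK_ENTRIES)
    return (
        f"You are an agent whose {entry['kind']} is: {entry['text']}.\n"
        f"Think step by step and answer: {question}"
    )

print(build_prompt("What knowledge about population parameters do you use?"))
```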
arXiv Detail & Related papers (2024-07-25T18:46:11Z)
- Brain-inspired and Self-based Artificial Intelligence [23.068338822392544]
"Can machines think?" and the Turing Test to assess whether machines could achieve human-level intelligence is one of the roots of AI.
This paper challenge the idea of a "thinking machine" supported by current AIs since there is no sense of self in them.
Current artificial intelligence is only seemingly intelligent information processing and does not truly understand or be subjectively aware of oneself.
arXiv Detail & Related papers (2024-02-29T01:15:17Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- AI Autonomy: Self-Initiated Open-World Continual Learning and Adaptation [16.96197233523911]
This paper proposes a framework for research on building autonomous, continual-learning-enabled AI agents.
The key challenge is how to automate the process so that it is carried out continually on the agent's own initiative.
arXiv Detail & Related papers (2022-03-17T00:07:02Z)
- Self-Initiated Open World Learning for Autonomous AI Agents [16.41396764793912]
As more and more AI agents are used in practice, it is time to think about how to make these agents fully autonomous.
This paper proposes a theoretical framework for this learning paradigm to promote research on building Self-initiated Open world Learning agents.
arXiv Detail & Related papers (2021-10-21T18:11:02Z)
- Crossing the Tepper Line: An Emerging Ontology for Describing the Dynamic Sociality of Embodied AI [0.9176056742068814]
We show how embodied AI can manifest as "socially embodied AI".
We define this as the state that embodied AI "circumstantially" takes on within interactive contexts when people perceive it as both social and agentic.
arXiv Detail & Related papers (2021-03-15T00:45:44Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Emergent Social Learning via Multi-agent Reinforcement Learning [91.57176641192771]
Social learning is a key component of human and animal intelligence.
This paper investigates whether independent reinforcement learning agents can learn to use social learning to improve their performance.
arXiv Detail & Related papers (2020-10-01T17:54:14Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect with respect to the model's judgments and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.