A Mental-Model Centric Landscape of Human-AI Symbiosis
- URL: http://arxiv.org/abs/2202.09447v1
- Date: Fri, 18 Feb 2022 22:08:08 GMT
- Title: A Mental-Model Centric Landscape of Human-AI Symbiosis
- Authors: Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati
- Abstract summary: We introduce a significantly general version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI).
We will see how this new framework allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
- Score: 31.14516396625931
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been significant recent interest in developing AI agents capable of
effectively interacting and teaming with humans. While each of these works
tries to tackle a problem central to human-AI interaction, they tend to rely
on myopic formulations that obscure the possible inter-relatedness
and complementarity of many of these works. The human-aware AI framework was a
recent effort to provide a unified account for human-AI interaction by casting
them in terms of their relationship to various mental models. Unfortunately,
the current accounts of human-aware AI are insufficient to explain the
landscape of work done in the space of human-AI interaction, due to their
focus on limited settings. In this paper, we aim to correct this shortcoming by
introducing a significantly general version of human-aware AI interaction
scheme, called generalized human-aware interaction (GHAI), that involves
(mental) models of six types. Through this paper, we will see how this new
framework allows us to capture the various works done in the space of human-AI
interaction and identify the fundamental behavioral patterns supported by these
works. We will also use this framework to identify potential gaps in the
current literature and suggest future research directions to address these
shortcomings.
Related papers
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z) - On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z) - Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
Playful interactions with AI systems naturally emerged as an important way for users to make sense of the technology.
We target this gap by investigating playful interactions exhibited by users of an emerging AI technology, ChatGPT.
Through a thematic analysis of 372 user-generated posts on the ChatGPT subreddit, we found that more than half of user discourse revolves around playful interactions.
arXiv Detail & Related papers (2024-01-16T14:44:13Z) - Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Research on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Capturing Humans' Mental Models of AI: An Item Response Theory Approach [12.129622383429597]
We show that people expect AI agents' performance to be significantly better on average than the performance of other humans.
arXiv Detail & Related papers (2023-05-15T23:17:26Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their current huge success, deep learning based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
arXiv Detail & Related papers (2021-01-17T16:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.