Capturing Humans' Mental Models of AI: An Item Response Theory Approach
- URL: http://arxiv.org/abs/2305.09064v1
- Date: Mon, 15 May 2023 23:17:26 GMT
- Title: Capturing Humans' Mental Models of AI: An Item Response Theory Approach
- Authors: Markelle Kelly, Aakriti Kumar, Padhraic Smyth, Mark Steyvers
- Abstract summary: We show that people expect AI agents' performance to be significantly better on average than the performance of other humans.
- Score: 12.129622383429597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving our understanding of how humans perceive AI teammates is an
important foundation for our general understanding of human-AI teams. Extending
relevant work from cognitive science, we propose a framework based on item
response theory for modeling these perceptions. We apply this framework to
real-world experiments, in which each participant works alongside another
person or an AI agent in a question-answering setting, repeatedly assessing
their teammate's performance. Using this experimental data, we demonstrate the
use of our framework for testing research questions about people's perceptions
of both AI agents and other people. We contrast mental models of AI teammates
with those of human teammates as we characterize the dimensionality of these
mental models, their development over time, and the influence of the
participants' own self-perception. Our results indicate that people expect AI
agents' performance to be significantly better on average than the performance
of other humans, with less variation across different types of problems. We
conclude with a discussion of the implications of these findings for human-AI
interaction.
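For context, item response theory (IRT) models the probability that an agent answers a given item correctly as a function of the agent's latent ability and per-item parameters such as difficulty and discrimination. The Python sketch below shows the standard two-parameter logistic (2PL) form, with item parameters assumed known and ability estimated by maximum likelihood; the data, variable names, and fitting procedure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2PL IRT: probability that an agent with latent ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_ability(responses, a, b):
    """Maximum-likelihood estimate of theta, given known item
    parameters and a 0/1 response vector."""
    def nll(theta):
        p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

# Toy example: 30 items with known parameters, answered by a
# simulated teammate whose true ability is 1.0.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, 30)   # item discriminations
b = rng.normal(0.0, 1.0, 30)    # item difficulties
responses = rng.binomial(1, p_correct(1.0, a, b))
print("estimated ability:", estimate_ability(responses, a, b))
```

In the paper's setting the modeled quantity is a participant's running assessment of a teammate's performance rather than the teammate's actual responses, but the same logistic structure can represent such perceived abilities.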
Related papers
- Measuring Human Contribution in AI-Assisted Content Generation [68.03658922067487]
This study raises the research question of how to measure human contribution in AI-assisted content generation.
By calculating the mutual information between the human input and the AI-assisted output, relative to the self-information of the AI-assisted output, we quantify the proportional information contribution of humans in content generation (a plausible formalization of this ratio is sketched below).
arXiv Detail & Related papers (2024-08-27T05:56:04Z)
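Taken at face value, the measure compares the information that the human input X shares with the AI-assisted output Y against Y's total information content. Under that reading (an interpretation of the one-line summary above, not necessarily the paper's exact definition), the proportional human contribution is

\[ \frac{I(X;Y)}{H(Y)} = \frac{H(Y) - H(Y \mid X)}{H(Y)} \in [0, 1], \]

ranging from 0 when the output carries no information about the human input to 1 when the output is fully determined by it.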
- Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z)
- Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI [20.21053807133341]
We try to provide an account of what constitutes a human-aware AI system.
We see that human-aware AI is a design-oriented paradigm, one that focuses on the need to model the humans it may interact with.
arXiv Detail & Related papers (2024-05-13T14:17:52Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- On the Perception of Difficulty: Differences between Humans and AI [0.0]
A key challenge in human-AI interaction is estimating the difficulty of single task instances for both human and AI agents.
Research in this field has so far estimated the perceived difficulty for humans and for AI independently of each other, and has not yet adequately examined the differences between the two.
arXiv Detail & Related papers (2023-04-19T16:42:54Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than either the AI or the human alone; a generic sketch of this kind of mixed-initiative loop appears below.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
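The summary does not spell out BO-Muse's acquisition strategy, so the Python sketch below illustrates only the general shape of such a mixed-initiative loop: rounds alternate between a (simulated) human-proposed experiment and a Gaussian-process upper-confidence-bound suggestion on a 1-D design space. The objective, kernel, and all numbers are hypothetical.

```python
import numpy as np

def rbf(x1, x2, ls=0.2):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Gaussian-process posterior mean and variance at points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def objective(x):
    """Unknown experimental response (hidden from both agents)."""
    return np.sin(6 * x) * x

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 200)
X, y = [0.5], [objective(0.5)]   # one seed experiment

for round_ in range(8):
    if round_ % 2 == 0:
        # Human's turn: an expert hunch, simulated as a draw near a
        # region the "expert" believes is promising.
        x_next = float(np.clip(rng.normal(0.3, 0.1), 0.0, 1.0))
    else:
        # AI's turn: maximize an upper-confidence-bound acquisition.
        mu, var = gp_posterior(np.array(X), np.array(y), grid)
        x_next = float(grid[np.argmax(mu + 2.0 * np.sqrt(var))])
    X.append(x_next)
    y.append(objective(x_next))

best = int(np.argmax(y))
print(f"best design found: x = {X[best]:.3f}, value = {y[best]:.3f}")
```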
- A Mental-Model Centric Landscape of Human-AI Symbiosis [31.14516396625931]
We introduce a significantly more general version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI).
We will see how this new framework allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
arXiv Detail & Related papers (2022-02-18T22:08:08Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions [8.785345834486057]
We characterize how humans use AI suggestions relative to equivalent suggestions from a group of peer humans.
We find that participants' beliefs about human versus AI performance on a given task affect whether or not they heed the advice.
arXiv Detail & Related papers (2021-07-14T21:33:14Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance; a toy numerical illustration of this effect appears below.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
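The gap between stand-alone accuracy and team performance is easy to reproduce with toy numbers. In the sketch below (hypothetical numbers, not the paper's datasets or models), the overseer accepts the AI's answer on type-A tasks and solves type-B tasks themselves, so only the AI's type-A accuracy ever reaches the team outcome.

```python
# Hypothetical illustration of why the most accurate AI need not be
# the best teammate. The human overseer accepts the AI's
# recommendation on type-A tasks and solves type-B tasks themselves.

P_A, P_B = 0.7, 0.3        # task mix: fractions of type A and type B
HUMAN_ACC_B = 0.8          # overseer's own accuracy on type-B tasks

def overall_ai_accuracy(acc_a: float, acc_b: float) -> float:
    """Stand-alone AI accuracy across the full task mix."""
    return P_A * acc_a + P_B * acc_b

def team_accuracy(acc_a: float, acc_b: float) -> float:
    """Team accuracy: the AI is consulted only on type-A tasks."""
    return P_A * acc_a + P_B * HUMAN_ACC_B

ai_1 = (0.90, 0.90)  # uniformly strong: best stand-alone accuracy
ai_2 = (0.95, 0.70)  # weaker overall, but strongest where it is used

for name, (acc_a, acc_b) in (("AI-1", ai_1), ("AI-2", ai_2)):
    print(f"{name}: overall = {overall_ai_accuracy(acc_a, acc_b):.3f}, "
          f"team = {team_accuracy(acc_a, acc_b):.3f}")

# AI-1 has the higher stand-alone accuracy (0.900 vs 0.875), yet
# AI-2 yields the better team performance (0.905 vs 0.870).
```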