A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents
- URL: http://arxiv.org/abs/2204.02889v1
- Date: Wed, 6 Apr 2022 15:15:21 GMT
- Title: A Cognitive Framework for Delegation Between Error-Prone AI and Human Agents
- Authors: Andrew Fuchs, Andrea Passarella, Marco Conti
- Abstract summary: We investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents.
The predicted behavior is used to delegate control between humans and AI agents through the use of an intermediary entity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With humans interacting with AI-based systems at an increasing rate, it is necessary to ensure that artificial systems act in a manner that reflects an understanding of the human. When humans and AI agents operate in the same environment, an agent must be able to comprehend and respond to the actions and capabilities of the human, and decisions can be delegated either to humans or to agents, depending on who is deemed more suitable at a given point in time. Such capabilities improve the responsiveness and utility of the entire human-AI system. To that end, we investigate the use of cognitively inspired models of behavior to predict the behavior of both human and AI agents. The predicted behavior, and the associated performance with respect to a certain goal, is used by an intermediary entity to delegate control between humans and AI agents. As we demonstrate, this allows the system to overcome potential shortcomings of either humans or agents in the pursuit of a goal.
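The delegation mechanism described in the abstract can be pictured with a minimal sketch: an intermediary "manager" queries a behavior model for each agent and hands control to whichever agent is predicted to perform better on the current task state. This is only an illustrative outline under stated assumptions, not the authors' implementation; the names (BehaviorModel, DelegationManager, predict_performance) are hypothetical stand-ins for the paper's cognitively inspired behavior models.

```python
# Illustrative sketch only (not the paper's implementation): an intermediary
# entity that delegates control based on predicted performance per agent.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class BehaviorModel:
    """Hypothetical wrapper around a model that predicts an agent's performance."""
    name: str
    predict_performance: Callable[[Any], float]  # task state -> expected score


class DelegationManager:
    """Intermediary that assigns control to the agent predicted to do better."""

    def __init__(self, human_model: BehaviorModel, ai_model: BehaviorModel) -> None:
        self.human_model = human_model
        self.ai_model = ai_model

    def delegate(self, state: Any) -> str:
        """Return the name of the agent to which control is delegated for `state`."""
        human_score = self.human_model.predict_performance(state)
        ai_score = self.ai_model.predict_performance(state)
        return self.human_model.name if human_score >= ai_score else self.ai_model.name


if __name__ == "__main__":
    # Toy assumption: the human handles novel situations better, the AI routine ones.
    human = BehaviorModel("human", lambda s: 0.9 if s.get("novel") else 0.6)
    ai = BehaviorModel("ai", lambda s: 0.4 if s.get("novel") else 0.95)
    manager = DelegationManager(human, ai)
    print(manager.delegate({"novel": True}))   # -> human
    print(manager.delegate({"novel": False}))  # -> ai
```

In the paper's framing the predictors would be cognitively inspired models of human and AI behavior; here they are simple stand-in functions used only to show the delegation decision rule.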
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve the performance of a human-AI team is often unclear without knowing what particular information and strategies each agent employs.
We propose a model based in statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- On the Utility of Accounting for Human Beliefs about AI Intention in Human-AI Collaboration [9.371527955300323]
We develop a model of human beliefs that captures how humans interpret and reason about their AI partner's intentions.
We create an AI agent that incorporates both human behavior and human beliefs when devising its strategy for interacting with humans.
arXiv Detail & Related papers (2024-06-10T06:39:37Z)
- Approximating Human Models During Argumentation-based Dialogues [4.178382980763478]
A key challenge in Explainable AI Planning (XAIP) is model reconciliation.
We propose a novel framework that enables AI agents to learn and update a probabilistic human model.
arXiv Detail & Related papers (2024-05-28T23:22:18Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems.
We argue that preservation of long-term agency of humans may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- Meaningful human control over AI systems: beyond talking the talk [8.351027101823705]
We identify four properties which AI-based systems must have to be under meaningful human control.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations.
Second, humans and AI agents within the system should have appropriate and mutually compatible representations.
Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system.
arXiv Detail & Related papers (2021-11-25T11:05:37Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.