Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks
- URL: http://arxiv.org/abs/2411.10176v1
- Date: Fri, 15 Nov 2024 13:22:04 GMT
- Title: Let people fail! Exploring the influence of explainable virtual and robotic agents in learning-by-doing tasks
- Authors: Marco Matarese, Francesco Rea, Katharina J. Rohlfing, Alessandra Sciutti
- Abstract summary: This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task.
Results indicated that partner-aware explanations influenced participants differently based on the type of artificial agents involved.
- Score: 45.23431596135002
- Abstract: Collaborative decision-making with artificial intelligence (AI) agents presents opportunities and challenges. While human-AI performance often surpasses that of individuals, the impact of such technology on human behavior remains insufficiently understood, particularly when AI agents can provide justifiable explanations for their suggestions. This study compares the effects of classic vs. partner-aware explanations on human behavior and performance during a learning-by-doing task. Three participant groups were involved: one interacting with a computer, another with a humanoid robot, and a third performing the task without assistance. Results indicated that partner-aware explanations influenced participants differently depending on the type of artificial agent involved. With the computer, participants improved their task completion times; those interacting with the humanoid robot, by contrast, were more inclined to follow its suggestions but did not get faster. Interestingly, participants who performed the learning-by-doing task autonomously demonstrated superior knowledge acquisition compared with those assisted by explainable AI (XAI). These findings raise profound questions and have significant implications for automated tutoring and human-AI collaboration.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Don't be Fooled: The Misinformation Effect of Explanations in Human-AI Collaboration [11.824688232910193]
We ran a study on AI-assisted decision-making in which humans were supported by XAI.
Our findings reveal a misinformation effect when incorrect explanations accompany correct AI advice.
This effect causes humans to infer flawed reasoning strategies, hindering task execution and demonstrating impaired procedural knowledge.
arXiv Detail & Related papers (2024-09-19T14:34:20Z)
- On the Effect of Contextual Information on Human Delegation Behavior in Human-AI Collaboration [3.9253315480927964]
We study the effects of providing contextual information on human decisions to delegate instances to an AI.
We find that providing participants with contextual information significantly improves the human-AI team performance.
This research advances the understanding of human-AI interaction in human delegation and provides actionable insights for designing more effective collaborative systems.
arXiv Detail & Related papers (2024-01-09T18:59:47Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a Minecraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
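The under- and over-reliance effects described in the entries above can be made concrete. The following is a minimal sketch, not taken from any of these papers: it assumes per-trial records of whether the AI's advice was correct and whether the participant followed it, and computes the two reliance rates. All names and data are illustrative.

```python
# Toy reliance metrics (illustrative assumption, not from the cited papers).
from dataclasses import dataclass

@dataclass
class Trial:
    ai_correct: bool      # was the AI's advice correct?
    human_followed: bool  # did the participant follow the advice?

def reliance_metrics(trials: list[Trial]) -> dict[str, float]:
    """Return under- and over-reliance rates over a set of trials."""
    correct = [t for t in trials if t.ai_correct]
    incorrect = [t for t in trials if not t.ai_correct]
    # Under-reliance: rejecting advice that was actually correct.
    under = sum(not t.human_followed for t in correct) / max(len(correct), 1)
    # Over-reliance: following advice that was actually incorrect.
    over = sum(t.human_followed for t in incorrect) / max(len(incorrect), 1)
    return {"under_reliance": under, "over_reliance": over}

trials = [Trial(True, True), Trial(True, False), Trial(False, True)]
print(reliance_metrics(trials))  # {'under_reliance': 0.5, 'over_reliance': 1.0}
```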
- Improving Human-AI Collaboration With Descriptions of AI Behavior [14.904401331154062]
People work with AI systems to improve their decision making, but they often under- or over-rely on AI predictions and perform worse than they would unassisted.
To help people appropriately rely on AI aids, we propose showing them behavior descriptions.
arXiv Detail & Related papers (2023-01-06T00:33:08Z)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide curriculum reinforcement learning towards a preferred performance level, neither too hard nor too easy, by learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows that reinforcement learning performance can successfully adjust in sync with the human-desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z)
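As an illustration of the difficulty-adjustment idea in the curriculum entry above, here is a minimal sketch. It is an assumption, not the paper's algorithm: a proportional controller that nudges a scalar difficulty toward a target success rate, so the curriculum stays neither too hard nor too easy. The function name, target, and gain are hypothetical.

```python
# Toy difficulty controller (assumption, not the paper's method).
def adjust_difficulty(difficulty: float, success_rate: float,
                      target: float = 0.7, gain: float = 0.5) -> float:
    """Raise difficulty when the agent succeeds too often, lower it otherwise."""
    difficulty += gain * (success_rate - target)
    return min(max(difficulty, 0.0), 1.0)  # clamp to [0, 1]

# Example: after an evaluation window with a 90% success rate,
# the next batch of episodes gets noticeably harder.
difficulty = 0.4
difficulty = adjust_difficulty(difficulty, success_rate=0.9)
print(difficulty)  # ~0.5 (difficulty increases)
```

In the paper's setting, the target level itself would be inferred from human decisions rather than fixed as a constant.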
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
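The complementarity idea in the last entry can be illustrated with a toy example. This is a hedged sketch, not the paper's end-to-end method: it uses synthetic data and a simple grid search over a deferral threshold, deferring to the human whenever model confidence is low, so that the optimized quantity is team accuracy rather than model accuracy alone.

```python
# Toy deferral example (synthetic data; illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
conf = rng.uniform(0.5, 1.0, n)    # model confidence per example
model_ok = rng.random(n) < conf    # model is right more often when confident
human_ok = rng.random(n) < 0.8     # human is right 80% of the time

def team_accuracy(threshold: float) -> float:
    """Defer to the human whenever model confidence is below the threshold."""
    defer = conf < threshold
    return float(np.where(defer, human_ok, model_ok).mean())

# Grid-search the deferral threshold that maximizes team accuracy.
thresholds = np.linspace(0.5, 1.0, 51)
best = max(thresholds, key=team_accuracy)
print(f"best threshold {best:.2f}, team accuracy {team_accuracy(best):.3f}")
```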
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.