Human-AI Collaboration in Real-World Complex Environment with
Reinforcement Learning
- URL: http://arxiv.org/abs/2312.15160v1
- Date: Sat, 23 Dec 2023 04:27:24 GMT
- Title: Human-AI Collaboration in Real-World Complex Environment with
Reinforcement Learning
- Authors: Md Saiful Islam, Srijita Das, Sai Krishna Gottipati, William Duguay,
Clodéric Mars, Jalal Arabneydi, Antoine Fagette, Matthew Guzdial,
Matthew E. Taylor
- Abstract summary: We show that learning from humans is effective and that human-AI collaboration outperforms human-controlled and fully autonomous AI agents.
We develop a user interface to allow humans to assist AI agents effectively.
- Score: 8.465957423148657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in reinforcement learning (RL) and Human-in-the-Loop
(HitL) learning have made it easier for humans to team with AI agents.
Leveraging human expertise and experience with AI in intelligent
systems can be efficient and beneficial. Still, it is unclear to what extent
human-AI collaboration will be successful, and how such teaming performs
compared to humans or AI agents only. In this work, we show that learning from
humans is effective and that human-AI collaboration outperforms
human-controlled and fully autonomous AI agents in a complex simulation
environment. In addition, we have developed a new simulator for critical
infrastructure protection, focusing on a scenario where AI-powered drones and
human teams collaborate to defend an airport against enemy drone attacks. We
develop a user interface that allows humans to assist AI agents effectively. We
demonstrate that agents learn faster from policy correction than from learning
from humans or from other agents alone. Furthermore, human-AI collaboration
requires lower mental and temporal demands, reduces human effort, and yields
higher performance than if humans directly controlled all agents. In
conclusion, we show that humans can provide helpful advice to the RL agents,
allowing them to improve learning in a multi-agent setting.
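The "learning from policy correction" mechanism in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's airport-defense simulator: the corridor environment, all hyperparameters, and the simulated overseer (which simply advises the known-good action with some probability) are illustrative assumptions. The point is only that corrected actions feed the same Q-learning update as the agent's own actions.

```python
import random

def train(n_states=8, episodes=200, alpha=0.5, gamma=0.9,
          epsilon=0.2, correction_prob=0.5, seed=0):
    """Tabular Q-learning on a 1-D corridor whose goal is the rightmost
    state. A simulated human overseer overrides the agent's chosen action
    with the known-good action ('right') with probability correction_prob;
    the corrected action is what gets executed and learned from."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Agent proposes an action (epsilon-greedy on its Q-values).
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            # Human-in-the-loop policy correction: override the proposal.
            if rng.random() < correction_prob:
                a = 1  # the overseer knows moving right is optimal here
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_rollout(q, n_states=8, max_steps=50):
    """Follow the learned greedy policy; return final state and step count."""
    s, steps = 0, 0
    while s != n_states - 1 and steps < max_steps:
        a = max((0, 1), key=lambda x: q[s][x])
        s = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        steps += 1
    return s, steps

q = train()
state, steps = greedy_rollout(q)
print(state, steps)  # the greedy policy should reach the goal state
```

Because corrections place the agent on good trajectories early, reward information propagates back from the goal far sooner than under pure exploration, which is the intuition behind the abstract's faster-learning claim.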
Related papers
- CREW: Facilitating Human-AI Teaming Research [3.7324091969140776]
We introduce CREW, a platform to facilitate Human-AI teaming research and engage collaborations from multiple scientific disciplines.
It includes pre-built tasks for cognitive studies and Human-AI teaming with expandable potentials from our modular design.
CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines.
arXiv Detail & Related papers (2024-07-31T21:43:55Z)
- On the Utility of Accounting for Human Beliefs about AI Behavior in Human-AI Collaboration [9.371527955300323]
We develop a model of human beliefs that accounts for how humans reason about the behavior of their AI partners.
We then develop an AI agent that considers both human behavior and human beliefs when devising its strategy for working with humans.
arXiv Detail & Related papers (2024-06-10T06:39:37Z)
- When combinations of humans and AI are useful: A systematic review and meta-analysis [0.0]
We conducted a meta-analysis of over 100 recent studies reporting over 300 effect sizes.
We found that, on average, human-AI combinations performed significantly worse than the best of humans or AI alone.
arXiv Detail & Related papers (2024-05-09T20:23:15Z)
- Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems [10.746728034149989]
Research and application have used human-AI teaming (HAT) as a new paradigm to develop AI systems.
We propose and elaborate on a conceptual framework of human-AI joint cognitive systems (HAIJCS) to represent and implement HAT.
arXiv Detail & Related papers (2023-07-08T06:26:38Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide the curriculum reinforcement learning results towards a preferred performance level that is neither too hard nor too easy via learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows reinforcement learning performance can successfully adjust in sync with the human desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Artificial Intelligence & Cooperation [38.19500588776648]
The rise of Artificial Intelligence will bring with it an ever-increasing willingness to cede decision-making to machines.
But rather than just giving machines the power to make decisions that affect us, we need ways to work cooperatively with AI systems.
With success, cooperation between humans and AIs can build society just as human-human cooperation has.
arXiv Detail & Related papers (2020-12-10T23:54:31Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
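The accept-or-solve teaming scheme described in the "Is the Most Accurate AI the Best Teammate?" entry can be illustrated with a small simulation. All accuracies, confidence values, and the acceptance threshold below are made-up numbers, not the paper's data; the sketch only shows why a less accurate but better-calibrated AI can yield a stronger team.

```python
import random

def team_accuracy(ai_conf_acc, human_acc=0.9, threshold=0.75,
                  trials=10000, seed=0):
    """Simulate a deferral-style team: the human accepts the AI's answer
    when the AI's reported confidence meets the threshold, and otherwise
    solves the task alone. ai_conf_acc is a list of (confidence, accuracy)
    cases, sampled uniformly; all numbers are illustrative."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        conf, acc = rng.choice(ai_conf_acc)
        if conf >= threshold:        # human accepts the AI recommendation
            correct += rng.random() < acc
        else:                        # human solves the task themselves
            correct += rng.random() < human_acc
    return correct / trials

# AI "A": higher overall accuracy (80%) but confident even when wrong,
# so the human never takes over.
ai_a = [(0.9, 0.80), (0.9, 0.80)]
# AI "B": lower overall accuracy (~68%) but well calibrated: it is only
# confident on cases it is likely to get right.
ai_b = [(0.95, 0.95), (0.50, 0.40)]

acc_a = team_accuracy(ai_a)
acc_b = team_accuracy(ai_b)
print(acc_a, acc_b)  # team with the less accurate but calibrated AI wins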
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.