Human-Machine Teaming for UAVs: An Experimentation Platform
- URL: http://arxiv.org/abs/2312.11718v1
- Date: Mon, 18 Dec 2023 21:35:02 GMT
- Title: Human-Machine Teaming for UAVs: An Experimentation Platform
- Authors: Laila El Moujtahid, Sai Krishna Gottipati, Clodéric Mars, and Matthew E. Taylor
- Abstract summary: We present the Cogment human-machine teaming experimentation platform.
It implements human-machine teaming (HMT) use cases that can involve learning AI agents, static AI agents, and humans.
We hope to facilitate further research on human-machine teaming in critical systems and defense environments.
- Score: 6.809734620480709
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Full automation is often not achievable or desirable in critical systems with
high-stakes decisions. Instead, human-AI teams can achieve better results. To
research, develop, evaluate, and validate algorithms suited for such teaming,
lightweight experimentation platforms that enable interactions between humans
and multiple AI agents are necessary. However, there are limited examples of
such platforms for defense environments. To address this gap, we present the
Cogment human-machine teaming experimentation platform, which implements
human-machine teaming (HMT) use cases that feature heterogeneous multi-agent
systems and can involve learning AI agents, static AI agents, and humans. It is
built on the Cogment platform and has been used for academic research,
including work presented at the ALA workshop at AAMAS this year [1]. With this
platform, we hope to facilitate further research on human-machine teaming in
critical systems and defense environments.
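To make "heterogeneous multi-agent systems" concrete, below is a minimal, self-contained Python sketch of the kind of team the abstract describes: one learning agent, one static (scripted) agent, and one human actor sharing a single episode loop. It does not use the Cogment SDK; the class names, the toy reward, and the console-based human actor are assumptions made purely for illustration.

```python
import random


class LearningAgent:
    """Toy epsilon-greedy learner over a small discrete action set (illustrative only)."""

    def __init__(self, actions, epsilon=0.1, lr=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.lr = lr
        self.values = {a: 0.0 for a in actions}

    def act(self, observation):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Incremental value update from the shared team reward.
        self.values[action] += self.lr * (reward - self.values[action])


class StaticAgent:
    """Fixed, scripted policy: always takes the same action."""

    def __init__(self, action):
        self.action = action

    def act(self, observation):
        return self.action


class HumanActor:
    """Human-in-the-loop actor: reads an action from the console."""

    def __init__(self, actions):
        self.actions = actions

    def act(self, observation):
        choice = input(f"Step {observation} -- pick one of {self.actions}: ")
        return choice if choice in self.actions else self.actions[0]


def run_episode(team, steps=3):
    """Run one shared episode; every actor receives the same joint reward each step."""
    for step in range(steps):
        actions = {name: actor.act(step) for name, actor in team.items()}
        # Toy joint reward: +1 for every teammate that chose "hold".
        reward = sum(1.0 for a in actions.values() if a == "hold")
        for name, actor in team.items():
            if isinstance(actor, LearningAgent):
                actor.learn(actions[name], reward)


if __name__ == "__main__":
    actions = ["hold", "move"]
    team = {
        "learner": LearningAgent(actions),
        "scripted": StaticAgent("hold"),
        "human": HumanActor(actions),
    }
    run_episode(team)
```

In the platform itself, such actors would run as separate services coordinated through Cogment rather than as in-process objects; the sketch only shows how the three kinds of actors can share one decision loop.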
Related papers
- CREW: Facilitating Human-AI Teaming Research [3.7324091969140776]
We introduce CREW, a platform to facilitate Human-AI teaming research and engage collaborations from multiple scientific disciplines.
It includes pre-built tasks for cognitive studies and Human-AI teaming, and its modular design allows them to be extended.
CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines.
arXiv Detail & Related papers (2024-07-31T21:43:55Z)
- The AI Collaborator: Bridging Human-AI Interaction in Educational and Professional Settings [3.506120162002989]
AI Collaborator, powered by OpenAI's GPT-4, is a groundbreaking tool designed for human-AI collaboration research.
Its standout feature is the ability for researchers to create customized AI personas for diverse experimental setups.
This functionality is essential for simulating various interpersonal dynamics in team settings.
arXiv Detail & Related papers (2024-05-16T22:14:54Z) - Socially Pertinent Robots in Gerontological Healthcare [78.35311825198136]
This paper attempts to partially answer that question through two waves of experiments with patients and companions at a day-care gerontological facility in Paris, using a full-sized humanoid robot endowed with social and conversational interaction capabilities.
Overall, the users are receptive to this technology, especially when the robot's perception and action skills are robust to environmental clutter and flexible enough to handle a plethora of different interactions.
arXiv Detail & Related papers (2024-04-11T08:43:37Z)
- Mixed-Initiative Human-Robot Teaming under Suboptimality with Online Bayesian Adaptation [0.6591036379613505]
We develop computational modeling and optimization techniques for enhancing the performance of suboptimal human-agent teams.
We adopt an online Bayesian approach that enables a robot to infer people's willingness to comply with its assistance in a sequential decision-making game (a minimal belief-update sketch in this style appears after this list).
Our user studies show that user preferences and team performance indeed vary with robot intervention styles.
arXiv Detail & Related papers (2024-03-24T14:38:18Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easy to integrate into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Human-Centered AI for Data Science: A Systematic Approach [48.71756559152512]
Human-Centered AI (HCAI) refers to the research effort that aims to design and implement AI techniques to support various human tasks.
We illustrate how we approach HCAI using a series of research projects around Data Science (DS) work as a case study.
arXiv Detail & Related papers (2021-10-03T21:47:13Z)
- Teaming up with information agents [0.0]
Our aim is to study how humans can collaborate with information agents.
We propose some appropriate team design patterns, and test them using our Collaborative Intelligence Analysis (CIA) tool.
arXiv Detail & Related papers (2021-01-15T14:26:12Z)
- Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning- and learning-based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance (a small worked example of this accept-or-solve setting appears after this list).
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.
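For the Mixed-Initiative Human-Robot Teaming entry above, the online Bayesian approach can be pictured as maintaining a belief over how likely a person is to comply with the robot's suggestions and updating it after every interaction. The sketch below is a minimal Beta-Bernoulli version of that idea under assumptions of my own (binary comply/ignore outcomes, a fixed intervention threshold); it is not the paper's actual model.

```python
# Minimal sketch: online Bayesian estimate of a user's compliance probability.
# Binary outcomes (complied / ignored) and the threshold below are illustrative assumptions.

class ComplianceBelief:
    """Beta-Bernoulli belief over P(user complies with the robot's suggestion)."""

    def __init__(self, prior_comply=1.0, prior_ignore=1.0):
        # Beta(alpha, beta) prior; (1, 1) is uniform.
        self.alpha = prior_comply
        self.beta = prior_ignore

    def update(self, complied: bool) -> None:
        """Conjugate update after observing one interaction."""
        if complied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        """Posterior mean of the compliance probability."""
        return self.alpha / (self.alpha + self.beta)

    def should_intervene(self, threshold: float = 0.5) -> bool:
        """Offer assistance only when the user is estimated likely to accept it."""
        return self.mean >= threshold


if __name__ == "__main__":
    belief = ComplianceBelief()
    # Simulated interaction history: the user ignores the first two suggestions.
    for complied in [False, False, True, True, True]:
        belief.update(complied)
        print(f"P(comply) ~ {belief.mean:.2f}, intervene: {belief.should_intervene()}")
```

In a sequential decision-making game this estimate would feed into the robot's choice of intervention style; that planning layer is beyond the scope of the sketch.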
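For the "Is the Most Accurate AI the Best Teammate?" entry, the accept-or-solve setting can be illustrated with a small worked example: team performance depends on both the AI's accuracy and when the human accepts its recommendation, so a slightly less accurate AI whose errors are easier to detect can produce a better team. The probabilities and the acceptance behavior below are made-up numbers for illustration, not figures from the paper.

```python
# Illustrative team-performance calculation for the accept-or-solve setting.
# All probabilities below are invented for the example.

def team_accuracy(ai_accuracy: float,
                  accept_when_ai_correct: float,
                  accept_when_ai_wrong: float,
                  human_accuracy: float) -> float:
    """Expected team accuracy when the human either accepts the AI's
    recommendation or solves the task themselves."""
    correct_branch = ai_accuracy * (
        accept_when_ai_correct * 1.0
        + (1.0 - accept_when_ai_correct) * human_accuracy
    )
    wrong_branch = (1.0 - ai_accuracy) * (
        accept_when_ai_wrong * 0.0
        + (1.0 - accept_when_ai_wrong) * human_accuracy
    )
    return correct_branch + wrong_branch


if __name__ == "__main__":
    human = 0.80
    # AI "A": more accurate, but its mistakes look confident, so the human
    # accepts wrong recommendations almost as often as correct ones.
    print("A:", team_accuracy(0.90, accept_when_ai_correct=0.9,
                              accept_when_ai_wrong=0.8, human_accuracy=human))
    # AI "B": slightly less accurate, but its mistakes are easy to spot,
    # so the human rarely accepts a wrong recommendation.
    print("B:", team_accuracy(0.85, accept_when_ai_correct=0.9,
                              accept_when_ai_wrong=0.2, human_accuracy=human))
```

With these made-up numbers the 85%-accurate AI yields higher expected team accuracy (about 0.93) than the 90%-accurate one (about 0.90), which is the qualitative point the entry makes.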