Teaming up with information agents
- URL: http://arxiv.org/abs/2101.06133v1
- Date: Fri, 15 Jan 2021 14:26:12 GMT
- Title: Teaming up with information agents
- Authors: Jurriaan van Diggelen, Wiard Jorritsma, Bob van der Vecht
- Abstract summary: Our aim is to study how humans can collaborate with information agents.
We propose some appropriate team design patterns, and test them using our Collaborative Intelligence Analysis (CIA) tool.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the intricacies involved in designing a computer as a team partner, we
can observe patterns in team behavior which allow us to describe at a general
level how AI systems are to collaborate with humans. Whereas most work on
human-machine teaming has focused on physical agents (e.g. robotic systems),
our aim is to study how humans can collaborate with information agents. We
propose some appropriate team design patterns, and test them using our
Collaborative Intelligence Analysis (CIA) tool.
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability to understand others.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z) - The AI Collaborator: Bridging Human-AI Interaction in Educational and Professional Settings [3.506120162002989]
AI Collaborator, powered by OpenAI's GPT-4, is a groundbreaking tool designed for human-AI collaboration research.
Its standout feature is the ability for researchers to create customized AI personas for diverse experimental setups.
This functionality is essential for simulating various interpersonal dynamics in team settings.
arXiv Detail & Related papers (2024-05-16T22:14:54Z) - Human-Machine Teaming for UAVs: An Experimentation Platform [6.809734620480709]
We present the Cogment human-machine teaming experimentation platform.
It implements human-machine teaming (HMT) use cases that can involve learning AI agents, static AI agents, and humans.
We hope to facilitate further research on human-machine teaming in critical systems and defense environments.
arXiv Detail & Related papers (2023-12-18T21:35:02Z) - ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z) - Coordination with Humans via Strategy Matching [5.072077366588174]
We present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task.
By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge.
Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners.
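The pipeline this summary describes, identifying strategies from demonstrations and then routing unseen partners to a matching expert, can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: it substitutes simple action histograms and nearest-centroid k-means for the paper's hidden Markov model embeddings, and all action data is invented.

```python
import numpy as np

def featurize(actions, n_actions=4):
    """Map an action sequence to a low-dimensional histogram feature
    (stand-in for the paper's HMM-based representation)."""
    h = np.bincount(actions, minlength=n_actions).astype(float)
    return h / h.sum()

def identify_strategies(demos, k=2, iters=20):
    """Tiny k-means over sequence features; each cluster is one
    task-completion strategy observed in human-human demonstrations."""
    X = np.stack([featurize(d) for d in demos])
    centroids = X[[0, -1]].copy()  # deterministic init from two demos
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def match_strategy(observed, centroids):
    """Gating step of a Mixture-of-Experts: route an unseen partner's
    observed actions to the closest identified strategy."""
    f = featurize(observed)
    return int(np.argmin(((centroids - f) ** 2).sum(-1)))

# Toy demonstrations: one strategy favours actions {0,1}, the other {2,3}.
demos = [np.array([0, 1, 0, 1, 0]), np.array([1, 0, 1, 1, 0]),
         np.array([2, 3, 2, 2, 3]), np.array([3, 2, 3, 3, 2])]
centroids, labels = identify_strategies(demos, k=2)

partner = np.array([0, 0, 1, 0, 1])  # unseen partner acting like the first strategy
expert = match_strategy(partner, centroids)
```

In the paper, each cluster would additionally get its own learned robot policy; here `expert` is just the index the gating step would use to select that policy.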
arXiv Detail & Related papers (2022-10-27T01:00:50Z) - Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent Teaming [2.9327503320877457]
We provide a formal knowledge representation design that enables the swarm Artificial Intelligence to reason about its environment and system.
We propose the Ontology for Generalised Multi-Agent Teaming, Onto4MAT, to enable more effective teaming between humans and swarms.
arXiv Detail & Related papers (2022-03-24T09:36:50Z) - On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z) - Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration [116.28433607265573]
We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in AI agents.
In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently.
We build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning and learning based baselines.
arXiv Detail & Related papers (2020-10-19T21:48:31Z) - Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z) - Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
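The idea of optimizing combined human-machine performance can be illustrated with a minimal deferral sketch. This is an assumption-laden toy, not the paper's end-to-end learning strategy: the machine answers only when its confidence clears a threshold and otherwise defers to the human, and we pick the threshold that maximizes simulated team accuracy. All accuracies and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, size=n)  # ground-truth binary labels (simulated)

# Simulated machine: its stated confidence equals its chance of being right.
machine_conf = rng.uniform(0.5, 1.0, size=n)
machine_pred = np.where(rng.uniform(size=n) < machine_conf, y, 1 - y)

# Simulated human: uniformly 85% accurate, including on the machine's hard cases.
human_pred = np.where(rng.uniform(size=n) < 0.85, y, 1 - y)

def team_accuracy(threshold):
    """Machine answers when confident enough, otherwise defers to the human."""
    pred = np.where(machine_conf >= threshold, machine_pred, human_pred)
    return (pred == y).mean()

# Sweep thresholds; the best one trades off machine coverage against human skill.
thresholds = np.linspace(0.5, 1.0, 51)
best = max(thresholds, key=team_accuracy)
```

The paper's point is that such complementarity can be optimized jointly during training rather than bolted on with a fixed threshold; this sketch only shows why routing hard cases to the human can beat either party alone.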
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.