Towards human-agent knowledge fusion (HAKF) in support of distributed
coalition teams
- URL: http://arxiv.org/abs/2010.12327v1
- Date: Fri, 23 Oct 2020 12:10:40 GMT
- Title: Towards human-agent knowledge fusion (HAKF) in support of distributed
coalition teams
- Authors: Dave Braines, Federico Cerutti, Marc Roig Vilamala, Mani Srivastava,
Lance Kaplan, Alun Preece, Gavin Pearson
- Abstract summary: Future coalition operations can be augmented through agile teaming between human and machine agents.
In such a setting it is essential that the human agents can rapidly build trust in the machine agents.
We show how HAKF has the potential to bring value to both human and machine agents working as part of a distributed coalition team.
- Score: 3.939142878694769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Future coalition operations can be substantially augmented through agile
teaming between human and machine agents, but in a coalition context these
agents may be unfamiliar to the human users and expected to operate in a broad
set of scenarios rather than being narrowly defined for particular purposes. In
such a setting it is essential that the human agents can rapidly build trust in
the machine agents through appropriate transparency of their behaviour, e.g.,
through explanations. The human agents are also able to bring their local
knowledge to the team, observing the situation unfolding and deciding which key
information should be communicated to the machine agents to enable them to
better account for the particular environment. In this paper we describe the
initial steps towards this human-agent knowledge fusion (HAKF) environment
through a recap of the key requirements, and an explanation of how these can be
fulfilled for an example situation. We show how HAKF has the potential to bring
value to both human and machine agents working as part of a distributed
coalition team in a complex event processing setting with uncertain sources.
Related papers
- Operational Collective Intelligence of Humans and Machines [7.8074313693407635]
We explore the use of aggregative crowdsourced forecasting (ACF) as a mechanism to help operationalize collective intelligence.
This research asks whether ACF, as a key way to enable Operational Collective Intelligence, could be brought to bear on operational scenarios.
arXiv Detail & Related papers (2024-02-16T22:45:09Z)
- Tell Me More! Towards Implicit User Intention Understanding of Language Model Driven Agents [110.25679611755962]
Current language model-driven agents often lack mechanisms for effective user participation, which is crucial given the vagueness commonly found in user instructions.
We introduce Intention-in-Interaction (IN3), a novel benchmark designed to inspect users' implicit intentions through explicit queries.
We empirically train Mistral-Interact, a powerful model that proactively assesses task vagueness, inquires user intentions, and refines them into actionable goals.
arXiv Detail & Related papers (2024-02-14T14:36:30Z)
- AgentCF: Collaborative Learning with Autonomous Language Agents for Recommender Systems [112.76941157194544]
We propose AgentCF for simulating user-item interactions in recommender systems through agent-based collaborative filtering.
We creatively consider not only users but also items as agents, and develop a collaborative learning approach that optimizes both kinds of agents together.
Overall, the optimized agents exhibit diverse interaction behaviors within our framework, including user-item, user-user, item-item, and collective interactions.
arXiv Detail & Related papers (2023-10-13T16:37:14Z)
- Optimizing delegation between human and AI collaborative agents [1.6114012813668932]
We train a delegating manager agent to make delegation decisions with respect to potential performance deficiencies.
Our framework learns through observations of team performance without restricting agents to matching dynamics.
Our results show our manager learns to perform delegation decisions with teams of agents operating under differing representations of the environment.
arXiv Detail & Related papers (2023-09-26T07:23:26Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI)
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans [13.637799815698559]
We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action.
We show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
arXiv Detail & Related papers (2023-07-07T03:05:34Z)
- Flexible and Inherently Comprehensible Knowledge Representation for Data-Efficient Learning and Trustworthy Human-Machine Teaming in Manufacturing Environments [0.0]
Trustworthiness of artificially intelligent agents is vital for the acceptance of human-machine teaming in industrial manufacturing environments.
We make use of Gärdenfors's cognitively inspired Conceptual Space framework to represent the agent's knowledge.
A simple typicality model is built on top of it to determine fuzzy category membership and classify instances interpretably.
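The typicality idea can be sketched as follows. This is a hypothetical illustration only, not the paper's implementation: the exponential decay with distance from a category prototype, the decay constant `c`, and the example prototypes are all assumptions.

```python
import math

def typicality(point, prototype, c=1.0):
    """Fuzzy category membership in a conceptual space: decays
    exponentially with Euclidean distance from the prototype."""
    d = math.dist(point, prototype)
    return math.exp(-c * d)

def classify(point, prototypes):
    """Assign the category whose prototype the point is most typical of."""
    return max(prototypes, key=lambda cat: typicality(point, prototypes[cat]))

# Toy 2-D conceptual space with two category prototypes
prototypes = {"ripe": (0.9, 0.8), "unripe": (0.2, 0.3)}
print(classify((0.8, 0.7), prototypes))  # closer to the "ripe" prototype
```

Because the membership score is a continuous value in (0, 1], the classification can be reported with a graded confidence, which is what makes the result interpretable rather than a bare label.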
arXiv Detail & Related papers (2023-05-19T11:18:23Z)
- Information is Power: Intrinsic Control via Information Capture [110.3143711650806]
We argue that a compact and general learning objective is to minimize the entropy of the agent's state visitation estimated using a latent state-space model.
This objective induces an agent to both gather information about its environment, corresponding to reducing uncertainty, and to gain control over its environment, corresponding to reducing the unpredictability of future world states.
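In symbols, the objective described here can be written roughly as follows (our notation, not necessarily the paper's):

```latex
% Minimize the entropy of the policy's state visitation distribution,
% estimated via a latent state-space model q(s):
\min_{\pi} \; \mathcal{H}\big(p_{\pi}(s)\big)
  \;\approx\; \min_{\pi} \; -\,\mathbb{E}_{s \sim p_{\pi}}\big[\log q(s)\big]
```

Reducing this entropy pushes the agent toward world states it can predict and revisit, which is the sense in which information capture yields control.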
arXiv Detail & Related papers (2021-12-07T18:50:42Z)
- CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce "agency", such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z)
- Contextual and Possibilistic Reasoning for Coalition Formation [0.9790236766474201]
This article addresses the question of how to find and evaluate coalitions among agents in multiagent systems.
We first compute the solution space for the formation of coalitions using a contextual reasoning approach.
Second, we model agents as contexts in Multi-Context Systems (MCS), and dependence relations among agents seeking to achieve their goals, as bridge rules.
Third, we systematically compute all potential coalitions using algorithms for MCS equilibria, and given a set of functional and non-functional requirements, we propose ways to select the best solutions.
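As a toy illustration of the dependence-relation idea, the sketch below enumerates coalitions that are internally self-sufficient. The agents, capabilities, and feasibility rule are invented for the example; the paper's actual machinery computes coalitions via MCS equilibria and bridge rules.

```python
from itertools import combinations

# Toy model: each agent's goal depends on capabilities that
# other agents provide. A coalition is feasible if every member's
# needs are covered by capabilities offered within the coalition.
needs = {"a": {"lift"}, "b": {"sense"}, "c": set()}
provides = {"a": {"sense"}, "b": {"lift"}, "c": {"lift", "sense"}}

def feasible(coalition):
    offered = set().union(*(provides[x] for x in coalition))
    return all(needs[x] <= offered for x in coalition)

def potential_coalitions(agents):
    """Systematically enumerate all feasible coalitions."""
    return [set(c) for r in range(1, len(agents) + 1)
            for c in combinations(agents, r) if feasible(c)]

print(potential_coalitions(["a", "b", "c"]))
```

Selecting the "best" among the feasible coalitions would then be a separate step, e.g. ranking them against functional and non-functional requirements as the paper proposes.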
arXiv Detail & Related papers (2020-06-19T11:59:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.