Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent Teaming
- URL: http://arxiv.org/abs/2203.12955v1
- Date: Thu, 24 Mar 2022 09:36:50 GMT
- Title: Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent Teaming
- Authors: Adam J. Hepworth and Daniel P. Baxter and Hussein A. Abbass
- Abstract summary: We provide a formal knowledge representation design that enables the swarm Artificial Intelligence to reason about its environment and system.
We propose the Ontology for Generalised Multi-Agent Teaming, Onto4MAT, to enable more effective teaming between humans and swarms.
- Score: 2.9327503320877457
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Research in multi-agent teaming has increased substantially over recent
years, with knowledge-based systems to support teaming processes typically
focused on delivering functional (communicative) solutions for a team to act
meaningfully in response to direction. Enabling humans to effectively interact
and team with a swarm of autonomous cognitive agents is an open research
challenge in Human-Swarm Teaming, partially due to the focus on developing the
enabling architectures to support these systems. Typically, bi-directional
transparency and shared semantic understanding between agents have not been
prioritised as designed mechanisms in Human-Swarm Teaming, potentially limiting
how a human and a swarm team can share understanding and information (data
through concepts and contexts) to achieve a goal. To address this, we provide a
formal knowledge representation design that enables the swarm Artificial
Intelligence to reason about its environment and system, ultimately achieving a
shared goal. We propose the Ontology for Generalised Multi-Agent Teaming,
Onto4MAT, to enable more effective teaming between humans and swarms through
the biologically-inspired approach of shepherding.
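The shepherding approach the abstract refers to can be made concrete with a small simulation sketch. The code below is a minimal, illustrative implementation in the style of the Strömbom shepherding model, a common reference point for swarm-shepherding work: sheep cohere to the flock centre and flee a nearby dog, while the dog alternates between collecting strays and driving the flock toward a goal. All parameter values, thresholds, and function names are assumptions for illustration and are not taken from Onto4MAT itself.

```python
# Illustrative Strombom-style shepherding loop (not the Onto4MAT implementation).
import numpy as np

rng = np.random.default_rng(0)

N_SHEEP = 20
GOAL = np.array([0.0, 0.0])
R_REPEL = 2.0       # sheep flee the dog inside this radius (assumed value)
STEP_SHEEP = 0.5
STEP_DOG = 1.0

def sheep_step(sheep, dog):
    """Each sheep moves weakly toward the flock centroid and away from a nearby dog."""
    centroid = sheep.mean(axis=0)
    new = sheep.copy()
    for i, s in enumerate(sheep):
        move = 0.05 * (centroid - s)                # weak cohesion toward the flock
        d = s - dog
        dist = np.linalg.norm(d)
        if dist < R_REPEL:                          # repulsion from the dog
            move += d / (dist + 1e-9)
        new[i] = s + STEP_SHEEP * move
    return new

def dog_step(sheep, dog):
    """Collect the furthest stray if the flock is spread out; otherwise drive it to the goal."""
    centroid = sheep.mean(axis=0)
    dists = np.linalg.norm(sheep - centroid, axis=1)
    if dists.max() > 3.0:                           # collecting: get behind the stray
        stray = sheep[dists.argmax()]
        away = (stray - centroid) / (np.linalg.norm(stray - centroid) + 1e-9)
        target = stray + 1.5 * away
    else:                                           # driving: get behind the flock
        away = (centroid - GOAL) / (np.linalg.norm(centroid - GOAL) + 1e-9)
        target = centroid + 1.5 * away
    d = target - dog
    return dog + STEP_DOG * d / (np.linalg.norm(d) + 1e-9)

sheep = rng.uniform(10.0, 20.0, size=(N_SHEEP, 2))  # flock starts away from the goal
dog = np.array([25.0, 25.0])
for _ in range(500):
    sheep = sheep_step(sheep, dog)
    dog = dog_step(sheep, dog)

print(np.linalg.norm(sheep.mean(axis=0) - GOAL))    # distance of flock centre to goal
```

The two dog behaviours (collect vs. drive) are the core of the shepherding metaphor: a single influential agent steers a reactive swarm without direct control over any individual, which is the interaction pattern Onto4MAT's knowledge representation is designed to reason about.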
Related papers
- Mutual Theory of Mind in Human-AI Collaboration: An Empirical Study with LLM-driven AI Agents in a Real-time Shared Workspace Task [56.92961847155029]
Theory of Mind (ToM) significantly impacts human collaboration and communication as a crucial capability to understand others.
Mutual Theory of Mind (MToM) arises when AI agents with ToM capability collaborate with humans.
We find that the agent's ToM capability does not significantly impact team performance but enhances human understanding of the agent.
arXiv Detail & Related papers (2024-09-13T13:19:48Z)
- CREW: Facilitating Human-AI Teaming Research [3.7324091969140776]
We introduce CREW, a platform to facilitate Human-AI teaming research and engage collaborations from multiple scientific disciplines.
It includes pre-built tasks for cognitive studies and Human-AI teaming with expandable potentials from our modular design.
CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines.
arXiv Detail & Related papers (2024-07-31T21:43:55Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Improving Grounded Language Understanding in a Collaborative Environment by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a Minecraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- ToM2C: Target-oriented Multi-agent Communication and Cooperation with Theory of Mind [18.85252946546942]
Theory of Mind (ToM) is used to build socially intelligent agents that are able to communicate and cooperate effectively.
We demonstrate the idea in two typical target-oriented multi-agent tasks: cooperative navigation and multi-sensor target coverage.
arXiv Detail & Related papers (2021-10-15T18:29:55Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- From Motor Control to Team Play in Simulated Humanoid Football [56.86144022071756]
We train teams of physically simulated humanoid avatars to play football in a realistic virtual environment.
In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements.
They then acquire mid-level football skills such as dribbling and shooting.
Finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds and coordinated team behaviour at longer timescales.
arXiv Detail & Related papers (2021-05-25T20:17:10Z)
- Teaming up with information agents [0.0]
Our aim is to study how humans can collaborate with information agents.
We propose some appropriate team design patterns, and test them using our Collaborative Intelligence Analysis (CIA) tool.
arXiv Detail & Related papers (2021-01-15T14:26:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.