Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent
Teaming
- URL: http://arxiv.org/abs/2203.12955v1
- Date: Thu, 24 Mar 2022 09:36:50 GMT
- Title: Onto4MAT: A Swarm Shepherding Ontology for Generalised Multi-Agent
Teaming
- Authors: Adam J. Hepworth and Daniel P. Baxter and Hussein A. Abbass
- Abstract summary: We provide a formal knowledge representation design that enables the swarm Artificial Intelligence to reason about its environment and system.
We propose the Ontology for Generalised Multi-Agent Teaming, Onto4MAT, to enable more effective teaming between humans and swarms.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Research in multi-agent teaming has increased substantially over recent
years, with knowledge-based systems to support teaming processes typically
focused on delivering functional (communicative) solutions for a team to act
meaningfully in response to direction. Enabling humans to effectively interact
and team with a swarm of autonomous cognitive agents is an open research
challenge in Human-Swarm Teaming research, partially due to the focus on
developing the enabling architectures to support these systems. Typically,
bi-directional transparency and shared semantic understanding between agents
have not been prioritised as designed mechanisms in Human-Swarm Teaming,
potentially limiting how a human and a swarm team can share understanding and
information (data through concepts and contexts) to achieve
a goal. To address this, we provide a formal knowledge representation design
that enables the swarm Artificial Intelligence to reason about its environment
and system, ultimately achieving a shared goal. We propose the Ontology for
Generalised Multi-Agent Teaming, Onto4MAT, to enable more effective teaming
between humans and swarms through the biologically-inspired approach of
shepherding.
Related papers
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Cooperation on the Fly: Exploring Language Agents for Ad Hoc Teamwork in
the Avalon Game [25.823665278297057]
This study focuses on the ad hoc teamwork problem where the agent operates in an environment driven by natural language.
Our findings reveal the potential of LLM agents in team collaboration, highlighting issues related to hallucinations in communication.
To address this issue, we develop CodeAct, a general agent that equips LLM with enhanced memory and code-driven reasoning.
arXiv Detail & Related papers (2023-12-29T08:26:54Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language
Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state, and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Building Cooperative Embodied Agents Modularly with Large Language
Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Improving Grounded Language Understanding in a Collaborative Environment
by Interacting with Agents Through Help Feedback [42.19685958922537]
We argue that human-AI collaboration should be interactive, with humans monitoring the work of AI agents and providing feedback that the agent can understand and utilize.
In this work, we explore these directions using the challenging task defined by the IGLU competition, an interactive grounded language understanding task in a MineCraft-like world.
arXiv Detail & Related papers (2023-04-21T05:37:59Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language
Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- ToM2C: Target-oriented Multi-agent Communication and Cooperation with
Theory of Mind [18.85252946546942]
Theory of Mind (ToM) helps build socially intelligent agents that are able to communicate and cooperate effectively.
We demonstrate the idea in two typical target-oriented multi-agent tasks: cooperative navigation and multi-sensor target coverage.
arXiv Detail & Related papers (2021-10-15T18:29:55Z)
- Interpretation of Emergent Communication in Heterogeneous Collaborative
Embodied Agents [83.52684405389445]
We introduce the collaborative multi-object navigation task CoMON.
In this task, an oracle agent has detailed environment information in the form of a map.
It communicates with a navigator agent that perceives the environment visually and is tasked to find a sequence of goals.
We show that the emergent communication can be grounded to the agent observations and the spatial structure of the 3D environment.
arXiv Detail & Related papers (2021-10-12T06:56:11Z)
- From Motor Control to Team Play in Simulated Humanoid Football [56.86144022071756]
We train teams of physically simulated humanoid avatars to play football in a realistic virtual environment.
In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements.
They then acquire mid-level football skills such as dribbling and shooting.
Finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds and coordinated team play over much longer horizons.
arXiv Detail & Related papers (2021-05-25T20:17:10Z)
- Teaming up with information agents [0.0]
Our aim is to study how humans can collaborate with information agents.
We propose some appropriate team design patterns, and test them using our Collaborative Intelligence Analysis (CIA) tool.
arXiv Detail & Related papers (2021-01-15T14:26:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.