Cogment: Open Source Framework For Distributed Multi-actor Training,
Deployment & Operations
- URL: http://arxiv.org/abs/2106.11345v1
- Date: Mon, 21 Jun 2021 18:21:26 GMT
- Title: Cogment: Open Source Framework For Distributed Multi-actor Training,
Deployment & Operations
- Authors: AI Redefined, Sai Krishna Gottipati, Sagar Kurandwad, Clodéric Mars, Gregory Szriftgiser and François Chabot
- Abstract summary: Involving humans directly for the benefit of AI agents' training is getting traction.
We present Cogment, a unifying open-source framework that introduces an actor formalism to support a variety of humans-agents collaboration typologies.
- Score: 0.3552336242617915
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Involving humans directly for the benefit of AI agents' training is getting
traction thanks to several advances in reinforcement learning and
human-in-the-loop learning. Humans can provide rewards to the agent,
demonstrate tasks, design a curriculum, or act in the environment, but these
benefits also come with architectural, functional design and engineering
complexities. We present Cogment, a unifying open-source framework that
introduces an actor formalism to support a variety of humans-agents
collaboration typologies and training approaches. It is also scalable out of
the box thanks to a distributed micro service architecture, and offers
solutions to the aforementioned complexities.
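The "actor formalism" mentioned in the abstract can be read as: human participants and software agents expose the same actor interface, so the orchestration layer treats them interchangeably whether they provide rewards, demonstrations, or actions. The sketch below illustrates that idea in Python; it is not the Cogment SDK, and every name in it (Observation, Actor, RandomAgentActor, HumanActor, run_trial) is an assumption made for this example.

```python
# Minimal sketch of an "actor formalism": agents and humans implement the
# same interface, so the trial loop treats them uniformly. Illustrative
# only -- these names are assumptions, not the Cogment SDK API.
from dataclasses import dataclass
from typing import Protocol
import random


@dataclass
class Observation:
    state: list[float]   # what this actor is allowed to see
    reward: float        # reward fed back from the previous step


class Actor(Protocol):
    def act(self, observation: Observation) -> int:
        """Return an action index given the current observation."""
        ...


class RandomAgentActor:
    """Stand-in for a learning agent (e.g. an RL policy)."""
    def __init__(self, num_actions: int) -> None:
        self.num_actions = num_actions

    def act(self, observation: Observation) -> int:
        return random.randrange(self.num_actions)


class HumanActor:
    """Proxy for a human in the loop, e.g. acting or demonstrating."""
    def act(self, observation: Observation) -> int:
        return int(input(f"state={observation.state} action? "))


def run_trial(actors: list[Actor], steps: int) -> None:
    """Step every actor against a toy environment, agent or human alike."""
    obs = Observation(state=[0.0], reward=0.0)
    for _ in range(steps):
        for actor in actors:
            action = actor.act(obs)
            # Toy environment transition: reward the action "1".
            obs = Observation(state=[obs.state[0] + action],
                              reward=float(action == 1))


if __name__ == "__main__":
    run_trial([RandomAgentActor(num_actions=2), HumanActor()], steps=3)
```

In Cogment itself the equivalent pieces would run as separate services (the abstract's distributed micro service architecture) rather than inside one process, which is what the out-of-the-box scalability claim rests on.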
Related papers
- AssistantX: An LLM-Powered Proactive Assistant in Collaborative Human-Populated Environment [15.475084260674384]
AssistantX is a proactive assistant designed to operate autonomously in a physical office environment.
Unlike conventional service robots, AssistantX leverages a novel multi-agent architecture, PPDR4X, which provides advanced inference capabilities.
Our evaluation highlights the architecture's effectiveness, showing that AssistantX can respond to clear instructions, actively retrieve supplementary information from memory, and proactively seek collaboration from team members to ensure successful task completion.
arXiv Detail & Related papers (2024-09-26T09:06:56Z)
- BMW Agents -- A Framework For Task Automation Through Multi-Agent Collaboration [0.0]
We focus on designing a flexible agent engineering framework capable of handling complex use case applications across various domains.
The proposed framework provides reliability in industrial applications and presents techniques to ensure a scalable, flexible, and collaborative workflow for multiple autonomous agents.
arXiv Detail & Related papers (2024-06-28T16:39:20Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- An Interactive Agent Foundation Model [49.77861810045509]
We propose an Interactive Agent Foundation Model that uses a novel multi-task agent training paradigm for training AI agents.
Our training paradigm unifies diverse pre-training strategies, including visual masked auto-encoders, language modeling, and next-action prediction.
We demonstrate the performance of our framework across three separate domains -- Robotics, Gaming AI, and Healthcare.
arXiv Detail & Related papers (2024-02-08T18:58:02Z)
- Agent Lumos: Unified and Modular Training for Open-Source Language Agents [89.78556964988852]
We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents.
LUMOS features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation.
We collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales.
arXiv Detail & Related papers (2023-11-09T00:30:13Z)
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring Emergent Behaviors [93.38830440346783]
We propose AgentVerse, a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that AgentVerse can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society [58.04479313658851]
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
- Learning Reward Machines in Cooperative Multi-Agent Tasks [75.79805204646428]
This paper presents a novel approach to Multi-Agent Reinforcement Learning (MARL).
It combines cooperative task decomposition with the learning of reward machines (RMs) encoding the structure of the sub-tasks.
The proposed method helps deal with the non-Markovian nature of the rewards in partially observable environments.
arXiv Detail & Related papers (2023-03-24T15:12:28Z)
- A Unified Architecture for Dynamic Role Allocation and Collaborative Task Planning in Mixed Human-Robot Teams [0.0]
We present a novel architecture for dynamic role allocation and collaborative task planning in a mixed human-robot team of arbitrary size.
The architecture capitalizes on a centralized, reactive, and modular task-agnostic planning method based on Behavior Trees (BTs).
Different metrics used as MILP cost allow the architecture to favor various aspects of the collaboration.
arXiv Detail & Related papers (2023-01-19T12:30:56Z)
- The AI Arena: A Framework for Distributed Multi-Agent Reinforcement Learning [0.3437656066916039]
We introduce the AI Arena: a scalable framework with flexible abstractions for distributed multi-agent reinforcement learning.
We show performance gains due to a distributed multi-agent learning approach over commonly-used RL techniques in several different learning environments.
arXiv Detail & Related papers (2021-03-09T22:16:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.