TESS: A Multi-intent Parser for Conversational Multi-Agent Systems with
Decentralized Natural Language Understanding Models
- URL: http://arxiv.org/abs/2312.11828v1
- Date: Tue, 19 Dec 2023 03:39:23 GMT
- Title: TESS: A Multi-intent Parser for Conversational Multi-Agent Systems with
Decentralized Natural Language Understanding Models
- Authors: Burak Aksar, Yara Rizk and Tathagata Chakraborti
- Abstract summary: Multi-agent systems complicate the natural language understanding of user intents.
We propose an efficient parsing and orchestration pipeline algorithm to service multi-intent utterances from the user.
- Score: 6.470108226184637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chatbots have become one of the main pathways for the delivery of business
automation tools. Multi-agent systems offer a framework for designing chatbots
at scale, making it easier to support complex conversations that span across
multiple domains as well as enabling developers to maintain and expand their
capabilities incrementally over time. However, multi-agent systems complicate
the natural language understanding (NLU) of user intents, especially when they
rely on decentralized NLU models: some utterances (termed single intent) may
invoke a single agent while others (termed multi-intent) may explicitly invoke
multiple agents. Without correctly parsing multi-intent inputs, decentralized
NLU approaches will not achieve high prediction accuracy. In this paper, we
propose an efficient parsing and orchestration pipeline algorithm to service
multi-intent utterances from the user in the context of a multi-agent system.
Our proposed approach achieved comparable performance to competitive deep
learning models on three different datasets while being up to 48 times faster.
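The abstract describes, at a high level, a pipeline that segments a multi-intent utterance and dispatches each segment to the appropriate agent's decentralized NLU model. Below is a minimal illustrative sketch of that general idea, not the authors' TESS algorithm: it assumes a hypothetical setup in which the utterance is split on conjunctions and punctuation, every agent's NLU model scores each segment, and each segment is routed to the most confident agent. All names here (split_multi_intent, orchestrate, the toy keyword-based NLU functions) are hypothetical placeholders.

```python
# Hypothetical sketch of multi-intent parsing and orchestration over
# decentralized per-agent NLU models (not the paper's implementation).
import re
from typing import Callable, Dict, List, Tuple

# Each agent exposes its own NLU model: utterance -> (intent, confidence).
AgentNLU = Callable[[str], Tuple[str, float]]

def split_multi_intent(utterance: str) -> List[str]:
    """Naive segmentation on conjunctions/punctuation (placeholder heuristic)."""
    parts = re.split(r"\band\b|\bthen\b|[;,]", utterance, flags=re.IGNORECASE)
    return [p.strip() for p in parts if p.strip()]

def orchestrate(utterance: str, agents: Dict[str, AgentNLU],
                threshold: float = 0.5) -> List[Tuple[str, str, str]]:
    """Route each segment to the agent whose NLU model is most confident."""
    routed = []
    for segment in split_multi_intent(utterance):
        best_agent, best_intent, best_score = None, None, 0.0
        for name, nlu in agents.items():
            intent, score = nlu(segment)
            if score > best_score:
                best_agent, best_intent, best_score = name, intent, score
        if best_agent is not None and best_score >= threshold:
            routed.append((segment, best_agent, best_intent))
    return routed

# Toy keyword-based NLU models standing in for each agent's classifier.
def travel_nlu(text: str) -> Tuple[str, float]:
    return ("book_flight", 0.9) if "flight" in text.lower() else ("none", 0.1)

def finance_nlu(text: str) -> Tuple[str, float]:
    return ("expense_report", 0.9) if "expense" in text.lower() else ("none", 0.1)

if __name__ == "__main__":
    agents = {"travel": travel_nlu, "finance": finance_nlu}
    print(orchestrate("Book a flight to Boston and file my expense report", agents))
    # -> [('Book a flight to Boston', 'travel', 'book_flight'),
    #     ('file my expense report', 'finance', 'expense_report')]
```

The point of the sketch is the routing problem itself: without the segmentation step, a single decentralized NLU model would score the full multi-intent utterance poorly, which is the failure mode the paper's pipeline is designed to avoid.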
Related papers
- Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model [8.604654904400027]
We introduce CoALM (Conversational Agentic Language Model), a unified approach that integrates both conversational and agentic capabilities.
Using CoALM-IT, we train three models CoALM 8B, CoALM 70B, and CoALM 405B, which outperform top domain-specific models.
arXiv Detail & Related papers (2025-02-12T22:18:34Z)
- AI Multi-Agent Interoperability Extension for Managing Multiparty Conversations [0.0]
This paper presents a novel extension to the existing Multi-Agent specifications of the Open Voice Initiative.
It introduces new concepts such as the Convener Agent, Floor-Shared Conversational Space, Floor Manager, Multi-Conversant Support, and mechanisms for handling Interruptions and Uninvited Agents.
These advancements are crucial for ensuring smooth, efficient, and secure interactions in scenarios where multiple AI agents need to collaborate, debate, or contribute to a discussion.
arXiv Detail & Related papers (2024-11-05T18:11:55Z)
- Internet of Agents: Weaving a Web of Heterogeneous Agents for Collaborative Intelligence [79.5316642687565]
Existing multi-agent frameworks often struggle with integrating diverse capable third-party agents.
We propose the Internet of Agents (IoA), a novel framework that addresses these limitations.
IoA introduces an agent integration protocol, an instant-messaging-like architecture design, and dynamic mechanisms for agent teaming and conversation flow control.
arXiv Detail & Related papers (2024-07-09T17:33:24Z)
- ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning [74.58666091522198]
We present a framework for intuitive robot programming by non-experts.
We leverage natural language prompts and contextual information from the Robot Operating System (ROS).
Our system integrates large language models (LLMs), enabling non-experts to articulate task requirements to the system through a chat interface.
arXiv Detail & Related papers (2024-06-28T08:28:38Z)
- AgentScope: A Flexible yet Robust Multi-Agent Platform [66.64116117163755]
AgentScope is a developer-centric multi-agent platform with message exchange as its core communication mechanism.
Its abundant syntactic tools, built-in agents and service functions, user-friendly interfaces for application demonstration and utility monitoring, zero-code programming workstation, and automatic prompt-tuning mechanism significantly lower the barriers to both development and deployment.
arXiv Detail & Related papers (2024-02-21T04:11:28Z)
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems [53.94772445896213]
Large Language Model (LLM)-based multi-agent systems have demonstrated promising performance in simulating human society.
We propose SpeechAgents, a multi-modal LLM based multi-agent system designed for simulating human communication.
arXiv Detail & Related papers (2024-01-08T15:01:08Z)
- AutoAgents: A Framework for Automatic Agent Generation [27.74332323317923]
AutoAgents is an innovative framework that adaptively generates and coordinates multiple specialized agents to build an AI team according to different tasks.
Our experiments on various benchmarks demonstrate that AutoAgents generates more coherent and accurate solutions than the existing multi-agent methods.
arXiv Detail & Related papers (2023-09-29T14:46:30Z)
- Agents: An Open-source Framework for Autonomous Language Agents [98.91085725608917]
We consider language agents as a promising direction towards artificial general intelligence.
We release Agents, an open-source library with the goal of opening up these advances to a wider non-specialist audience.
arXiv Detail & Related papers (2023-09-14T17:18:25Z)
- MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
MADiff is a diffusion-based multi-agent learning framework.
It works as both a decentralized policy and a centralized controller.
Our experiments demonstrate that MADiff outperforms baseline algorithms across various multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z)