Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with
Agent Team Optimization
- URL: http://arxiv.org/abs/2310.02170v1
- Date: Tue, 3 Oct 2023 16:05:48 GMT
- Title: Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with
Agent Team Optimization
- Authors: Zijun Liu, Yanzhe Zhang, Peng Li, Yang Liu, Diyi Yang
- Abstract summary: Large language model (LLM) agents have been shown to be effective on a wide range of tasks, and ensembling multiple LLM agents can further improve their performance.
Existing approaches employ a fixed set of agents to interact with each other in a static architecture.
We build a framework named Dynamic LLM-Agent Network ($\textbf{DyLAN}$) for LLM-agent collaboration on complicated tasks like reasoning and code generation.
- Score: 59.39113350538332
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language model (LLM) agents have been shown to be effective on a wide
range of tasks, and by ensembling multiple LLM agents, their performance can be
further improved. Existing approaches employ a fixed set of agents that interact
with each other in a static architecture, which limits their generalizability
to various tasks and requires strong human priors in designing these agents. In
this work, we propose to construct a strategic team of agents communicating in
a dynamic interaction architecture based on the task query. Specifically, we
build a framework named Dynamic LLM-Agent Network ($\textbf{DyLAN}$) for
LLM-agent collaboration on complicated tasks like reasoning and code
generation. DyLAN enables agents to interact for multiple rounds in a dynamic
architecture with inference-time agent selection and an early-stopping
mechanism to improve performance and efficiency. We further design an automatic
agent team optimization algorithm based on an unsupervised metric termed
$\textit{Agent Importance Score}$, enabling the selection of best agents based
on the contribution each agent makes. Empirically, we demonstrate that DyLAN
performs well in both reasoning and code generation tasks with reasonable
computational cost. DyLAN achieves 13.0% and 13.3% improvement on MATH and
HumanEval, respectively, compared to a single execution on GPT-3.5-turbo. On
specific subjects of MMLU, agent team optimization in DyLAN increases accuracy
by up to 25.0%.
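To make the mechanism concrete, here is a minimal Python sketch of the loop the abstract describes: agents interact over multiple rounds, only the highest-rated agents proceed to the next round (inference-time agent selection), and the process halts early once a response is rated confidently enough. The `Agent` interface, the `score` function, and all thresholds are illustrative assumptions, not DyLAN's actual implementation.

```python
from typing import Callable

# Hypothetical interfaces (not from the paper): an agent maps the query plus
# the previous round's messages to a response; `score` rates a response in [0, 1].
Agent = Callable[[str, list[str]], str]

def dylan_style_inference(query: str, agents: list[Agent],
                          score: Callable[[str], float],
                          rounds: int = 4, keep_top: int = 3,
                          stop_threshold: float = 0.9) -> str:
    """Multi-round collaboration with agent selection and early stopping."""
    messages: list[str] = []
    active = list(agents)
    best = ""
    for _ in range(rounds):
        responses = [agent(query, messages) for agent in active]
        ranked = sorted(zip(responses, active),
                        key=lambda pair: score(pair[0]), reverse=True)
        best = ranked[0][0]
        # Early stopping: halt once the top response is rated above threshold
        # (a stand-in for DyLAN's agreement-based early-stopping mechanism).
        if score(best) >= stop_threshold:
            break
        # Inference-time agent selection: only top-k agents join the next round.
        active = [agent for _, agent in ranked[:keep_top]]
        messages = [response for response, _ in ranked[:keep_top]]
    return best
```

Per the abstract, DyLAN's Agent Importance Score is an unsupervised measure of each agent's contribution to the final answer, and the top-scoring agents form the optimized team; the sketch above omits that team-optimization step.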
Related papers
- MorphAgent: Empowering Agents through Self-Evolving Profiles and Decentralized Collaboration [8.078098082305575]
This paper introduces MorphAgent, a novel framework for decentralized multi-agent collaboration.
MorphAgent employs self-evolving agent profiles, optimized through three key metrics.
Our experimental results show that MorphAgent outperforms traditional static-role multi-agent systems (MAS) in both task performance and adaptability to changing requirements.
arXiv Detail & Related papers (2024-10-19T09:10:49Z)
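As a rough illustration of the self-evolving-profile idea above (the summary does not name the three metrics, so `metrics` below is an abstract placeholder), a revision to an agent's profile might be kept only when it improves an aggregate metric score:

```python
from typing import Callable

# Placeholder types: a metric scores a (profile, transcript) pair; `propose`
# drafts a revised profile, e.g. via an LLM call. All names are hypothetical.
Metric = Callable[[str, str], float]

def evolve_profile(profile: str, transcript: str, metrics: list[Metric],
                   propose: Callable[[str, str], str]) -> str:
    """Accept a proposed profile revision only if the summed metrics improve."""
    candidate = propose(profile, transcript)
    before = sum(metric(profile, transcript) for metric in metrics)
    after = sum(metric(candidate, transcript) for metric in metrics)
    return candidate if after > before else profile
```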
- ComfyBench: Benchmarking LLM-based Agents in ComfyUI for Autonomously Designing Collaborative AI Systems [80.69865295743149]
This work attempts to study using LLM-based agents to design collaborative AI systems autonomously.
Based on ComfyBench, we develop ComfyAgent, a framework that empowers agents to autonomously design collaborative AI systems by generating workflows.
While ComfyAgent achieves a resolve rate comparable to o1-preview and significantly surpasses other agents on ComfyBench, it resolves only 15% of creative tasks.
arXiv Detail & Related papers (2024-09-02T17:44:10Z)
- Optimizing Collaboration of LLM based Agents for Finite Element Analysis [1.5039745292757671]
This paper investigates the interactions among multiple agents built on Large Language Models (LLMs) in the context of programming and coding tasks.
We utilize the AutoGen framework to facilitate communication among the agents, evaluating different configurations by the success rate over 40 random runs per setup.
arXiv Detail & Related papers (2024-08-23T23:11:08Z)
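The paper's exact agent configurations are not given in this summary, but a minimal two-agent setup with AutoGen's classic Python API (pyautogen) looks roughly like this; the model name, agent names, and messages are placeholders:

```python
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_KEY"}]}

# An LLM-backed engineer that writes code, and a proxy that executes it locally.
engineer = AssistantAgent(
    name="fea_engineer",
    system_message="You write Python code for finite element analysis tasks.",
    llm_config=llm_config,
)
executor = UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "fea_runs", "use_docker": False},
)

executor.initiate_chat(
    engineer,
    message="Compute the tip deflection of a cantilever beam under a point load.",
)
```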
- On the Resilience of LLM-Based Multi-Agent Collaboration with Faulty Agents [58.79302663733703]
Large language model-based multi-agent systems have shown great abilities across various tasks due to the collaboration of expert agents.
However, the impact of clumsy or even malicious agents on the overall performance of the system remains underexplored.
This paper investigates the resilience of various system structures under faulty agents.
arXiv Detail & Related papers (2024-08-02T03:25:20Z)
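As a back-of-the-envelope illustration of the question being asked (not the paper's actual experiment), one can simulate how majority voting degrades when some agents always answer incorrectly:

```python
import random

def majority_vote(votes: list[str]) -> str:
    return max(set(votes), key=votes.count)

def vote_accuracy(n_agents: int, n_faulty: int, p_correct: float,
                  trials: int = 10_000) -> float:
    """Fraction of trials where majority voting survives the faulty agents."""
    wins = 0
    for _ in range(trials):
        votes = ["right" if random.random() < p_correct else "wrong"
                 for _ in range(n_agents - n_faulty)]
        votes += ["wrong"] * n_faulty  # worst case: faulty agents always vote wrong
        wins += majority_vote(votes) == "right"
    return wins / trials

# With 5 agents at 80% individual accuracy, a single always-wrong agent
# already costs roughly ten points of ensemble accuracy (~0.94 -> ~0.82).
print(vote_accuracy(n_agents=5, n_faulty=0, p_correct=0.8))
print(vote_accuracy(n_agents=5, n_faulty=1, p_correct=0.8))
```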
- Adaptive In-conversation Team Building for Language Model Agents [33.03550687362213]
Leveraging multiple large language model (LLM) agents has been shown to be a promising approach for tackling complex tasks.
Our new adaptive team-building paradigm offers a flexible solution, realized through a novel agent design named Captain Agent.
A comprehensive evaluation across six real-world scenarios demonstrates that Captain Agent significantly outperforms existing multi-agent methods.
arXiv Detail & Related papers (2024-05-29T18:08:37Z)
- Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models [56.00992369295851]
Open-sourced Large Language Models (LLMs) have achieved great success in various NLP tasks; however, they remain far inferior to API-based models when acting as agents.
This paper delivers three key observations: (1) the current agent training corpus entangles format following with agent reasoning, which significantly shifts it from the distribution of the pre-training data; (2) LLMs exhibit different learning speeds on the capabilities required by agent tasks; and (3) current approaches introduce hallucinations as a side effect of improving agent abilities.
We propose Agent-FLAN to effectively Fine-tune LANguage models for Agents.
arXiv Detail & Related papers (2024-03-19T16:26:10Z)
- Learning to Use Tools via Cooperative and Interactive Agents [58.77710337157665]
Tool learning empowers large language models (LLMs) as agents to use external tools and extend their utility.
We propose ConAgents, a Cooperative and interactive Agents framework, which coordinates three specialized agents for tool selection, tool execution, and action calibration separately.
Our experiments on three datasets show that LLMs equipped with ConAgents outperform baselines by a substantial margin.
arXiv Detail & Related papers (2024-03-05T15:08:16Z)
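The three-role split described above (selection, execution, calibration) can be sketched as a simple pipeline; the agent interfaces and retry logic below are hypothetical, not ConAgents' actual protocol:

```python
from typing import Any, Callable

def run_tool_pipeline(query: str, tools: dict[str, Callable[..., Any]],
                      select: Callable[[str, list[str]], str],
                      execute: Callable[[str, str], dict],
                      calibrate: Callable[[str, str, dict, str], dict],
                      max_repairs: int = 2) -> Any:
    """Selection agent picks a tool, execution agent forms the call, and the
    calibration agent repairs the arguments whenever the call fails."""
    tool_name = select(query, list(tools))   # tool-selection agent
    args = execute(query, tool_name)         # tool-execution agent
    for _ in range(max_repairs + 1):
        try:
            return tools[tool_name](**args)
        except Exception as err:             # action-calibration agent
            args = calibrate(query, tool_name, args, str(err))
    raise RuntimeError(f"could not calibrate call to {tool_name}")
```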
- Agents meet OKR: An Object and Key Results Driven Agent System with Hierarchical Self-Collaboration and Self-Evaluation [25.308341461293857]
OKR-Agent is designed to enhance the capabilities of Large Language Models (LLMs) in task-solving.
Our framework includes two novel modules: hierarchical Objects and Key Results generation and multi-level evaluation.
arXiv Detail & Related papers (2023-11-28T06:16:30Z)
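A hierarchical Objects-and-Key-Results generation step could be sketched as recursive decomposition, with each key result becoming an objective one level down; the `llm` callable and the prompt are placeholders, not OKR-Agent's actual modules:

```python
from typing import Callable

def decompose_okr(objective: str, llm: Callable[[str], str], depth: int = 2) -> dict:
    """Recursively expand an objective into key results, then expand each
    key result as a sub-objective until the depth budget is exhausted."""
    key_results = llm(f"List three key results for: {objective}").splitlines()
    if depth <= 1:
        return {objective: key_results}
    return {objective: [decompose_okr(kr, llm, depth - 1) for kr in key_results]}
```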
- AutoAgents: A Framework for Automatic Agent Generation [27.74332323317923]
AutoAgents is an innovative framework that adaptively generates and coordinates multiple specialized agents to build an AI team according to different tasks.
Our experiments on various benchmarks demonstrate that AutoAgents generates more coherent and accurate solutions than the existing multi-agent methods.
arXiv Detail & Related papers (2023-09-29T14:46:30Z)
- Multi-agent Deep Covering Skill Discovery [50.812414209206054]
We propose Multi-agent Deep Covering Option Discovery, which constructs multi-agent options by minimizing the expected cover time of the agents' joint state space.
We also propose a novel framework for adopting these multi-agent options in the MARL process.
We show that the proposed algorithm can effectively capture agent interactions with the attention mechanism, successfully identify multi-agent options, and significantly outperform prior works using single-agent options or no options.
arXiv Detail & Related papers (2022-10-07T00:40:59Z)
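For reference, the construction described in the last entry can be written as an explicit objective. With $n$ agents, joint state space $S = S_1 \times \cdots \times S_n$, and a candidate option set $\Omega$, it seeks (a formalization inferred from the summary, not quoted from the paper):

$$\Omega^{*} = \arg\min_{\Omega} \; \mathbb{E}\!\left[\, T_{\mathrm{cover}}(S \mid \Omega) \,\right],$$

where $T_{\mathrm{cover}}(S \mid \Omega)$ is the (random) number of steps the agents need to visit every joint state in $S$ when acting with the options in $\Omega$.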
This list is automatically generated from the titles and abstracts of the papers on this site.