Position: Towards a Responsible LLM-empowered Multi-Agent Systems
- URL: http://arxiv.org/abs/2502.01714v1
- Date: Mon, 03 Feb 2025 16:04:30 GMT
- Title: Position: Towards a Responsible LLM-empowered Multi-Agent Systems
- Authors: Jinwei Hu, Yi Dong, Shuang Ao, Zhuoyun Li, Boxuan Wang, Lokesh Singh, Guangliang Cheng, Sarvapali D. Ramchurn, Xiaowei Huang
- Abstract summary: The rise of Agent AI and Large Language Model-powered Multi-Agent Systems (LLM-MAS) has underscored the need for responsible and dependable system operation.
These advancements introduce critical challenges: LLM agents exhibit inherent unpredictability, and uncertainties in their outputs can compound, threatening system stability.
To address these risks, a human-centered design approach with active dynamic moderation is essential.
- Abstract: The rise of Agent AI and Large Language Model-powered Multi-Agent Systems (LLM-MAS) has underscored the need for responsible and dependable system operation. Tools like LangChain and Retrieval-Augmented Generation have expanded LLM capabilities, enabling deeper integration into MAS through enhanced knowledge retrieval and reasoning. However, these advancements introduce critical challenges: LLM agents exhibit inherent unpredictability, and uncertainties in their outputs can compound across interactions, threatening system stability. To address these risks, a human-centered design approach with active dynamic moderation is essential. Such an approach enhances traditional passive oversight by facilitating coherent inter-agent communication and effective system governance, allowing MAS to achieve desired outcomes more efficiently.
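The "active dynamic moderation" the abstract argues for can be illustrated with a minimal, hypothetical sketch: a moderator that screens every inter-agent message before it propagates, rather than passively logging after the fact. All class and check names below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of active moderation in a multi-agent system:
# every message passes a stack of checks before reaching other agents.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender: str
    content: str

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: List[Message] = []

    def receive(self, msg: Message) -> None:
        self.inbox.append(msg)

class ActiveModerator:
    """Blocked messages never reach recipients, so uncertain or unsafe
    outputs cannot compound across agent interactions."""
    def __init__(self, checks: List[Callable[[Message], bool]]):
        self.checks = checks

    def relay(self, msg: Message, recipients: List[Agent]) -> bool:
        if not all(check(msg) for check in self.checks):
            return False  # intervene instead of passively observing
        for agent in recipients:
            agent.receive(msg)
        return True

# Example check: reject empty/whitespace-only messages.
not_empty = lambda m: bool(m.content.strip())
mod = ActiveModerator([not_empty])
planner, executor = Agent("planner"), Agent("executor")
mod.relay(Message("planner", "move to step 2"), [executor])  # delivered
mod.relay(Message("planner", "   "), [executor])             # blocked
```

In a real deployment the checks would be learned or policy-driven rather than a single lambda; the point is only that moderation sits on the communication path, not beside it.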
Related papers
- G-Safeguard: A Topology-Guided Security Lens and Treatment on LLM-based Multi-agent Systems [10.450573905691677]
Large Language Model (LLM)-based Multi-agent Systems (MAS) have demonstrated remarkable capabilities in various complex tasks.
As these systems become increasingly integrated into critical applications, their vulnerability to adversarial attacks, misinformation propagation, and unintended behaviors has raised significant concerns.
We introduce G-Safeguard, a topology-guided security lens and treatment for robust MAS.
arXiv Detail & Related papers (2025-02-16T13:48:41Z)
- LMAgent: A Large-scale Multimodal Agents Society for Multi-user Simulation [66.52371505566815]
Large language model (LLM)-based AI agents have made significant progress, enabling them to achieve human-like intelligence.
We present LMAgent, a very large-scale multimodal agent society based on multimodal LLMs.
In LMAgent, besides chatting with friends, the agents can autonomously browse, purchase, and review products, and even perform live-streaming e-commerce.
arXiv Detail & Related papers (2024-12-12T12:47:09Z)
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
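The core idea of rewarding each step rather than only the final outcome can be sketched in a toy form. This is not the StepAgent implementation; the trajectory, reward values, and function name are made up for illustration.

```python
# Toy sketch of step-wise rewards: every intermediate action carries its
# own reward signal, instead of a single reward at episode end.
from typing import List, Tuple

def stepwise_return(trajectory: List[Tuple[str, float]], gamma: float = 0.9) -> float:
    """Discounted return over a trajectory of (action, per-step reward) pairs."""
    return sum((gamma ** t) * reward for t, (_, reward) in enumerate(trajectory))

# Hypothetical agent trajectory with per-step reward shaping.
traj = [("retrieve", 0.5), ("reason", 0.8), ("answer", 1.0)]
print(round(stepwise_return(traj), 3))  # 2.03
```

With per-step signals, credit assignment no longer hinges on a single terminal reward, which is what makes step-wise optimization of intermediate agent decisions tractable.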
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- A Study on Prompt Injection Attack Against LLM-Integrated Mobile Robotic Systems [4.71242457111104]
Large Language Models (LLMs) can process multi-modal prompts, enabling them to generate more context-aware responses.
One of the primary concerns is the potential security risks associated with using LLMs in robotic navigation tasks.
This study investigates the impact of prompt injections on mobile robot performance in LLM-integrated systems.
arXiv Detail & Related papers (2024-08-07T02:48:22Z)
- EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms [55.77492625524141]
EvoAgent is a generic method to automatically extend expert agents to multi-agent systems via evolutionary algorithms.
We show that EvoAgent can automatically generate multiple expert agents and significantly enhance the task-solving capabilities of LLM-based agents.
arXiv Detail & Related papers (2024-06-20T11:49:23Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Model (LLM)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
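The idea of a policy that picks intervention stages can be illustrated with a deliberately simplified sketch. This is not the ReHAC method itself (which learns the policy via reinforcement learning); the per-stage uncertainty scores and threshold here are illustrative assumptions.

```python
# Simplified sketch: decide, per task-solving stage, whether to hand
# control to a human, based on a hypothetical per-stage uncertainty score
# that a learned policy model would normally produce.
from typing import List

def intervention_policy(uncertainties: List[float], threshold: float = 0.7) -> List[bool]:
    """True at a stage means: request human intervention at that stage."""
    return [u > threshold for u in uncertainties]

stages = [0.2, 0.9, 0.4, 0.85]
print(intervention_policy(stages))  # [False, True, False, True]
```

A learned policy replaces the fixed threshold with a value estimate of intervening versus continuing autonomously, but the interface is the same: a per-stage intervene/continue decision.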
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Controlling Large Language Model-based Agents for Large-Scale Decision-Making: An Actor-Critic Approach [28.477463632107558]
We develop a modular framework called LLaMAC to address hallucination in Large Language Models and coordination in Multi-Agent Systems.
LLaMAC implements a value distribution encoding similar to that found in the human brain, utilizing internal and external feedback mechanisms to facilitate collaboration and iterative reasoning among its modules.
arXiv Detail & Related papers (2023-11-23T10:14:58Z)
- Balancing Autonomy and Alignment: A Multi-Dimensional Taxonomy for Autonomous LLM-powered Multi-Agent Architectures [0.0]
Large language models (LLMs) have revolutionized the field of artificial intelligence, endowing it with sophisticated language understanding and generation capabilities.
This paper proposes a comprehensive multi-dimensional taxonomy to analyze how autonomous LLM-powered multi-agent systems balance the dynamic interplay between autonomy and alignment.
arXiv Detail & Related papers (2023-10-05T16:37:29Z)
- LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, even multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z)
- Self-Adaptive Large Language Model (LLM)-Based Multiagent Systems [0.0]
We propose the integration of large language models (LLMs) into multiagent systems.
We anchor our methodology on the MAPE-K model, which is renowned for its robust support in monitoring, analyzing, planning, and executing system adaptations.
arXiv Detail & Related papers (2023-07-12T14:26:46Z)
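The MAPE-K model the entry above anchors on (Monitor, Analyze, Plan, Execute over shared Knowledge) is a well-known self-adaptation loop; a minimal sketch follows. The latency/replica example is an illustrative assumption, not taken from the paper.

```python
# Minimal sketch of a MAPE-K self-adaptation loop: monitor symptoms,
# analyze against the knowledge base, plan an adaptation, execute it.
class MAPEK:
    def __init__(self):
        # Shared Knowledge: adaptation goals and current configuration.
        self.knowledge = {"target_latency_ms": 100, "replicas": 1}

    def monitor(self, latency_ms: int) -> dict:
        return {"latency_ms": latency_ms}

    def analyze(self, symptoms: dict) -> bool:
        # Adaptation needed when observed latency exceeds the target.
        return symptoms["latency_ms"] > self.knowledge["target_latency_ms"]

    def plan(self) -> dict:
        return {"replicas": self.knowledge["replicas"] + 1}

    def execute(self, action: dict) -> None:
        self.knowledge.update(action)

    def step(self, latency_ms: int) -> None:
        symptoms = self.monitor(latency_ms)
        if self.analyze(symptoms):
            self.execute(self.plan())

loop = MAPEK()
loop.step(250)  # latency above target -> scale up
print(loop.knowledge["replicas"])  # 2
```

In an LLM-based variant, the Analyze and Plan stages would be delegated to a language model; the loop structure itself is unchanged.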
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.