KwaiAgents: Generalized Information-seeking Agent System with Large
Language Models
- URL: http://arxiv.org/abs/2312.04889v3
- Date: Wed, 10 Jan 2024 09:44:45 GMT
- Title: KwaiAgents: Generalized Information-seeking Agent System with Large
Language Models
- Authors: Haojie Pan, Zepeng Zhai, Hao Yuan, Yaojia Lv, Ruiji Fu, Ming Liu,
Zhongyuan Wang, Bing Qin
- Abstract summary: Humans excel in critical thinking, planning, reflection, and harnessing available tools to interact with and interpret the world.
Recent advancements in large language models (LLMs) suggest that machines might also possess the aforementioned human-like capabilities.
We introduce KwaiAgents, a generalized information-seeking agent system based on LLMs.
- Score: 33.59597020276034
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Driven by curiosity, humans have continually sought to explore and understand
the world around them, leading to the invention of various tools to satiate
this inquisitiveness. Despite not having the capacity to process and memorize
vast amounts of information in their brains, humans excel in critical thinking,
planning, reflection, and harnessing available tools to interact with and
interpret the world, enabling them to find answers efficiently. The recent
advancements in large language models (LLMs) suggest that machines might also
possess the aforementioned human-like capabilities, allowing them to exhibit
powerful abilities even with a constrained parameter count. In this paper, we
introduce KwaiAgents, a generalized information-seeking agent system based on
LLMs. Within KwaiAgents, we propose an agent system that employs LLMs as its
cognitive core, capable of understanding a user's query and behavior
guidelines, and of referencing external documents. The agent can also update and
retrieve information from its internal memory, plan and execute actions using a
time-aware search-browse toolkit, and ultimately provide a comprehensive
response. We further investigate the system's performance when powered by LLMs
less advanced than GPT-4, and introduce the Meta-Agent Tuning (MAT) framework,
designed to ensure even an open-sourced 7B or 13B model performs well among
many agent systems. We exploit both benchmark and human evaluations to
systematically validate these capabilities. Extensive experiments show the
superiority of our agent system compared to other autonomous agents and
highlight the enhanced generalized agent-abilities of our fine-tuned LLMs.
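To make the described loop concrete, below is a minimal sketch in Python of an information-seeking agent of this shape: an LLM cognitive core decides each step, internal memory is updated and retrieved, and a time-aware search-browse toolkit gathers evidence before a final answer is composed. All helper names (llm_call, web_search, browse_page, Memory) are hypothetical stand-ins for illustration only, not the actual KwaiAgents implementation.

```python
# Minimal sketch of an information-seeking agent loop in the spirit of the
# abstract: an LLM "cognitive core" reads the query, consults memory, plans
# calls to a time-aware search-browse toolkit, and composes a final answer.
# All names below (llm_call, web_search, browse_page) are hypothetical
# placeholders, not the actual KwaiAgents API.

from dataclasses import dataclass, field
from datetime import datetime, timezone


def llm_call(prompt: str) -> str:
    """Placeholder for a completion call to the cognitive-core LLM."""
    return "FINISH: (answer synthesized from collected evidence)"


def web_search(query: str) -> list[str]:
    """Placeholder search tool; returns snippet strings."""
    return [f"snippet about '{query}'"]


def browse_page(url: str) -> str:
    """Placeholder browse tool; returns page text."""
    return f"full text of {url}"


@dataclass
class Memory:
    """Internal memory the agent can update and retrieve from."""
    entries: list[str] = field(default_factory=list)

    def add(self, item: str) -> None:
        self.entries.append(item)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # A real system would rank entries by relevance to the query;
        # keeping it trivial here.
        return self.entries[-k:]


def run_agent(user_query: str, max_steps: int = 5) -> str:
    memory = Memory()
    now = datetime.now(timezone.utc).isoformat()  # time-awareness for fresh facts

    for _ in range(max_steps):
        prompt = (
            f"Current time: {now}\n"
            f"User query: {user_query}\n"
            f"Memory: {memory.retrieve(user_query)}\n"
            "Next action: SEARCH <query>, BROWSE <url>, or FINISH <answer>."
        )
        decision = llm_call(prompt)

        if decision.startswith("SEARCH"):
            results = web_search(decision.removeprefix("SEARCH").strip())
            memory.add(f"search results: {results}")
        elif decision.startswith("BROWSE"):
            memory.add(browse_page(decision.removeprefix("BROWSE").strip()))
        else:  # FINISH: compose the comprehensive response
            return decision.removeprefix("FINISH:").strip()

    return "No confident answer within the step budget."


if __name__ == "__main__":
    print(run_agent("Who won the most recent Nobel Prize in Physics?"))
```

Injecting the current timestamp into every prompt is the simplest form of the time-awareness the abstract mentions; it lets the core distinguish fresh from stale evidence when planning the next search or browse step.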
Related papers
- Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance [95.03771007780976]
We tackle the challenge of developing proactive agents capable of anticipating and initiating tasks without explicit human instructions.
First, we collect real-world human activities to generate proactive task predictions.
These predictions are labeled by human annotators as either accepted or rejected.
The labeled data is used to train a reward model that simulates human judgment (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-10-16T08:24:09Z) - Gödel Agent: A Self-Referential Agent Framework for Recursive Self-Improvement [117.94654815220404]
Gödel Agent is a self-evolving framework inspired by the Gödel machine.
Gödel Agent can achieve continuous self-improvement, surpassing manually crafted agents in performance, efficiency, and generalizability.
arXiv Detail & Related papers (2024-10-06T10:49:40Z) - On the limits of agency in agent-based models [13.130587222524305]
Agent-based modeling (ABM) seeks to understand the behavior of complex systems by simulating a collection of agents that act and interact within an environment.
Recent advancements in large language models (LLMs) present an opportunity to enhance ABMs.
We introduce AgentTorch -- a framework that scales ABMs to millions of agents while capturing high-resolution agent behavior using LLMs.
arXiv Detail & Related papers (2024-09-14T04:17:24Z) - Automated Design of Agentic Systems [5.404186221463082]
We formulate a new research area, Automated Design of Agentic Systems, which aims to automatically create powerful agentic system designs.
We show that our algorithm can progressively invent agents with novel designs that greatly outperform state-of-the-art hand-designed agents.
arXiv Detail & Related papers (2024-08-15T21:59:23Z) - AgentGym: Evolving Large Language Model-based Agents across Diverse Environments [116.97648507802926]
Large language models (LLMs) are considered a promising foundation to build such agents.
We take the first step towards building generally-capable LLM-based agents with self-evolution ability.
We propose AgentGym, a new framework featuring a variety of environments and tasks for broad, real-time, uni-format, and concurrent agent exploration.
arXiv Detail & Related papers (2024-06-06T15:15:41Z) - Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning [56.82041895921434]
Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities.
When used as agents for dealing with complex problems in the real world, their performance is far inferior to large commercial models such as ChatGPT and GPT-4.
arXiv Detail & Related papers (2024-03-29T03:48:12Z) - KnowAgent: Knowledge-Augmented Planning for LLM-Based Agents [54.09074527006576]
Large Language Models (LLMs) have demonstrated great potential in complex reasoning tasks, yet they fall short when tackling more sophisticated challenges.
This inadequacy primarily stems from the lack of built-in action knowledge in language agents.
We introduce KnowAgent, a novel approach designed to enhance the planning capabilities of LLMs by incorporating explicit action knowledge.
arXiv Detail & Related papers (2024-03-05T16:39:12Z) - Exploring Large Language Model based Intelligent Agents: Definitions,
Methods, and Prospects [32.91556128291915]
This paper surveys current research to provide an in-depth overview of intelligent agents within single and multi-agent systems.
It covers their definitions, research frameworks, and foundational components such as their composition, cognitive and planning methods, tool utilization, and responses to environmental feedback.
We conclude by envisioning prospects for LLM-based agents, considering the evolving landscape of AI and natural language processing.
arXiv Detail & Related papers (2024-01-07T09:08:24Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
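As referenced in the Proactive Agent entry above, that pipeline ends with a reward model trained on task predictions that annotators marked accepted or rejected. The sketch below is a generic toy version of that idea, a hand-rolled logistic regression on invented features; it is not the paper's actual model, data, or feature set.

```python
# Toy illustration of reward modeling on accepted/rejected task predictions:
# a hand-rolled logistic regression that scores a proposed task by how likely
# a human annotator would be to accept it. The data and feature names are
# invented for illustration; this is not the Proactive Agent paper's model.

import math

# (features, label): features might encode how relevant and how timely a
# proposed task is; label 1 = annotator accepted, 0 = rejected.
DATA = [
    ((0.9, 0.8), 1), ((0.7, 0.9), 1), ((0.8, 0.6), 1),
    ((0.2, 0.3), 0), ((0.1, 0.5), 0), ((0.3, 0.2), 0),
]


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


def train(data, lr: float = 0.5, epochs: int = 200):
    """Fit the tiny reward model with plain gradient descent on log-loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b


def reward(w, b, features) -> float:
    """Higher score = the proposed task looks more acceptable to a human."""
    x1, x2 = features
    return sigmoid(w[0] * x1 + w[1] * x2 + b)


if __name__ == "__main__":
    weights, bias = train(DATA)
    print("likely-accepted task:", round(reward(weights, bias, (0.85, 0.9)), 3))
    print("likely-rejected task:", round(reward(weights, bias, (0.15, 0.2)), 3))
```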
This list is automatically generated from the titles and abstracts of the papers on this site.