Language Agents as Optimizable Graphs
- URL: http://arxiv.org/abs/2402.16823v2
- Date: Tue, 27 Feb 2024 11:03:10 GMT
- Title: Language Agents as Optimizable Graphs
- Authors: Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii
Khizbullin and Jürgen Schmidhuber
- Abstract summary: We describe Large Language Models (LLMs)-based agents as computational graphs.
Our framework can be used to efficiently develop, integrate, and automatically improve various LLM agents.
- Score: 13.544946158630536
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Various human-designed prompt engineering techniques have been proposed to
improve problem solvers based on Large Language Models (LLMs), yielding many
disparate code bases. We unify these approaches by describing LLM-based agents
as computational graphs. The nodes implement functions to process multimodal
data or query LLMs, and the edges describe the information flow between
operations. Graphs can be recursively combined into larger composite graphs
representing hierarchies of inter-agent collaboration (where edges connect
operations of different agents). Our novel automatic graph optimizers (1)
refine node-level LLM prompts (node optimization) and (2) improve agent
orchestration by changing graph connectivity (edge optimization). Experiments
demonstrate that our framework can be used to efficiently develop, integrate,
and automatically improve various LLM agents. The code can be found at
https://github.com/metauto-ai/gptswarm.
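The abstract's core idea can be sketched concretely: nodes are operations, edges describe which outputs flow into which nodes, and edge optimization searches over graph connectivity for the best-performing wiring. The sketch below is illustrative only; the names (`Node`, `Graph`, `optimize_edges`) and the brute-force search are assumptions, not the GPTSwarm API, and deterministic string functions stand in for LLM calls.

```python
# Minimal sketch of "agents as computational graphs": nodes process data,
# edges carry information between operations, and an edge optimizer changes
# graph connectivity. Names and structure are illustrative, NOT GPTSwarm's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Node:
    """One operation: consumes its predecessors' outputs, returns a string."""
    name: str
    fn: Callable[[List[str]], str]


@dataclass
class Graph:
    nodes: Dict[str, Node] = field(default_factory=dict)
    # edges map each node to the list of predecessor nodes it reads from
    edges: Dict[str, List[str]] = field(default_factory=dict)

    def add(self, node: Node, preds=()) -> None:
        self.nodes[node.name] = node
        self.edges[node.name] = list(preds)

    def run(self, query: str, output: str) -> str:
        """Evaluate the graph; nodes with no predecessors see the raw query."""
        cache: Dict[str, str] = {}

        def eval_node(name: str) -> str:
            if name not in cache:
                inputs = [eval_node(p) for p in self.edges[name]]
                cache[name] = self.nodes[name].fn(inputs or [query])
            return cache[name]

        return eval_node(output)


def optimize_edges(g: Graph, target: str, candidates: List[List[str]],
                   query: str, score: Callable[[str], float]) -> List[str]:
    """Brute-force edge optimization: keep the predecessor set scoring best."""
    best, best_score = g.edges[target], float("-inf")
    for preds in candidates:
        g.edges[target] = list(preds)
        s = score(g.run(query, target))
        if s > best_score:
            best, best_score = list(preds), s
    g.edges[target] = best
    return best


# Toy "agent" with deterministic stand-ins for LLM queries.
g = Graph()
g.add(Node("draft", lambda xs: xs[0].upper()))
g.add(Node("critic", lambda xs: xs[0] + "?"), preds=["draft"])
g.add(Node("final", lambda xs: " | ".join(xs)), preds=["draft"])

# Edge optimization: here, longer combined output scores higher.
best = optimize_edges(g, "final",
                      [["draft"], ["critic"], ["draft", "critic"]],
                      "hello", score=len)
print(best)                     # ['draft', 'critic']
print(g.run("hello", "final"))  # HELLO | HELLO?
```

A real optimizer would score candidate graphs on a benchmark rather than output length, and node optimization would additionally rewrite each node's prompt, but the separation of node functions from graph connectivity is the point this sketch makes.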
Related papers
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs), with pretrained knowledge and powerful semantic comprehension abilities, have recently shown a remarkable ability to benefit applications that use vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes in the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Input Conditioned Graph Generation for Language Agents [31.2175071107555]
We develop learnable and dynamic language agents using an existing framework that abstracts language agents as graphs.
We learn to generate edges that represent the flow of communication based on the given input, thereby adjusting the internal communication of a language agent.
Our approach surpasses the previous static approach by nearly 6% accuracy on a combined dataset of MMLU and CMMLU, and by more than 10% when trained with a sparsity-inducing loss.
arXiv Detail & Related papers (2024-06-17T13:53:15Z)
- AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval [93.96463520716759]
Large language model (LLM) agents have demonstrated impressive capability in utilizing external tools and knowledge to boost accuracy and reduce hallucinations.
Here, we introduce AvaTaR, a novel framework that optimizes an LLM agent to effectively use the provided tools and improve its performance on a given task or domain.
We find AvaTaR consistently outperforms state-of-the-art approaches across all four challenging tasks and exhibits strong generalization ability when applied to novel cases.
arXiv Detail & Related papers (2024-06-17T04:20:02Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Large Language Model with Graph Convolution for Recommendation [21.145230388035277]
Text information can sometimes be of low quality, hindering its effectiveness for real-world applications.
With the knowledge and reasoning capabilities encapsulated in Large Language Models, utilizing LLMs emerges as a promising way to improve descriptions.
We propose a Graph-aware Convolutional LLM method to elicit LLMs to capture high-order relations in the user-item graph.
arXiv Detail & Related papers (2024-02-14T00:04:33Z)
- GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks [44.02825843494608]
Large language models (LLMs) like ChatGPT exhibit powerful zero-shot and instruction-following capabilities.
GraphTranslator aims to leverage the graph model (GM) to handle pre-defined tasks effectively.
By translating node representation into tokens, GraphTranslator empowers an LLM to make predictions based on language instructions.
arXiv Detail & Related papers (2024-02-11T13:24:13Z)
- Efficient Large Language Models Fine-Tuning On Graphs [23.19795835873144]
Learning from Text-Attributed Graphs (TAGs) has attracted significant attention due to its wide range of real-world applications.
We introduce a novel and efficient approach for the end-to-end fine-tuning of Large Language Models (LLMs) on TAGs, named LEADING.
arXiv Detail & Related papers (2023-12-07T22:35:16Z)
- Integrating Graphs with Large Language Models: Methods and Prospects [68.37584693537555]
Large language models (LLMs) have emerged as frontrunners, showcasing unparalleled prowess in diverse applications.
Merging the capabilities of LLMs with graph-structured data has been a topic of keen interest.
This paper bifurcates such integrations into two predominant categories.
arXiv Detail & Related papers (2023-10-09T07:59:34Z)
- Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization [59.39113350538332]
Large language model (LLM) agents have been shown effective on a wide range of tasks, and by ensembling multiple LLM agents, their performances could be further improved.
Existing approaches employ a fixed set of agents to interact with each other in a static architecture.
We build a framework named Dynamic LLM-Agent Network (DyLAN) for LLM-agent collaboration on complicated tasks like reasoning and code generation.
arXiv Detail & Related papers (2023-10-03T16:05:48Z)
- Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
arXiv Detail & Related papers (2023-08-31T07:36:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of its content) and is not responsible for any consequences of its use.