Language Agents as Optimizable Graphs
- URL: http://arxiv.org/abs/2402.16823v3
- Date: Thu, 22 Aug 2024 13:06:51 GMT
- Title: Language Agents as Optimizable Graphs
- Authors: Mingchen Zhuge, Wenyi Wang, Louis Kirsch, Francesco Faccio, Dmitrii Khizbullin, Jürgen Schmidhuber
- Abstract summary: We describe Large Language Model (LLM)-based agents as computational graphs.
Our framework can be used to efficiently develop, integrate, and automatically improve various LLM agents.
- Score: 31.220547147952278
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Various human-designed prompt engineering techniques have been proposed to improve problem solvers based on Large Language Models (LLMs), yielding many disparate code bases. We unify these approaches by describing LLM-based agents as computational graphs. The nodes implement functions to process multimodal data or query LLMs, and the edges describe the information flow between operations. Graphs can be recursively combined into larger composite graphs representing hierarchies of inter-agent collaboration (where edges connect operations of different agents). Our novel automatic graph optimizers (1) refine node-level LLM prompts (node optimization) and (2) improve agent orchestration by changing graph connectivity (edge optimization). Experiments demonstrate that our framework can be used to efficiently develop, integrate, and automatically improve various LLM agents. The code can be found at https://github.com/metauto-ai/gptswarm.
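For concreteness, here is a minimal Python sketch of the two ideas the abstract names: nodes as prompt-bearing operations, and edge optimization as REINFORCE over learnable edge probabilities. This is not the GPTSwarm implementation (see the repository above); the `query_llm` stub and every class and function name are illustrative assumptions.

```python
import math
import random

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM query; swap in any chat-completion API."""
    return f"<answer to: {prompt[:40]}...>"

class Node:
    """One operation in an agent graph; its instruction is the node-level prompt."""
    def __init__(self, name: str, instruction: str):
        self.name = name
        self.instruction = instruction  # node optimization would rewrite this string

    def run(self, inputs: list[str]) -> str:
        return query_llm(self.instruction + "\n" + "\n".join(inputs))

class Graph:
    """An agent (or composite of agents): nodes plus learnable edge logits."""
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes
        # One logit per potential edge i -> j; restricting to i < j keeps the graph a DAG.
        self.theta = {(i, j): 0.0
                      for i in range(len(nodes)) for j in range(i + 1, len(nodes))}

    def sample_edges(self) -> dict:
        """Sample a concrete DAG: edge (i, j) is kept with probability sigmoid(theta)."""
        return {e: random.random() < 1.0 / (1.0 + math.exp(-t))
                for e, t in self.theta.items()}

    def forward(self, task: str, edges: dict) -> str:
        """Run nodes in topological order, feeding each node its predecessors' outputs."""
        outputs: list[str] = []
        for j, node in enumerate(self.nodes):
            inputs = [task] + [outputs[i] for i in range(j) if edges[(i, j)]]
            outputs.append(node.run(inputs))
        return outputs[-1]

def edge_optimization_step(graph: Graph, tasks: list[str], utility, lr=0.1, samples=8):
    """One REINFORCE update on the edge logits: sample DAGs, score them, and
    push edge probabilities toward connectivity patterns with higher utility."""
    grads = {e: 0.0 for e in graph.theta}
    for _ in range(samples):
        edges = graph.sample_edges()
        reward = sum(utility(graph.forward(t, edges), t) for t in tasks)
        for e, logit in graph.theta.items():
            p = 1.0 / (1.0 + math.exp(-logit))
            grads[e] += reward * ((1.0 if edges[e] else 0.0) - p)  # grad of log P(edges)
    for e in graph.theta:
        graph.theta[e] += lr * grads[e] / samples
```

Node optimization would analogously treat each node's `instruction` string as the variable to improve, e.g., by asking an LLM to propose revised prompts and keeping those that raise utility on held-out tasks.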
Related papers
- SocialGPT: Prompting LLMs for Social Relation Reasoning via Greedy Segment Optimization [70.11167263638562]
Social relation reasoning aims to identify relation categories such as friends, spouses, and colleagues from images.
We first present a simple yet well-crafted framework named SocialGPT, which combines the perception capability of Vision Foundation Models (VFMs) with the reasoning capability of Large Language Models (LLMs) in a modular design.
arXiv Detail & Related papers (2024-10-28T18:10:26Z)
- GraphTeam: Facilitating Large Language Model-based Graph Analysis via Multi-Agent Collaboration [46.663380413396226]
GraphTeam consists of five LLM-based agents drawn from three modules; agents with different specialties collaborate to address complex problems.
Experiments on six graph analysis benchmarks demonstrate that GraphTeam achieves state-of-the-art performance with an average 25.85% improvement over the best baseline in terms of accuracy.
arXiv Detail & Related papers (2024-10-23T17:02:59Z)
- NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models [26.739650151993928]
Graphs are a fundamental data structure for representing relationships in real-world scenarios.
Applying Large Language Models (LLMs) to graph-related tasks poses significant challenges.
We introduce Node Tokenizer for Large Language Models (NT-LLM), a novel framework that efficiently encodes graph structures.
arXiv Detail & Related papers (2024-10-14T17:21:57Z)
- Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning [28.660326096652437]
We introduce AskGNN, a novel approach that bridges the gap between sequential text processing and graph-structured data.
AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs.
Experiments across three tasks and seven LLMs demonstrate AskGNN's superior effectiveness in graph task performance.
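A hedged sketch of the retrieve-then-prompt pattern this summary describes, not the AskGNN code: GNN node embeddings (stubbed here with random tensors) score labeled nodes against a target node, and the top-k become in-context examples. All names are illustrative.

```python
import torch

def select_examples(emb: torch.Tensor, labeled: list[int], target: int, k: int = 3) -> list[int]:
    """Pick the k labeled nodes whose GNN embeddings best match the target node."""
    scores = emb[labeled] @ emb[target]                       # dot-product similarity
    top = torch.topk(scores, k=min(k, len(labeled))).indices
    return [labeled[i] for i in top.tolist()]

def build_prompt(texts: list[str], labels: list[str], demos: list[int], target: int) -> str:
    """Serialize the retrieved nodes into an in-context-learning prompt for the LLM."""
    shots = "\n".join(f"Node: {texts[i]}\nLabel: {labels[i]}" for i in demos)
    return f"{shots}\nNode: {texts[target]}\nLabel:"          # the LLM completes the label

# Toy usage; random embeddings stand in for a trained GNN's output.
emb = torch.randn(4, 8)
texts = ["paper A", "paper B", "paper C", "paper D"]
labels = ["ML", "DB", "ML", "?"]
print(build_prompt(texts, labels, select_examples(emb, [0, 1, 2], target=3), target=3))
```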
arXiv Detail & Related papers (2024-10-09T17:19:12Z)
- GraphRouter: A Graph-based Router for LLM Selections [13.463815950807874]
GraphRouter is a graph-based approach for the contextual and adaptive selection of Large Language Models.
We show that GraphRouter substantially surpasses existing routers, delivering a minimum performance improvement of 12.3%.
This graph-based approach to contextual, adaptive LLM selection offers insights for real-world applications.
arXiv Detail & Related papers (2024-10-04T18:02:48Z)
- Language Models are Graph Learners [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, including Graph Neural Networks (GNNs) and Graph Transformers (GTs).
We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs) with pretrained knowledge and powerful semantic comprehension abilities have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes in the graph.
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments on 8 different text-rich graphs, observing an average 2% improvement in Hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
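The summary above describes compressing neighborhood structure into a "graph prompt" that conditions the LLM. A hedged PyTorch sketch of that idea follows; it is not the GPEFT implementation, and the one-layer mean-aggregating GNN, the dimensions, and all names are assumptions.

```python
import torch
import torch.nn as nn

class GraphPromptEncoder(nn.Module):
    """Encode a node and the mean of its neighbors into one soft-prompt vector."""
    def __init__(self, node_dim: int, llm_dim: int):
        super().__init__()
        self.gnn = nn.Linear(2 * node_dim, node_dim)  # combines self and neighbor-mean features
        self.proj = nn.Linear(node_dim, llm_dim)      # maps into the LLM's embedding space

    def forward(self, x: torch.Tensor, center: int, neighbors: list[int]) -> torch.Tensor:
        nbr_mean = x[neighbors].mean(dim=0)                        # x: (num_nodes, node_dim)
        h = torch.relu(self.gnn(torch.cat([x[center], nbr_mean])))
        return self.proj(h)                                        # shape: (llm_dim,)

# Usage: prepend the graph prompt to a frozen LLM's token embeddings and
# fine-tune only this encoder (plus any PEFT adapters) for link prediction.
x = torch.randn(5, 64)                                # toy graph: 5 nodes, 64-dim features
prompt = GraphPromptEncoder(64, 4096)(x, center=0, neighbors=[1, 2])
token_embeds = torch.randn(10, 4096)                  # stand-in for embedded text tokens
llm_inputs = torch.cat([prompt.unsqueeze(0), token_embeds], dim=0)   # (11, 4096)
```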
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Integrating Graphs with Large Language Models: Methods and Prospects [68.37584693537555]
Large language models (LLMs) have emerged as frontrunners, showcasing unparalleled prowess in diverse applications.
Merging the capabilities of LLMs with graph-structured data has been a topic of keen interest.
This paper bifurcates such integrations into two predominant categories.
arXiv Detail & Related papers (2023-10-09T07:59:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.