ChatGraph: Chat with Your Graphs
- URL: http://arxiv.org/abs/2401.12672v1
- Date: Tue, 23 Jan 2024 11:29:19 GMT
- Title: ChatGraph: Chat with Your Graphs
- Authors: Yun Peng, Sen Lin, Qian Chen, Lyu Xu, Xiaojun Ren, Yafei Li, Jianliang
Xu
- Abstract summary: We propose a large language model (LLM)-based framework called ChatGraph.
With ChatGraph, users can interact with graphs through natural language, making it easier to use and more flexible than traditional approaches.
- Score: 24.344119857435178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph analysis is fundamental in real-world applications. Traditional
approaches rely on SPARQL-like languages or clicking-and-dragging interfaces to
interact with graph data. However, these methods either require users to
possess high programming skills or support only a limited range of graph
analysis functionalities. To address the limitations, we propose a large
language model (LLM)-based framework called ChatGraph. With ChatGraph, users
can interact with graphs through natural language, making it easier to use and
more flexible than traditional approaches. The core of ChatGraph lies in
generating chains of graph analysis APIs based on the understanding of the
texts and graphs inputted in the user prompts. To achieve this, ChatGraph
consists of three main modules: an API retrieval module that searches for
relevant APIs, a graph-aware LLM module that enables the LLM to comprehend
graphs, and an API chain-oriented finetuning module that guides the LLM in
generating API chains.
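The abstract describes a pipeline in which relevant graph analysis APIs are retrieved, the LLM plans a chain of those APIs from the user prompt, and the chain is executed over the input graph. The Python sketch below illustrates that flow under stated assumptions: the toy API registry, the keyword-overlap retrieval, and the fixed-order planner are illustrative placeholders, not ChatGraph's actual API retrieval module, graph-aware LLM module, or finetuned chain generator.

```python
# Minimal sketch of the API-chain idea from the abstract. All module names,
# the registry, and the retrieval/planning heuristics are assumptions for
# illustration only; they are not ChatGraph's implementation.
import networkx as nx

# Hypothetical registry of graph analysis APIs that can be chained.
API_REGISTRY = {
    "pagerank": {
        "doc": "rank nodes by importance using PageRank",
        "fn": lambda g, _prev: nx.pagerank(g),
    },
    "top_k": {
        "doc": "keep the k highest-scoring items from the previous step",
        "fn": lambda _g, prev, k=3: dict(
            sorted(prev.items(), key=lambda kv: kv[1], reverse=True)[:k]
        ),
    },
}

def retrieve_apis(prompt: str) -> list[str]:
    """Toy stand-in for the API retrieval module: keyword overlap with docs."""
    words = set(prompt.lower().split())
    return [name for name, api in API_REGISTRY.items()
            if words & set(api["doc"].split())]

def plan_api_chain(prompt: str, candidates: list[str]) -> list[str]:
    """Stand-in for the finetuned planner: here, a fixed heuristic order."""
    order = ["pagerank", "top_k"]
    return [name for name in order if name in candidates]

def run_chain(graph: nx.Graph, chain: list[str]):
    """Execute the generated API chain, feeding each step's output forward."""
    result = None
    for name in chain:
        result = API_REGISTRY[name]["fn"](graph, result)
    return result

if __name__ == "__main__":
    g = nx.karate_club_graph()
    prompt = "rank nodes by importance and keep the top ones"
    chain = plan_api_chain(prompt, retrieve_apis(prompt))
    print(chain, run_chain(g, chain))
```

In ChatGraph itself, the planning step is performed by the graph-aware, API chain-finetuned LLM rather than a heuristic; the sketch only shows how a retrieved-then-planned API chain could be executed over a graph.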
Related papers
- GraphSOS: Graph Sampling and Order Selection to Help LLMs Understand Graphs Better [13.742220809751627]
GraphSOS is a novel framework for converting graph data into natural language text.
It features an Order Selector Module to ensure a proper serialization order of the graph and a Subgraph Sampling Module to sample subgraphs with better structure for improved reasoning.
Experiments on multiple datasets for node classification and graph question-answering demonstrate that GraphSOS improves LLMs' performance and ability on graph tasks.
arXiv Detail & Related papers (2025-01-24T11:55:57Z)
- Can LLMs Convert Graphs to Text-Attributed Graphs? [35.53046810556242]
We propose Topology-Aware Node description Synthesis (TANS) to convert existing graphs into text-attributed graphs.
We evaluate our TANS on text-rich, text-limited, and text-free graphs, demonstrating its applicability.
arXiv Detail & Related papers (2024-12-13T13:32:59Z)
- What Do LLMs Need to Understand Graphs: A Survey of Parametric Representation of Graphs [69.48708136448694]
Large language models (LLMs) are attracting attention across the AI community for their expected reasoning and inference abilities.
We believe this kind of parametric representation of graphs, graph laws, can be a solution for making LLMs understand graph data as the input.
arXiv Detail & Related papers (2024-10-16T00:01:31Z)
- Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose LLM4Graph datasets, which include crawled documents and auto-generated codes based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z)
- Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
arXiv Detail & Related papers (2024-04-10T15:41:53Z)
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem.
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- GraphText: Graph Reasoning in Text Space [32.00258972022153]
GraphText is a framework that translates graphs into natural language.
GraphText can achieve performance on par with, or even surpassing, that of supervised-trained graph neural networks.
It paves the way for interactive graph reasoning, allowing both humans and LLMs to communicate with the model seamlessly using natural language.
arXiv Detail & Related papers (2023-10-02T11:03:57Z)
- Can Language Models Solve Graph Problems in Natural Language? [51.28850846990929]
Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures.
We propose NLGraph, a benchmark of graph-based problem solving designed in natural language.
arXiv Detail & Related papers (2023-05-17T08:29:21Z)