GraphText: Graph Reasoning in Text Space
- URL: http://arxiv.org/abs/2310.01089v1
- Date: Mon, 2 Oct 2023 11:03:57 GMT
- Title: GraphText: Graph Reasoning in Text Space
- Authors: Jianan Zhao, Le Zhuo, Yikang Shen, Meng Qu, Kai Liu, Michael
Bronstein, Zhaocheng Zhu, Jian Tang
- Abstract summary: GraphText is a framework that translates graphs into natural language.
GraphText can achieve performance on par with, or even surpassing, that of supervised-trained graph neural networks.
It paves the way for interactive graph reasoning, allowing both humans and LLMs to communicate with the model seamlessly using natural language.
- Score: 32.00258972022153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have gained the ability to assimilate human
knowledge and facilitate natural language interactions with both humans and
other LLMs. However, despite their impressive achievements, LLMs have not made
significant advancements in the realm of graph machine learning. This
limitation arises because graphs encapsulate distinct relational data, making
it challenging to transform them into natural language that LLMs understand. In
this paper, we bridge this gap with a novel framework, GraphText, that
translates graphs into natural language. GraphText derives a graph-syntax tree
for each graph that encapsulates both the node attributes and inter-node
relationships. Traversal of the tree yields a graph text sequence, which is
then processed by an LLM to treat graph tasks as text generation tasks.
Notably, GraphText offers multiple advantages. It introduces training-free
graph reasoning: even without training on graph data, GraphText with ChatGPT
can achieve performance on par with, or even surpassing, that of
supervised-trained graph neural networks through in-context learning (ICL).
Furthermore, GraphText paves the way for interactive graph reasoning, allowing
both humans and LLMs to communicate with the model seamlessly using natural
language. These capabilities underscore the vast, yet-to-be-explored potential
of LLMs in the domain of graph machine learning.
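As a concrete illustration of the pipeline described above — build a per-node graph-syntax tree, traverse it into text, and hand the text to an LLM — here is a minimal Python sketch. The tree layout, branch names, and prompt wording are illustrative assumptions, not the paper's exact syntax:

```python
# Minimal sketch of a GraphText-style pipeline: build a per-node
# "graph-syntax tree" and traverse it into a text sequence for an LLM.
# Tree structure and labels here are illustrative assumptions.

graph = {                      # toy citation graph: node -> neighbors
    "paper_0": ["paper_1", "paper_2"],
    "paper_1": ["paper_0"],
    "paper_2": ["paper_0"],
}
attributes = {                 # node attributes (e.g. topic, class label)
    "paper_0": {"feature": "graph neural networks", "label": "?"},
    "paper_1": {"feature": "message passing", "label": "ML"},
    "paper_2": {"feature": "transformers", "label": "NLP"},
}

def syntax_tree(node: str) -> dict:
    """Tree holding the center node's attributes plus its 1-hop
    neighbors' attributes, grouped under labeled branches."""
    return {
        "feature": attributes[node]["feature"],
        "neighbors": {n: attributes[n] for n in graph[node]},
    }

def traverse(tree: dict, indent: int = 0) -> str:
    """Depth-first traversal that linearizes the tree into text."""
    lines = []
    for key, value in tree.items():
        pad = "  " * indent
        if isinstance(value, dict):
            lines.append(f"{pad}{key}:")
            lines.append(traverse(value, indent + 1))
        else:
            lines.append(f"{pad}{key}: {value}")
    return "\n".join(lines)

prompt = (
    "Classify the center node given its graph context.\n"
    + traverse(syntax_tree("paper_0"))
    + "\nAnswer with one label (ML or NLP):"
)
print(prompt)  # this string would be sent to an LLM for in-context learning
```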
Related papers
- Graph Linearization Methods for Reasoning on Graphs with Large Language Models [25.3545522174459]
Graphs should be linearized to reflect certain properties of natural language text, such as local dependency and global alignment.
We develop several graph linearization methods based on graph centrality, degeneracy, and node relabeling schemes.
Our work introduces novel graph representations suitable for LLMs, contributing to the potential integration of graph machine learning with the trend of multi-modal processing.
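A hedged sketch of one such scheme — ordering nodes by degree centrality and relabeling them before emitting an edge list as text — assuming a plain adjacency-dict representation (the paper's exact linearization methods may differ):

```python
# Sketch of centrality-based graph linearization: order nodes by
# degree so the most connected nodes appear first in the text.
# One plausible instantiation, not the paper's exact method.

graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}

def linearize_by_degree(graph: dict) -> str:
    # Rank nodes by degree centrality (highest degree first).
    order = sorted(graph, key=lambda n: len(graph[n]), reverse=True)
    rank = {node: i for i, node in enumerate(order)}   # node relabeling
    edges = {tuple(sorted((u, v))) for u, vs in graph.items() for v in vs}
    # Emit edges in the order induced by the centrality-based relabeling.
    lines = [f"node {rank[u]} is connected to node {rank[v]}"
             for u, v in sorted(edges, key=lambda e: (rank[e[0]], rank[e[1]]))]
    return "\n".join(lines)

print(linearize_by_degree(graph))
```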
arXiv Detail & Related papers (2024-10-25T11:51:37Z)
- Hierarchical Compression of Text-Rich Graphs via Large Language Models [63.75293588479027]
Text-rich graphs are prevalent in data mining contexts like e-commerce and academic graphs.
This paper introduces "Hierarchical Compression" (HiCom), a novel method to align the capabilities of LLMs with the structure of text-rich graphs.
HiCom can outperform both GNNs and LLM backbones for node classification on e-commerce and citation graphs.
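The summary suggests compressing long neighborhood text level by level rather than concatenating it all into one prompt. A rough, hypothetical sketch, where `summarize` stands in for an LLM-based compressor (the actual HiCom architecture may differ substantially):

```python
# Rough sketch of hierarchical compression for a text-rich graph:
# instead of concatenating every neighbor's text into one huge prompt,
# compress the neighborhood level by level. `summarize` is a stand-in
# for a learned compression module; the real HiCom method may differ.

def summarize(texts: list[str], budget: int = 80) -> str:
    """Placeholder compressor: truncated concatenation.
    In practice this would be an LLM or learned compression step."""
    return " | ".join(texts)[:budget]

def hierarchical_compress(node: str, text: dict, hops: dict) -> str:
    # Compress the 2-hop texts first, then fold them into the 1-hop level.
    level2 = summarize([text[n] for n in hops[node][2]])
    level1 = summarize([text[n] for n in hops[node][1]] + [level2])
    return f"{text[node]} [context: {level1}]"

text = {"a": "GNN survey", "b": "message passing", "c": "attention", "d": "sampling"}
hops = {"a": {1: ["b", "c"], 2: ["d"]}}   # toy 1-hop / 2-hop neighborhoods
print(hierarchical_compress("a", text, hops))
```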
arXiv Detail & Related papers (2024-06-13T07:24:46Z)
- Joint Embeddings for Graph Instruction Tuning [0.0]
This work explores the integration of the graph modality in Large Language Models (LLMs) for general graph instruction following tasks.
It aims to produce a deep learning model that enhances an underlying LLM with graph embeddings and trains it to understand them.
The approach performs significantly better than a graph-to-text approach and remains consistent even for larger graphs.
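A common way to realize this idea is to project GNN node embeddings into the LLM's token-embedding space with a small trained adapter and prepend them to the text embeddings. A minimal PyTorch sketch under that assumption (module names and dimensions are hypothetical):

```python
import torch
import torch.nn as nn

# Sketch of injecting graph embeddings into an LLM: a small projector
# maps GNN node embeddings into the LLM's token-embedding space, and
# the projected vectors are prepended to the embedded instruction.
# Dimensions and names are hypothetical.

class GraphProjector(nn.Module):
    def __init__(self, gnn_dim: int = 128, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(gnn_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # (num_nodes, gnn_dim) -> (num_nodes, llm_dim)
        return self.proj(node_embeddings)

projector = GraphProjector()
graph_emb = torch.randn(5, 128)          # embeddings from a frozen GNN
text_emb = torch.randn(1, 12, 4096)      # embedded instruction tokens
graph_tokens = projector(graph_emb).unsqueeze(0)
inputs = torch.cat([graph_tokens, text_emb], dim=1)  # fed to the LLM
print(inputs.shape)  # torch.Size([1, 17, 4096])
```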
arXiv Detail & Related papers (2024-05-31T08:26:47Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
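The described mechanism — a GNN encoding neighbor structure into a soft "graph prompt" prepended to the LLM input — can be sketched roughly as follows, using mean pooling as a stand-in for the GNN (the real GPEFT design details are in the paper):

```python
import torch
import torch.nn as nn

# Rough sketch of a graph prompt in the GPEFT spirit: aggregate
# neighbor embeddings (here, trivial mean pooling stands in for a
# message-passing GNN) and map the result to soft prompt vectors.
# Aggregation and dimensions are simplifying assumptions.

class GraphPrompt(nn.Module):
    def __init__(self, dim: int = 256, llm_dim: int = 4096, prompt_len: int = 4):
        super().__init__()
        self.prompt_len = prompt_len
        self.to_prompt = nn.Linear(dim, llm_dim * prompt_len)

    def forward(self, node_emb, neighbor_embs):
        # Mean-pool neighbors as a stand-in for a GNN encoder.
        h = node_emb + neighbor_embs.mean(dim=0)
        return self.to_prompt(h).view(self.prompt_len, -1)  # (prompt_len, llm_dim)

gp = GraphPrompt()
prompt = gp(torch.randn(256), torch.randn(8, 256))
print(prompt.shape)  # torch.Size([4, 4096]) -> prepended to token embeddings
```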
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs [60.71360240206726]
Large language models (LLMs) suffer from hallucinations, especially on knowledge-intensive tasks.
Existing works propose to augment LLMs with individual text units retrieved from external knowledge corpora.
We propose a framework called Graph Chain-of-Thought (Graph-CoT) to augment LLMs with graphs by encouraging LLMs to reason on the graph iteratively.
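The iterative loop can be sketched as alternating an LLM reasoning step (decide what graph information is needed) with a graph lookup, until the model can answer. The function names, call format, and stopping convention below are illustrative assumptions:

```python
# Sketch of a Graph-CoT-style loop. `llm` is a stand-in for a
# chat-model call that here replays a canned reasoning trace.

graph = {"GraphText": {"authors": ["Zhao", "Tang"], "cites": ["Toolformer"]},
         "Toolformer": {"authors": ["Schick"], "cites": []}}

def llm(prompt: str) -> str:
    """Stand-in for an LLM call; replays a canned reasoning trace."""
    if "Toolformer" in prompt:
        return "FINISH: Schick"
    return "LOOKUP: Toolformer authors"

def graph_cot(question: str, max_steps: int = 5) -> str:
    context = question
    for _ in range(max_steps):
        step = llm(context)
        if step.startswith("FINISH:"):          # LLM has enough information
            return step.removeprefix("FINISH:").strip()
        node, field = step.removeprefix("LOOKUP:").strip().split()
        context += f"\n{node}.{field} = {graph[node][field]}"  # graph interaction
    return "no answer"

print(graph_cot("Who wrote the paper cited by GraphText?"))  # -> Schick
```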
arXiv Detail & Related papers (2024-04-10T15:41:53Z)
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating subgraph retrieval as a Prize-Collecting Steiner Tree optimization problem.
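In that formulation, query-relevant nodes get prizes and edges get costs, and the retrieved subgraph maximizes prize minus cost before being verbalized as RAG context. A heavily simplified greedy stand-in, just to illustrate the trade-off (real PCST solvers optimize globally):

```python
# Heavily simplified stand-in for a Prize-Collecting Steiner Tree
# retrieval step: grow a connected subgraph from the highest-prize
# node, adding a neighbor only when its prize exceeds the edge cost.
# Illustrative only; not a real PCST solver.

prizes = {"q": 5.0, "a": 2.0, "b": 0.1, "c": 1.5}            # query relevance
edges = {("q", "a"): 1.0, ("q", "b"): 1.0, ("a", "c"): 0.5}  # edge costs

def greedy_pcst(prizes, edges):
    tree = {max(prizes, key=prizes.get)}        # seed: most relevant node
    changed = True
    while changed:
        changed = False
        for (u, v), cost in edges.items():
            for src, dst in ((u, v), (v, u)):
                if src in tree and dst not in tree and prizes[dst] > cost:
                    tree.add(dst)               # prize outweighs edge cost
                    changed = True
    return tree

print(sorted(greedy_pcst(prizes, edges)))  # ['a', 'c', 'q'] -> RAG context
```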
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles this challenge by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- Can Language Models Solve Graph Problems in Natural Language? [51.28850846990929]
Large language models (LLMs) are increasingly adopted for a variety of tasks with implicit graphical structures.
We propose NLGraph, a benchmark of graph-based problem solving designed in natural language.
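A representative task of this kind states a graph entirely in text and asks, e.g., a connectivity question. A toy generator in the benchmark's spirit (the actual NLGraph prompt templates may differ):

```python
# Toy generator in the spirit of NLGraph: describe a graph in plain
# English, ask a connectivity question, and compute the ground truth
# with BFS. The template wording is an assumption.

from collections import deque

edges = [(0, 1), (1, 2), (3, 4)]

def make_prompt(edges, src, dst):
    described = ", ".join(f"node {u} and node {v}" for u, v in edges)
    return (f"In an undirected graph, there is an edge between {described}. "
            f"Is there a path between node {src} and node {dst}?")

def ground_truth(edges, src, dst):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, queue = {src}, deque([src])
    while queue:                      # breadth-first search from src
        for n in adj.get(queue.popleft(), []):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return dst in seen

print(make_prompt(edges, 0, 2))              # prompt sent to the LLM
print("answer:", ground_truth(edges, 0, 2))  # answer: True
```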
arXiv Detail & Related papers (2023-05-17T08:29:21Z)
- Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT [10.879701971582502]
We aim to develop a large language model (LLM) with reasoning ability over complex graph data.
Inspired by the latest ChatGPT and Toolformer models, we propose the Graph-ToolFormer framework, which teaches LLMs to use external graph reasoning API tools via prompts augmented by ChatGPT.
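The inference-time mechanism — the LLM emitting inline calls to external graph APIs that a post-processor executes — can be sketched with a tiny parser. The `GR(...)` call syntax below is a simplified assumption about the framework's annotation format:

```python
import re

# Sketch of tool-augmented graph reasoning: the LLM's output contains
# inline calls to external graph APIs, which a post-processor detects
# and executes. The GR(...) syntax is a simplified assumption.

graph = {0: [1, 2], 1: [0], 2: [0]}

TOOLS = {  # registry of external graph reasoning APIs
    "order": lambda g: len(g),                               # node count
    "size": lambda g: sum(len(v) for v in g.values()) // 2,  # edge count
}

def execute_tool_calls(llm_output: str) -> str:
    """Replace each GR(tool) annotation with the tool's actual result."""
    def run(match):
        return str(TOOLS[match.group(1)](graph))
    return re.sub(r"GR\((\w+)\)", run, llm_output)

llm_output = "The graph has GR(order) nodes and GR(size) edges."
print(execute_tool_calls(llm_output))
# -> The graph has 3 nodes and 2 edges.
```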
arXiv Detail & Related papers (2023-04-10T05:25:54Z)