When Structure Doesn't Help: LLMs Do Not Read Text-Attributed Graphs as Effectively as We Expected
- URL: http://arxiv.org/abs/2511.16767v1
- Date: Thu, 20 Nov 2025 19:34:58 GMT
- Title: When Structure Doesn't Help: LLMs Do Not Read Text-Attributed Graphs as Effectively as We Expected
- Authors: Haotian Xu, Yuning You, Tengfei Ma
- Abstract summary: Large language models (LLMs) have excelled at understanding natural language and integrating cross-modal signals. Recent work has explored how different strategies for encoding graph structure affect LLM performance on text-attributed graphs. We show that explicit structural priors are often unnecessary and, in some cases, counterproductive when powerful language models are involved.
- Score: 10.031229573133709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs provide a unified representation of semantic content and relational structure, making them a natural fit for domains such as molecular modeling, citation networks, and social graphs. Meanwhile, large language models (LLMs) have excelled at understanding natural language and integrating cross-modal signals, sparking interest in their potential for graph reasoning. Recent work has explored this by either designing template-based textual encodings of graph structure or using graph neural networks (GNNs) to encode structural information. In this study, we investigate how different strategies for encoding graph structure affect LLM performance on text-attributed graphs. Surprisingly, our systematic experiments reveal that: (i) LLMs leveraging only node textual descriptions already achieve strong performance across tasks; and (ii) most structural encoding strategies offer marginal or even negative gains. We show that explicit structural priors are often unnecessary and, in some cases, counterproductive when powerful language models are involved. This represents a significant departure from traditional graph learning paradigms and highlights the need to rethink how structure should be represented and utilized in the LLM era. Our study systematically challenges the foundational assumption that structure is inherently beneficial for LLM-based graph reasoning, opening the door to new, semantics-driven approaches for graph learning.
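To make the two strategies concrete, here is a minimal sketch of node classification on a text-attributed graph, contrasting a text-only prompt with a template-based, structure-augmented prompt. The toy graph, prompt wording, and the `query_llm` stub are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: text-only vs. structure-augmented prompting for node
# classification on a text-attributed graph. Illustrative only.

# Toy text-attributed graph: node id -> text, plus an adjacency list.
node_text = {
    0: "A transformer-based model for molecular property prediction.",
    1: "Graph neural networks for citation recommendation.",
    2: "Scaling laws for large language models.",
}
edges = {0: [1], 1: [0, 2], 2: [1]}

def text_only_prompt(node_id: int) -> str:
    """Strategy (i): the LLM sees only the target node's own text."""
    return (
        "Classify the research area of this paper.\n"
        f"Paper abstract: {node_text[node_id]}\n"
        "Answer with one label:"
    )

def structure_augmented_prompt(node_id: int) -> str:
    """Strategy (ii): a template-based encoding that also verbalizes
    the texts of 1-hop graph neighbors as structural context."""
    context = "\n".join(f"- Neighbor: {node_text[n]}" for n in edges[node_id])
    return (
        "Classify the research area of this paper.\n"
        f"Paper abstract: {node_text[node_id]}\n"
        f"Cited/citing papers (1-hop neighbors):\n{context}\n"
        "Answer with one label:"
    )

def query_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call (an assumption)."""
    return "<model answer>"

# The paper's finding, paraphrased: text_only_prompt is already strong,
# and structure_augmented_prompt often adds little or even hurts.
print(query_llm(text_only_prompt(1)))
print(query_llm(structure_augmented_prompt(1)))
```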
Related papers
- GILT: An LLM-Free, Tuning-Free Graph Foundational Model for In-Context Learning [50.40400074353263]
Graph Neural Networks (GNNs) are powerful tools for processing relational data but often struggle to generalize to unseen graphs. We introduce the Graph In-context Learning Transformer (GILT), a framework built on an LLM-free and tuning-free architecture.
arXiv Detail & Related papers (2025-10-06T08:09:15Z)
- G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge [88.82814893945077]
Large language models (LLMs) excel at complex reasoning but remain limited by static and incomplete parametric knowledge. Recent graph-enhanced RAG (GraphRAG) attempts to bridge this gap by constructing tailored graphs and enabling LLMs to reason on them. G-reasoner is a unified framework that integrates graph and language foundation models for reasoning over diverse graph-structured knowledge.
arXiv Detail & Related papers (2025-09-29T04:38:12Z)
- Are Large Language Models In-Context Graph Learners? [31.172657860606297]
Large language models (LLMs) have remarkable in-context reasoning capabilities across a wide range of tasks. However, they struggle to handle structured data, such as graphs, due to their lack of understanding of non-Euclidean structures. We show that learning on graph data can be conceptualized as a retrieval-augmented generation (RAG) process, and we propose a series of RAG frameworks to enhance the in-context learning capabilities of LLMs for graph learning tasks.
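As a rough sketch of that RAG framing (the word-overlap retrieval score and prompt layout below are illustrative assumptions, not the paper's actual design), one can retrieve the most relevant labeled node texts and supply them as in-context demonstrations:

```python
# Sketch: graph learning framed as retrieval-augmented generation (RAG).
# Jaccard word overlap stands in for a real retriever (an assumption).

def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve_context(query_text, labeled_nodes, k=2):
    """Retrieve the k labeled node texts most similar to the query node."""
    ranked = sorted(labeled_nodes,
                    key=lambda n: jaccard(query_text, n["text"]),
                    reverse=True)
    return ranked[:k]

def rag_prompt(query_text, labeled_nodes, k=2):
    """Frame node classification as generation conditioned on retrieved nodes."""
    examples = retrieve_context(query_text, labeled_nodes, k)
    demos = "\n".join(f'Text: {n["text"]}\nLabel: {n["label"]}' for n in examples)
    return f"{demos}\nText: {query_text}\nLabel:"

labeled = [
    {"text": "graph neural networks for molecules", "label": "GraphML"},
    {"text": "instruction tuning of language models", "label": "NLP"},
]
print(rag_prompt("message passing networks for chemistry", labeled))
```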
arXiv Detail & Related papers (2025-02-19T09:14:19Z)
- Each Graph is a New Language: Graph Learning with LLMs [9.22463167477865]
We present Graph-Defined Language for Large Language Model (GDL4LLM) to transfer powerful language understanding capabilities to graph-structured data. GDL4LLM translates graphs into a graph language corpus instead of graph descriptions and pre-trains LLMs on this corpus to adequately understand graph structures.
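One plausible way to build such a graph language corpus is to serialize random walks over node tokens; the sketch below rests on that assumption and is not necessarily GDL4LLM's exact tokenization scheme:

```python
import random

# Sketch: turning a graph into a "language" corpus by serializing random
# walks as token sequences, which an LM can then be pre-trained on.
# Assumption: random-walk serialization is one plausible construction.

edges = {"v0": ["v1", "v2"], "v1": ["v0"], "v2": ["v0", "v1"]}

def random_walk(start: str, length: int, rng: random.Random) -> list:
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(edges[walk[-1]]))
    return walk

def build_corpus(num_walks=4, length=5, seed=0):
    """Each walk becomes one 'sentence' whose words are node tokens."""
    rng = random.Random(seed)
    nodes = list(edges)
    return [" ".join(random_walk(rng.choice(nodes), length, rng))
            for _ in range(num_walks)]

for sentence in build_corpus():
    print(sentence)  # e.g. "v2 v0 v1 v0 v2" -- structure as token order
```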
arXiv Detail & Related papers (2025-01-20T13:20:41Z)
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess whether large language models can understand graph patterns based on either terminological or topological descriptions. The benchmark encompasses both synthetic and real datasets and a variety of models, with a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- GUNDAM: Aligning Large Language Models with Graph Understanding [10.080136100700692]
We introduce the Graph Understanding for Natural language Driven Analytical Model (GUNDAM).
This model adapts LLMs to better understand and engage with the structure of graph data, enabling them to perform complex reasoning tasks by leveraging the graph's structure itself.
arXiv Detail & Related papers (2024-09-30T07:59:10Z)
- LangTopo: Aligning Language Descriptions of Graphs with Tokenized Topological Modeling [10.907949155931474]
We introduce LangTopo, which aligns graph structure modeling with natural language understanding at the token level.
We demonstrate the effectiveness of our proposed method on multiple datasets.
arXiv Detail & Related papers (2024-06-19T06:20:22Z)
- Parameter-Efficient Tuning Large Language Models for Graph Representation Learning [62.26278815157628]
We introduce Graph-aware Parameter-Efficient Fine-Tuning (GPEFT), a novel approach for efficient graph representation learning.
We use a graph neural network (GNN) to encode structural information from neighboring nodes into a graph prompt.
We validate our approach through comprehensive experiments conducted on 8 different text-rich graphs, observing an average improvement of 2% in hit@1 and Mean Reciprocal Rank (MRR) in link prediction evaluations.
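At a high level, this graph-prompt mechanism resembles a GNN-derived soft prompt prepended to the LLM's input embeddings. The sketch below is a minimal illustration under assumed dimensions and a simple mean-pooling aggregator, not GPEFT's actual architecture:

```python
import torch
import torch.nn as nn

# Sketch of a graph prompt: a small GNN-style encoder summarizes a node's
# neighborhood, and a projection maps that summary into the LLM's embedding
# space as one soft-prompt vector. Dimensions and pooling are assumptions.

class GraphPromptEncoder(nn.Module):
    def __init__(self, node_dim: int, llm_dim: int):
        super().__init__()
        self.gnn_layer = nn.Linear(2 * node_dim, node_dim)  # simple 1-hop GNN
        self.to_llm = nn.Linear(node_dim, llm_dim)          # project into LLM space

    def forward(self, node_feat, neighbor_feats):
        # Aggregate neighbors by mean pooling, then combine with the node itself.
        agg = neighbor_feats.mean(dim=0)
        h = torch.relu(self.gnn_layer(torch.cat([node_feat, agg])))
        return self.to_llm(h)  # one soft-prompt vector for the LLM

node_dim, llm_dim = 64, 4096
encoder = GraphPromptEncoder(node_dim, llm_dim)
node = torch.randn(node_dim)
neighbors = torch.randn(5, node_dim)
graph_prompt = encoder(node, neighbors)
# In a parameter-efficient setup one would prepend graph_prompt to the token
# embeddings of the node's text and keep the LLM frozen (an assumption here).
print(graph_prompt.shape)  # torch.Size([4096])
```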
arXiv Detail & Related papers (2024-04-28T18:36:59Z)
- Graph Language Models [18.75364157933661]
We introduce a novel LM type, the Graph Language Model (GLM), that integrates the strengths of both approaches and mitigates their weaknesses.
We design the GLM's architecture to incorporate graph biases, thereby promoting effective knowledge distribution within the graph.
Empirical evaluations on relation classification tasks show that GLM embeddings surpass both LM- and GNN-based baselines in supervised and zero-shot settings.
arXiv Detail & Related papers (2024-01-13T16:09:49Z)
- Large Language Models on Graphs: A Comprehensive Survey [77.16803297418201]
We provide a systematic review of scenarios and techniques related to large language models on graphs.
We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs.
We discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets.
arXiv Detail & Related papers (2023-12-05T14:14:27Z)
- Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs [57.052160123387104]
We present the Disentangled Graph-Text Learner (DGTL) model, which enhances the reasoning and prediction capabilities of LLMs for text-attributed graphs (TAGs).
Our proposed DGTL model incorporates graph structure information through tailored disentangled graph neural network (GNN) layers.
Experimental evaluations demonstrate that the proposed DGTL model achieves superior or comparable performance to state-of-the-art baselines.
arXiv Detail & Related papers (2023-10-27T14:00:04Z)
- Can LLMs Effectively Leverage Graph Structural Information through Prompts, and Why? [18.328637750057037]
Large language models (LLMs) are gaining increasing attention for their capability to process graphs with rich text attributes.
We aim to understand why the incorporation of structural information inherent in graph data can improve the prediction performance of LLMs.
arXiv Detail & Related papers (2023-09-28T16:58:37Z)