Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion
- URL: http://arxiv.org/abs/2510.10849v1
- Date: Sun, 12 Oct 2025 23:25:16 GMT
- Title: Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion
- Authors: Donald Loveland, Yao-An Yang, Danai Koutra
- Abstract summary: We propose GLANCE, a framework that invokes an LLM to refine a GNN's prediction. We show that GLANCE achieves the best performance balance across node subgroups. Our findings highlight the value of adaptive, node-aware GNN-LLM architectures.
- Score: 11.093748269263486
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning on text-attributed graphs has motivated the use of Large Language Models (LLMs) for graph learning. However, most fusion strategies are applied uniformly across all nodes and attain only small overall performance gains. We argue this result stems from aggregate metrics that obscure when LLMs provide benefit, inhibiting actionable signals for new strategies. In this work, we reframe LLM-GNN fusion around nodes where GNNs typically falter. We first show that performance can significantly differ between GNNs and LLMs, with each excelling on distinct structural patterns, such as local homophily. To leverage this finding, we propose GLANCE (GNN with LLM Assistance for Neighbor- and Context-aware Embeddings), a framework that invokes an LLM to refine a GNN's prediction. GLANCE employs a lightweight router that, given inexpensive per-node signals, decides whether to query the LLM. Since the LLM calls are non-differentiable, the router is trained with an advantage-based objective that compares the utility of querying the LLM against relying solely on the GNN. Across multiple benchmarks, GLANCE achieves the best performance balance across node subgroups, with significant gains on heterophilous nodes (up to $+13\%$) while simultaneously attaining top overall performance. Our findings highlight the value of adaptive, node-aware GNN-LLM architectures, where selectively invoking the LLM enables scalable deployment on large graphs without incurring high computational costs.
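A minimal sketch of the routing idea described above, assuming a PyTorch setup: inexpensive per-node signals (here, GNN confidence, prediction entropy, local homophily over predicted labels, and log-degree) feed a small MLP router, which is trained with a REINFORCE-style advantage objective comparing the utility of querying the LLM against relying on the GNN alone. The signal set, reward definition, and estimator details are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class Router(nn.Module):
    """Lightweight router: per-node signals in, probability of an LLM query out."""
    def __init__(self, num_signals: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_signals, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, signals: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.mlp(signals)).squeeze(-1)

def per_node_signals(gnn_logits, pred_labels, edge_index, deg):
    """Cheap per-node signals: confidence, entropy, local homophily of
    GNN-predicted labels, and log-degree (illustrative choices)."""
    probs = gnn_logits.softmax(dim=-1)
    conf = probs.max(dim=-1).values
    ent = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
    src, dst = edge_index                           # shape [2, num_edges]
    same = (pred_labels[src] == pred_labels[dst]).float()
    degf = deg.float()
    homophily = torch.zeros_like(conf).scatter_add_(0, src, same) / degf.clamp_min(1.0)
    return torch.stack([conf, ent, homophily, degf.log1p()], dim=-1)

def advantage_loss(p_query, reward_llm, reward_gnn, llm_cost=0.05):
    """REINFORCE on the binary query decision: the advantage is positive when
    the (cost-penalized) LLM answer beats relying on the GNN alone."""
    advantage = (reward_llm - llm_cost) - reward_gnn
    action = torch.bernoulli(p_query)               # 1 = query the LLM
    log_pi = torch.where(action.bool(),
                         p_query.clamp_min(1e-9).log(),
                         (1.0 - p_query).clamp_min(1e-9).log())
    signed_adv = torch.where(action.bool(), advantage, -advantage)
    return -(signed_adv.detach() * log_pi).mean()
```

Here `reward_llm` and `reward_gnn` would be observed on labeled training nodes (e.g., 1 if the corresponding prediction is correct, else 0), so both counterfactual rewards are available when computing the advantage; at inference, only nodes with high routing probability incur an LLM call.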
Related papers
- Enhancing Spectral Graph Neural Networks with LLM-Predicted Homophily [48.135717446964385]
Spectral Graph Neural Networks (SGNNs) have achieved remarkable performance in tasks such as node classification. We propose a novel framework that leverages Large Language Models (LLMs) to estimate the homophily level of a graph. Our framework consistently improves performance over strong SGNN baselines (a toy sketch of the idea follows this entry).
arXiv Detail & Related papers (2025-06-17T06:17:19Z)
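As a toy illustration of the entry above (not the paper's actual procedure), one way an LLM-derived homophily estimate could be obtained and used: predict each node's label from its text, measure edgewise label agreement, and let that steer the spectral filter choice. `query_llm` is a hypothetical stand-in for an LLM call.

```python
from typing import Callable

def estimate_homophily(node_texts: list[str],
                       edges: list[tuple[int, int]],
                       query_llm: Callable[[str], str]) -> float:
    """Fraction of edges whose endpoints get the same LLM-predicted label."""
    labels = [query_llm(f"One-word topic label for this text: {t}")
              for t in node_texts]
    agree = sum(labels[u] == labels[v] for u, v in edges)
    return agree / max(len(edges), 1)

def choose_filter(homophily: float) -> str:
    # Homophilous graphs favor low-pass (smoothing) spectral filters;
    # heterophilous graphs favor high-pass (difference) filters.
    return "low-pass" if homophily >= 0.5 else "high-pass"
```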
- Refining Interactions: Enhancing Anisotropy in Graph Neural Networks with Language Semantics [6.273224130511677]
We introduce LanSAGNN (Language Semantic Anisotropic Graph Neural Network), a framework that extends the concept of anisotropic GNNs to the natural language level. We propose an efficient dual-layer LLM fine-tuning architecture to better align LLM outputs with graph tasks.
arXiv Detail & Related papers (2025-04-02T07:32:45Z)
- GL-Fusion: Rethinking the Combination of Graph Neural Network and Large Language model [63.774726052837266]
We introduce a new architecture that deeply integrates Graph Neural Networks (GNNs) with Large Language Models (LLMs). It features three key innovations: (1) Structure-Aware Transformers, which incorporate the GNN's message-passing capabilities directly into the LLM's transformer layers; (2) Graph-Text Cross-Attention, which processes full, uncompressed text from graph nodes and edges (see the sketch after this entry); and (3) a GNN-LLM Twin Predictor, enabling the LLM's flexible autoregressive generation alongside the GNN's scalable one-pass prediction.
arXiv Detail & Related papers (2024-12-08T05:49:58Z)
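The Graph-Text Cross-Attention component from the GL-Fusion entry can be sketched generically: node states act as queries over uncompressed text-token embeddings. This is a minimal sketch of standard cross-attention under that reading, not GL-Fusion's actual implementation.

```python
import torch
import torch.nn as nn

class GraphTextCrossAttention(nn.Module):
    """Toy sketch: graph-node states attend over raw text-token embeddings."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, node_states: torch.Tensor, text_tokens: torch.Tensor):
        # node_states: [batch, num_nodes, dim]; text_tokens: [batch, num_tokens, dim]
        attended, _ = self.attn(query=node_states, key=text_tokens, value=text_tokens)
        return self.norm(node_states + attended)  # residual connection + norm
```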
- Can Large Language Models Act as Ensembler for Multi-GNNs? [10.126044401037968]
Graph Neural Networks (GNNs) have emerged as powerful models for learning from graph-structured data. However, GNNs lack the semantic understanding needed to exploit rich textual node attributes, limiting their effectiveness in applications. This research advances text-attributed graph ensemble learning by providing a robust and superior solution for integrating semantic and structural information.
arXiv Detail & Related papers (2024-10-22T08:48:52Z)
- How to Make LLMs Strong Node Classifiers? [70.14063765424012]
Language Models (LMs) are challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs). We propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks.
arXiv Detail & Related papers (2024-10-03T08:27:54Z)
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks [51.19110891434727]
Large Language Models (LLMs), with their pretrained knowledge and powerful semantic comprehension, have recently shown a remarkable ability to benefit applications using vision and text data.
E-LLaGNN is a framework with an on-demand LLM service that enriches the message-passing procedure of graph learning by enhancing a limited fraction of nodes in the graph (a toy sketch of the selection step follows this entry).
arXiv Detail & Related papers (2024-07-20T22:09:42Z)
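A toy sketch of the "enhance a limited fraction of nodes" step from the E-LLaGNN entry, with hypothetical helpers (`embed_with_llm` is a stand-in for an LLM featurization call); the paper's actual selection heuristics may differ.

```python
def enhance_subset(node_texts, features, scores, budget, embed_with_llm):
    """LLM-enhance only the top-`budget` fraction of nodes, ranked by a
    heuristic score (e.g., GNN uncertainty); others keep original features."""
    k = max(1, int(budget * len(node_texts)))
    chosen = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    enhanced = list(features)
    for i in chosen:
        enhanced[i] = embed_with_llm(node_texts[i])  # LLM call, on a budget
    return enhanced
```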
- LOGIN: A Large Language Model Consulted Graph Neural Network Training Framework [30.54068909225463]
We aim to streamline the GNN design process and leverage the advantages of Large Language Models (LLMs) to improve the performance of GNNs on downstream tasks.
We formulate a new paradigm, coined "LLMs-as-Consultants," which integrates LLMs with GNNs in an interactive manner.
We empirically evaluate the effectiveness of LOGIN on node classification tasks across both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2024-05-22T18:17:20Z)
- Label-free Node Classification on Graphs with Large Language Models (LLMS) [46.937442239949256]
This work introduces LLM-GNN, a pipeline for label-free node classification on graphs with Large Language Models. It leverages the strengths of both GNNs and LLMs while mitigating their limitations. In particular, LLM-GNN can achieve an accuracy of 74.9% on a vast-scale dataset at a cost of less than 1 dollar (a toy sketch of the pipeline follows this entry).
arXiv Detail & Related papers (2023-10-07T03:14:11Z)
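A toy sketch of the label-free pipeline from the LLM-GNN entry: the LLM annotates a small budget of nodes, and the GNN is trained on those pseudo-labels. `query_llm` and `train_gnn` are hypothetical stand-ins; the actual pipeline involves careful node selection and confidence filtering omitted here.

```python
def label_free_pipeline(node_texts, graph, query_llm, train_gnn, budget=100):
    """LLM-annotate-then-GNN-train: pseudo-label a few nodes, train, predict."""
    seed = list(range(min(budget, len(node_texts))))   # naive node selection
    pseudo = {i: query_llm(f"Label this node's text: {node_texts[i]}")
              for i in seed}
    gnn = train_gnn(graph, pseudo)       # supervised on pseudo-labels only
    return gnn.predict(graph)            # infer labels for remaining nodes
```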
- Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs [59.74814230246034]
Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities.
We investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors (a toy contrast of the two follows this list).
arXiv Detail & Related papers (2023-07-07T05:31:31Z)
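To make the two pipelines from the last entry concrete, a toy contrast with hypothetical helpers (`embed_with_llm`, `query_llm`, and `gnn` are stand-ins): an enhancer uses the LLM to enrich node features for a GNN to classify, while a predictor has the LLM answer directly from serialized node text and neighborhood.

```python
def llm_as_enhancer(node_texts, embed_with_llm, gnn):
    """LLMs-as-Enhancers: the LLM enriches node features; the GNN predicts."""
    features = [embed_with_llm(text) for text in node_texts]
    return gnn(features)

def llm_as_predictor(node_texts, neighbors, query_llm):
    """LLMs-as-Predictors: the LLM predicts directly from serialized context."""
    predictions = []
    for i, text in enumerate(node_texts):
        context = " | ".join(node_texts[j] for j in neighbors[i])
        prompt = f"Node text: {text}\nNeighbor texts: {context}\nLabel:"
        predictions.append(query_llm(prompt))
    return predictions
```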