Knowledge Retrieval in LLM Gaming: A Shift from Entity-Centric to Goal-Oriented Graphs
- URL: http://arxiv.org/abs/2505.18607v1
- Date: Sat, 24 May 2025 09:09:20 GMT
- Title: Knowledge Retrieval in LLM Gaming: A Shift from Entity-Centric to Goal-Oriented Graphs
- Authors: Jonathan Leung, Yongjie Wang, Zhiqi Shen
- Abstract summary: Large Language Models (LLMs) demonstrate impressive general capabilities but often struggle with step-by-step reasoning, especially in complex applications such as games. We propose a novel framework based on Goal-Oriented Graphs (GoGs), where each node represents a goal and its associated attributes, and edges encode logical dependencies between goals. Our method significantly enhances the reasoning ability of LLMs in game-playing tasks, as demonstrated by extensive experiments on the Minecraft testbed, outperforming GraphRAG and other baselines.
- Score: 6.636092764694501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) demonstrate impressive general capabilities but often struggle with step-by-step reasoning, especially in complex applications such as games. While retrieval-augmented methods like GraphRAG attempt to bridge this gap through cross-document extraction and indexing, their fragmented entity-relation graphs and overly dense local connectivity hinder the construction of coherent reasoning paths. In this paper, we propose a novel framework based on Goal-Oriented Graphs (GoGs), where each node represents a goal and its associated attributes, and edges encode logical dependencies between goals. This structure enables explicit retrieval of reasoning paths by first identifying high-level goals and recursively retrieving their subgoals, forming coherent reasoning chains to guide LLM prompting. Our method significantly enhances the reasoning ability of LLMs in game-playing tasks, as demonstrated by extensive experiments on the Minecraft testbed, outperforming GraphRAG and other baselines.
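To make the abstract's retrieval procedure concrete, the sketch below encodes a tiny Goal-Oriented Graph and recursively retrieves subgoals to form a reasoning chain for prompting. The class names, the Minecraft-style recipe details, and the linearization are hypothetical illustrations, not the paper's implementation.

```python
# Illustrative sketch of a Goal-Oriented Graph (GoG); names and recipe
# details are hypothetical, not taken from the paper's implementation.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str                                        # goal identifier
    attributes: dict = field(default_factory=dict)   # e.g. tools, quantities
    subgoals: list = field(default_factory=list)     # edges: logical dependencies

def retrieve_chain(goal: Goal, seen=None) -> list:
    """Depth-first recursion over subgoals, emitting prerequisites
    before the goals that depend on them."""
    if seen is None:
        seen = set()
    if goal.name in seen:
        return []
    seen.add(goal.name)
    chain = []
    for sub in goal.subgoals:
        chain += retrieve_chain(sub, seen)
    chain.append(goal)
    return chain

# Hypothetical Minecraft-style example.
planks = Goal("craft planks", {"from": "logs"})
sticks = Goal("craft sticks", {"from": "planks"}, [planks])
cobble = Goal("mine cobblestone", {"tool": "wooden_pickaxe"})
pickaxe = Goal("craft stone_pickaxe", {"station": "crafting_table"},
               [sticks, cobble])

steps = retrieve_chain(pickaxe)
prompt = "Follow these steps in order:\n" + "\n".join(
    f"{i + 1}. {g.name} ({g.attributes})" for i, g in enumerate(steps))
print(prompt)
```

Visiting subgoals before the goals that need them yields a topologically ordered chain, which is what lets the resulting prompt read as coherent step-by-step guidance rather than a bag of retrieved facts.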
Related papers
- Inference Scaled GraphRAG: Improving Multi Hop Question Answering on Knowledge Graphs [15.036480111358369]
Large Language Models (LLMs) have achieved impressive capabilities in language understanding and generation. They continue to underperform on knowledge-intensive reasoning tasks due to limited access to structured context and multi-hop information. We introduce Inference-Scaled GraphRAG, a novel framework that enhances LLM-based graph reasoning by applying inference-time compute scaling.
arXiv Detail & Related papers (2025-06-24T19:31:03Z)
- Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z)
- Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation [75.9865035064794]
Large language models (LLMs) have demonstrated remarkable capabilities, but still struggle with issues like hallucinations and outdated information. Retrieval-augmented generation (RAG) addresses these issues by grounding LLM outputs in external knowledge with an Information Retrieval (IR) system. We propose Align-GRAG, a novel reasoning-guided dual alignment framework in the post-retrieval phase.
arXiv Detail & Related papers (2025-05-22T05:15:27Z)
- Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs Reasoning [73.2950349728376]
Large language models (LLMs) have demonstrated remarkable success across a wide range of tasks. However, they still encounter challenges in reasoning tasks that require understanding and inferring relationships between pieces of information. This challenge is particularly pronounced in tasks involving multi-step processes, such as logical reasoning and multi-hop question answering. We propose Reasoning with Graphs (RwG), which first constructs explicit graphs from the context.
arXiv Detail & Related papers (2025-01-14T05:18:20Z)
- LEGO-GraphRAG: Modularizing Graph-based Retrieval-Augmented Generation for Design Space Exploration [17.514586423233872]
We propose LEGO-GraphRAG, a modular framework that enables fine-grained decomposition of the GraphRAG workflow. Our framework facilitates comprehensive empirical studies of GraphRAG on large-scale real-world graphs and diverse query sets.
arXiv Detail & Related papers (2024-11-06T15:32:28Z)
- NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models [26.739650151993928]
Graphs are a fundamental data structure for representing relationships in real-world scenarios.
Applying Large Language Models (LLMs) to graph-related tasks poses significant challenges.
We introduce Node Tokenizer for Large Language Models (NT-LLM), a novel framework that efficiently encodes graph structures.
arXiv Detail & Related papers (2024-10-14T17:21:57Z)
- Debate on Graph: a Flexible and Reliable Reasoning Framework for Large Language Models [33.662269036173456]
Large Language Models (LLMs) may suffer from hallucinations in real-world applications due to the lack of relevant knowledge.
Knowledge Graph Question Answering (KGQA) serves as a critical touchstone for the integration of LLMs and knowledge graphs.
We propose an interactive KGQA framework that leverages the interactive learning capabilities of LLMs to perform reasoning and Debating over Graphs (DoG).
arXiv Detail & Related papers (2024-09-05T01:11:58Z)
- Revisiting the Graph Reasoning Ability of Large Language Models: Case Studies in Translation, Connectivity and Shortest Path [53.71787069694794]
We focus on the graph reasoning ability of Large Language Models (LLMs). We revisit the ability of LLMs on three fundamental graph tasks: graph description translation, graph connectivity, and the shortest-path problem. Our findings suggest that LLMs can fail to understand graph structures through text descriptions and exhibit varying performance on all these fundamental tasks (a minimal sketch of these tasks appears after this list).
arXiv Detail & Related papers (2024-08-18T16:26:39Z)
- Can LLM Graph Reasoning Generalize beyond Pattern Memorization? [46.93972334344908]
We evaluate whether large language models (LLMs) can go beyond semantic, numeric, and structural reasoning patterns in the synthetic training data and improve utility on real-world graph-based tasks.
We find that while post-training alignment is most promising for real-world tasks, empowering LLM graph reasoning to go beyond pattern memorization remains an open research question.
arXiv Detail & Related papers (2024-06-23T02:59:15Z)
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- Learning Rational Subgoals from Demonstrations and Instructions [71.86713748450363]
We present a framework for learning useful subgoals that support efficient long-term planning to achieve novel goals.
At the core of our framework is a collection of rational subgoals (RSGs), which are essentially binary classifiers over the environmental states.
Given a goal description, the learned subgoals and the derived dependencies facilitate off-the-shelf planning algorithms, such as A* and RRT.
arXiv Detail & Related papers (2023-03-09T18:39:22Z)
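The rational-subgoal entry directly above describes a concrete mechanism: binary classifiers over environment states, plus learned dependencies that can seed off-the-shelf planners such as A*. Here is a minimal sketch of that idea; the toy environment, names, and linearization are hypothetical, not drawn from the paper.

```python
# Minimal sketch of rational subgoals (RSGs) as binary classifiers over
# environment states; the toy environment and names are hypothetical.
from typing import Callable, Dict, List

State = Dict[str, bool]
RSG = Callable[[State], bool]   # returns True once the subgoal is achieved

# Toy subgoal classifiers.
has_wood: RSG = lambda s: s.get("wood", False)
has_axe: RSG = lambda s: s.get("axe", False)

# Learned dependencies: each subgoal lists the subgoals it presupposes.
deps = {"has_axe": [], "has_wood": ["has_axe"]}

def ordered_subgoals(goal: str, deps) -> List[str]:
    """Linearize the dependency graph so prerequisites come first;
    the result can seed an off-the-shelf planner such as A*."""
    order, seen = [], set()
    def visit(g):
        if g in seen:
            return
        seen.add(g)
        for d in deps[g]:
            visit(d)
        order.append(g)
    visit(goal)
    return order

# Skip subgoals whose classifier already fires in the current state.
state: State = {"axe": True}
classifiers = {"has_axe": has_axe, "has_wood": has_wood}
plan = [g for g in ordered_subgoals("has_wood", deps)
        if not classifiers[g](state)]
print(plan)  # ['has_wood'] -- the axe subgoal is already satisfied
```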
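As referenced in the "Revisiting the Graph Reasoning Ability of Large Language Models" entry above, the sketch below builds the kind of textual graph description such probes feed to an LLM and computes ground-truth answers for connectivity and shortest path with breadth-first search. The text encoding is a hypothetical format, not the paper's.

```python
# Sketch of connectivity / shortest-path probes for LLM graph reasoning;
# the edge-list prompt format below is a hypothetical choice.
from collections import deque

edges = [(0, 1), (1, 2), (2, 3)]            # toy undirected graph
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

# Graph-description translation: serialize the graph for the prompt.
prompt = "The graph has edges: " + ", ".join(f"({u},{v})" for u, v in edges)

def shortest_path_len(src, dst):
    """BFS ground truth for connectivity and shortest path."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for w in adj.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return None                              # disconnected

d = shortest_path_len(0, 3)
print(prompt)
print("connected:", d is not None, "| shortest path length:", d)
```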
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.