Guided Navigation in Knowledge-Dense Environments: Structured Semantic Exploration with Guidance Graphs
- URL: http://arxiv.org/abs/2508.10012v1
- Date: Wed, 06 Aug 2025 08:47:57 GMT
- Title: Guided Navigation in Knowledge-Dense Environments: Structured Semantic Exploration with Guidance Graphs
- Authors: Dehao Tao, Guangjie Liu, Weizheng, Yongfeng Huang, Minghu Jiang
- Abstract summary: We propose a novel framework that introduces an intermediate Guidance Graph to bridge unstructured queries and structured knowledge retrieval. The Guidance Graph defines the retrieval space by abstracting the target knowledge's structure while preserving broader semantic context. Our method achieves superior efficiency and outperforms SOTA, especially on complex tasks.
- Score: 21.84798899012135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Large Language Models (LLMs) exhibit strong linguistic capabilities, their reliance on static knowledge and opaque reasoning processes limits their performance in knowledge-intensive tasks. Knowledge graphs (KGs) offer a promising solution, but current exploration methods face a fundamental trade-off: question-guided approaches incur redundant exploration due to granularity mismatches, while clue-guided methods fail to effectively leverage contextual information in complex scenarios. To address these limitations, we propose Guidance Graph guided Knowledge Exploration (GG Explore), a novel framework that introduces an intermediate Guidance Graph to bridge unstructured queries and structured knowledge retrieval. The Guidance Graph defines the retrieval space by abstracting the target knowledge's structure while preserving broader semantic context, enabling precise and efficient exploration. Building upon the Guidance Graph, we develop: (1) Structural Alignment, which filters incompatible candidates without LLM overhead, and (2) Context-Aware Pruning, which enforces semantic consistency with graph constraints. Extensive experiments show our method achieves superior efficiency and outperforms SOTA approaches, especially on complex tasks, while maintaining strong performance with smaller LLMs, demonstrating its practical value.
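To make the Structural Alignment idea concrete, here is a minimal toy sketch of type-based candidate filtering: a guidance-graph edge pattern constrains which KG triples are structurally compatible, so incompatible candidates can be discarded by simple matching with no LLM call. The pattern format, wildcard convention, and all names below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of "Structural Alignment": filter candidate KG triples
# against a Guidance Graph edge pattern by type compatibility alone.
# A pattern is (head_type, relation, tail_type); "*" is a wildcard slot.

def structurally_aligned(candidate, pattern):
    """Return True if every slot of the candidate triple matches the
    corresponding pattern slot (or the pattern slot is a wildcard)."""
    return all(p == "*" or p == c for p, c in zip(pattern, candidate))

candidates = [
    ("Person", "bornIn", "City"),
    ("Person", "wrote", "Book"),
    ("Company", "locatedIn", "City"),
]

# Guidance-graph edge abstracting the target knowledge's structure:
# "some Person related to some City", relation left open.
pattern = ("Person", "*", "City")

aligned = [c for c in candidates if structurally_aligned(c, pattern)]
print(aligned)  # [('Person', 'bornIn', 'City')]
```

In the paper's framing, this cheap structural pass narrows the retrieval space before the more expensive Context-Aware Pruning step applies semantic constraints.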
Related papers
- HELP: HyperNode Expansion and Logical Path-Guided Evidence Localization for Accurate and Efficient GraphRAG [53.30561659838455]
Large Language Models (LLMs) often struggle with inherent knowledge boundaries and hallucinations. Retrieval-Augmented Generation (RAG) frequently overlooks structural interdependencies essential for multi-hop reasoning. HELP achieves competitive performance across multiple simple and multi-hop QA benchmarks and up to a 28.8× speedup over leading graph-based RAG baselines.
arXiv Detail & Related papers (2026-02-24T14:05:29Z) - Enrich-on-Graph: Query-Graph Alignment for Complex Reasoning with LLM Enriching [61.824094419641575]
Large Language Models (LLMs) struggle with hallucinations and factual errors in knowledge-intensive scenarios like knowledge graph question answering (KGQA). We attribute this to the semantic gap between structured knowledge graphs (KGs) and unstructured queries, caused by inherent differences in their focuses and structures. Existing methods usually employ resource-intensive, non-scalable reasoning on vanilla KGs and overlook this gap. We propose a flexible framework, Enrich-on-Graph (EoG), which leverages LLMs' prior knowledge to enrich KGs and bridge the semantic gap between graphs and queries.
arXiv Detail & Related papers (2025-09-25T06:48:52Z) - GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models [59.72897499248909]
We propose a novel graph retriever trained end-to-end with Large Language Models (LLMs). Within the extracted subgraph, structural knowledge and semantic features are encoded via soft tokens and the verbalized graph, respectively, which are infused into the LLM together. Our approach consistently achieves state-of-the-art performance, validating the strength of joint graph-LLM optimization for complex reasoning tasks.
arXiv Detail & Related papers (2025-09-20T02:38:00Z) - Enhancing Large Language Model for Knowledge Graph Completion via Structure-Aware Alignment-Tuning [52.78024385391959]
Knowledge graph completion (KGC) aims to infer new knowledge and make predictions from knowledge graphs. Existing methods often ignore the inconsistent representation spaces between natural language and graph structures. We propose SAT, a novel framework that enhances LLMs for KGC via structure-aware alignment-tuning.
arXiv Detail & Related papers (2025-09-01T06:38:11Z) - Mixture of Length and Pruning Experts for Knowledge Graphs Reasoning [9.894106590443714]
We propose MoKGR, a mixture-of-experts framework that personalizes path exploration. MoKGR demonstrates superior performance in both transductive and inductive settings.
arXiv Detail & Related papers (2025-07-28T03:30:28Z) - Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z) - KnowTrace: Bootstrapping Iterative Retrieval-Augmented Generation with Structured Knowledge Tracing [64.38243807002878]
We present KnowTrace, an elegant RAG framework to mitigate context overload in large language models. KnowTrace autonomously traces out desired knowledge triplets to organize a specific knowledge graph relevant to the input question. It consistently surpasses existing methods across three multi-hop question answering benchmarks.
arXiv Detail & Related papers (2025-05-26T17:22:20Z) - Path Pooling: Training-Free Structure Enhancement for Efficient Knowledge Graph Retrieval-Augmented Generation [19.239478003379478]
Large Language Models suffer from hallucinations and knowledge deficiencies in real-world applications. We propose path pooling, a training-free strategy that introduces structure information through a novel path-centric pooling operation. It seamlessly integrates into existing KG-RAG methods in a plug-and-play manner, enabling richer structure information utilization.
arXiv Detail & Related papers (2025-03-07T07:48:30Z) - In-Context Learning with Topological Information for Knowledge Graph Completion [3.035601871864059]
We develop a novel method that incorporates topological information through in-context learning to enhance knowledge graph performance. Our approach achieves strong performance in the transductive setting, i.e., where nodes in the test graph dataset are present in the training graph dataset. Our method demonstrates superior performance compared to baselines on the ILPC-small and ILPC-large datasets.
arXiv Detail & Related papers (2024-12-11T19:29:36Z) - GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning method that merges parametric and non-parametric memories to improve accurate reasoning with minimal external input. GIVE guides the LLM agent to select the most pertinent expert data (observe), engage in query-specific divergent thinking (reflect), and then synthesize this information to produce the final output (speak).
arXiv Detail & Related papers (2024-10-11T03:05:06Z) - Guideline Learning for In-context Information Extraction [29.062173997909028]
In-context Information Extraction (IE) has recently garnered attention in the research community.
We highlight a key reason for this shortfall: underspecified task description.
We propose a Guideline Learning framework for In-context IE which reflectively learns and follows guidelines.
arXiv Detail & Related papers (2023-10-08T08:25:16Z) - Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z) - Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.