LINK-KG: LLM-Driven Coreference-Resolved Knowledge Graphs for Human Smuggling Networks
- URL: http://arxiv.org/abs/2510.26486v1
- Date: Thu, 30 Oct 2025 13:39:08 GMT
- Title: LINK-KG: LLM-Driven Coreference-Resolved Knowledge Graphs for Human Smuggling Networks
- Authors: Dipak Meher, Carlotta Domeniconi, Guadalupe Correa-Cabrera
- Abstract summary: LINK-KG is a framework that integrates a three-stage, LLM-guided coreference resolution pipeline with downstream KG extraction. At the core of the approach is a type-specific Prompt Cache, which consistently tracks and resolves references across document chunks. LINK-KG reduces average node duplication by 45.21% and noisy nodes by 32.22% compared to baseline methods.
- Score: 8.222584338135986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human smuggling networks are complex and constantly evolving, making them difficult to analyze comprehensively. Legal case documents offer rich factual and procedural insights into these networks but are often long, unstructured, and filled with ambiguous or shifting references, posing significant challenges for automated knowledge graph (KG) construction. Existing methods either overlook coreference resolution or fail to scale beyond short text spans, leading to fragmented graphs and inconsistent entity linking. We propose LINK-KG, a modular framework that integrates a three-stage, LLM-guided coreference resolution pipeline with downstream KG extraction. At the core of our approach is a type-specific Prompt Cache, which consistently tracks and resolves references across document chunks, enabling clean and disambiguated narratives for structured knowledge graph construction from both short and long legal texts. LINK-KG reduces average node duplication by 45.21% and noisy nodes by 32.22% compared to baseline methods, resulting in cleaner and more coherent graph structures. These improvements establish LINK-KG as a strong foundation for analyzing complex criminal networks.
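The abstract describes a type-specific Prompt Cache that tracks and resolves references across document chunks. The paper does not include code; the following is a minimal illustrative sketch of how such a cache might work, where the class name, method signatures, and entity types are assumptions for illustration, not the authors' implementation:

```python
from collections import defaultdict

class PromptCache:
    """Hypothetical type-specific cache: maps aliases to canonical entity
    names, per entity type, so later chunks reuse earlier resolutions."""

    def __init__(self):
        # entity type -> {alias: canonical name}
        self._cache = defaultdict(dict)

    def resolve(self, entity_type, mention, canonical=None):
        """Return the canonical name for a mention, registering it on first sight."""
        table = self._cache[entity_type]
        if mention not in table:
            table[mention] = canonical or mention
        return table[mention]

    def context_for(self, entity_type):
        """Render known alias resolutions as prompt context for the next chunk."""
        table = self._cache[entity_type]
        return "; ".join(f"{a} -> {c}" for a, c in table.items() if a != c)

cache = PromptCache()
cache.resolve("PERSON", "John Doe")
cache.resolve("PERSON", "the defendant", canonical="John Doe")
# A later chunk resolves the same alias consistently.
assert cache.resolve("PERSON", "the defendant") == "John Doe"
```

The `context_for` output could be prepended to the LLM prompt for each subsequent chunk, which is one plausible way to keep references consistent across long legal texts.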
Related papers
- An LLM-Guided Query-Aware Inference System for GNN Models on Large Knowledge Graphs [4.814637416425641]
This paper presents KG-WISE, a task-driven inference paradigm for large knowledge graphs (KGs). KG-WISE decomposes trained GNN models into fine-grained components that can be partially loaded based on the structure of the queried subgraph. It achieves up to 28x faster inference and 98% lower memory usage than state-of-the-art systems.
arXiv Detail & Related papers (2026-03-04T19:30:14Z) - HELP: HyperNode Expansion and Logical Path-Guided Evidence Localization for Accurate and Efficient GraphRAG [53.30561659838455]
Large Language Models (LLMs) often struggle with inherent knowledge boundaries and hallucinations. Retrieval-Augmented Generation (RAG) frequently overlooks structural interdependencies essential for multi-hop reasoning. HELP achieves competitive performance across multiple simple and multi-hop QA benchmarks and up to a 28.8x speedup over leading graph-based RAG baselines.
arXiv Detail & Related papers (2026-02-24T14:05:29Z) - SocraticKG: Knowledge Graph Construction via QA-Driven Fact Extraction [4.867319754310031]
We propose an automated KG construction method that introduces question-answer pairs as a structured intermediate representation. SocraticKG captures contextual dependencies and implicit relational links typically lost in direct KG extraction pipelines.
arXiv Detail & Related papers (2026-01-15T02:26:51Z) - Inside CORE-KG: Evaluating Structured Prompting and Coreference Resolution for Knowledge Graphs [9.241360770841013]
Legal case documents offer critical insights but are often unstructured, lexically dense, and filled with ambiguous or shifting references. The CORE-KG framework addresses these limitations by integrating a type-aware coreference module and domain-guided structured prompts. Our results show that removing coreference resolution results in a 28.32% increase in node duplication and a 4.32% increase in noisy nodes, while removing structured prompts leads to a 4.34% increase in node duplication and a 73.33% increase in noisy nodes.
arXiv Detail & Related papers (2025-10-30T14:05:55Z) - Enrich-on-Graph: Query-Graph Alignment for Complex Reasoning with LLM Enriching [61.824094419641575]
Large Language Models (LLMs) struggle with hallucinations and factual errors in knowledge-intensive scenarios like knowledge graph question answering (KGQA). We attribute this to the semantic gap between structured knowledge graphs (KGs) and unstructured queries, caused by inherent differences in their focuses and structures. Existing methods usually employ resource-intensive, non-scalable reasoning on vanilla KGs but overlook this gap. We propose a flexible framework, Enrich-on-Graph (EoG), which leverages LLMs' prior knowledge to enrich KGs and bridge the semantic gap between graphs and queries.
arXiv Detail & Related papers (2025-09-25T06:48:52Z) - Cross-Granularity Hypergraph Retrieval-Augmented Generation for Multi-hop Question Answering [49.43814054718318]
Multi-hop question answering (MHQA) requires integrating knowledge scattered across multiple passages to derive the correct answer. Traditional retrieval-augmented generation (RAG) methods primarily focus on coarse-grained textual semantic similarity. We propose a novel RAG approach called HGRAG for MHQA that achieves cross-granularity integration of structural and semantic information via hypergraphs.
arXiv Detail & Related papers (2025-08-15T06:36:13Z) - CORE-KG: An LLM-Driven Knowledge Graph Construction Framework for Human Smuggling Networks [9.68109098750283]
CORE-KG is a modular framework for building interpretable knowledge graphs from legal texts. It reduces node duplication by 33.28% and legal noise by 38.37% compared to a GraphRAG-based baseline.
arXiv Detail & Related papers (2025-06-20T11:58:00Z) - Divide by Question, Conquer by Agent: SPLIT-RAG with Question-Driven Graph Partitioning [62.640169289390535]
SPLIT-RAG is a multi-agent RAG framework that addresses these limitations with question-driven semantic graph partitioning and collaborative subgraph retrieval. The framework first creates a Semantic Partitioning of Linked Information, then uses the Type-Specialized knowledge base to achieve Multi-Agent RAG. The attribute-aware graph segmentation divides knowledge graphs into semantically coherent subgraphs, ensuring subgraphs align with different query types. A hierarchical merging module resolves inconsistencies across subgraph-derived answers through logical verifications.
arXiv Detail & Related papers (2025-05-20T06:44:34Z) - Talking to GDELT Through Knowledge Graphs [0.6153162958674417]
We study various Retrieval-Augmented Generation (RAG) approaches to understand the strengths and weaknesses of each in a question-answering analysis. To retrieve information from the text corpus, we implement a traditional vector-store RAG as well as state-of-the-art large language model (LLM) based approaches.
arXiv Detail & Related papers (2025-03-10T17:48:10Z) - Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains [66.55612528039894]
Knowledge Graphs (KGs) can serve as reliable knowledge sources for question answering (QA).
We present DoG (Decoding on Graphs), a novel framework that facilitates a deep synergy between LLMs and KGs.
Experiments across various KGQA tasks with different background KGs demonstrate that DoG achieves superior and robust performance.
arXiv Detail & Related papers (2024-10-24T04:01:40Z) - Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z) - Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It combines high-order reasoning into a graph convolutional network, namely HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.