Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graphs for Retrieval-Augmented Generation
- URL: http://arxiv.org/abs/2506.05939v1
- Date: Fri, 06 Jun 2025 10:07:21 GMT
- Title: Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graphs for Retrieval-Augmented Generation
- Authors: Ze Yu Zhang, Zitao Li, Yaliang Li, Bolin Ding, Bryan Kian Hsiang Low
- Abstract summary: We develop a robust and discriminative QA benchmark to measure temporal, causal, and character consistency understanding in narrative documents. We then introduce Entity-Event RAG (E2RAG), a dual-graph framework that keeps separate entity and event subgraphs linked by a bipartite mapping. Across ChronoQA, our approach outperforms state-of-the-art unstructured and KG-based RAG baselines, with notable gains on causal and character consistency queries.
- Score: 69.45495166424642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retrieval-augmented generation (RAG) based on large language models often falters on narrative documents with inherent temporal structures. Standard unstructured RAG methods rely solely on embedding-similarity matching and lack any general mechanism to encode or exploit chronological information, while knowledge graph RAG (KG-RAG) frameworks collapse every mention of an entity into a single node, erasing the evolving context that drives many queries. To formalize this challenge and draw the community's attention, we construct ChronoQA, a robust and discriminative QA benchmark that measures temporal, causal, and character consistency understanding in narrative documents (e.g., novels) under the RAG setting. We then introduce Entity-Event RAG (E^2RAG), a dual-graph framework that keeps separate entity and event subgraphs linked by a bipartite mapping, thereby preserving the temporal and causal facets needed for fine-grained reasoning. Across ChronoQA, our approach outperforms state-of-the-art unstructured and KG-based RAG baselines, with notable gains on causal and character consistency queries. E^2RAG therefore offers a practical path to more context-aware retrieval for tasks that require precise answers grounded in chronological information.
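To make the dual-graph idea concrete, here is a minimal, hypothetical sketch (not the authors' code; all names and data are invented) of an entity-event store with a bipartite entity-to-event mapping, where retrieving an entity's events preserves their chronological order:

```python
from dataclasses import dataclass, field


@dataclass
class Event:
    """One event mention, with its position in the narrative."""
    event_id: str
    description: str
    chapter: int  # stand-in for any chronological index


@dataclass
class EntityEventGraph:
    """Toy dual-graph store: entity nodes, event nodes, bipartite links."""
    entities: set = field(default_factory=set)    # entity names
    events: dict = field(default_factory=dict)    # event_id -> Event
    links: list = field(default_factory=list)     # (entity, event_id) pairs

    def add_event(self, event: Event, participants: list[str]) -> None:
        self.events[event.event_id] = event
        for name in participants:
            self.entities.add(name)
            self.links.append((name, event.event_id))

    def timeline(self, entity: str) -> list[Event]:
        """All events linked to an entity, in chronological order."""
        ids = [eid for name, eid in self.links if name == entity]
        return sorted((self.events[i] for i in ids), key=lambda ev: ev.chapter)


graph = EntityEventGraph()
graph.add_event(Event("e1", "Ahab loses his leg to the white whale", chapter=1), ["Ahab"])
graph.add_event(Event("e2", "Ahab nails the doubloon to the mast", chapter=36), ["Ahab"])
print([ev.description for ev in graph.timeline("Ahab")])
```

Because entity and event nodes stay separate, an entity's mentions are never collapsed into a single node, so each retrieved event keeps the chronological and causal context the abstract describes.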
Related papers
- T-GRAG: A Dynamic GraphRAG Framework for Resolving Temporal Conflicts and Redundancy in Knowledge Retrieval [4.114480531154174]
We propose Temporal GraphRAG (T-GRAG), a dynamic, temporally-aware RAG framework that models the evolution of knowledge over time. T-GRAG consists of five key components, among them: (1) a Temporal Knowledge Graph Generator that creates time-stamped, evolving graph structures; (2) a Temporal Query Decomposition mechanism that breaks complex temporal queries into manageable sub-queries; and (3) a Three-layer Interactive Retriever that progressively filters and refines retrieval across temporal subgraphs. Extensive experiments show that T-GRAG significantly outperforms prior RAG and GraphRAG baselines in both retrieval accuracy and response [...]. A minimal sketch of the query-decomposition idea follows this entry.
arXiv Detail & Related papers (2025-08-03T09:15:36Z)
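As a rough illustration of the temporal query decomposition component described above, the sketch below splits a query that mentions several years into per-year sub-queries, each of which could then be routed to the matching time-stamped subgraph. This is an invented toy, not T-GRAG's implementation:

```python
import re


def decompose_temporal_query(query: str) -> list[tuple[int, str]]:
    """Split a query mentioning several years into one sub-query per year.

    A real system would use an LLM or a date parser; matching four-digit
    years is just enough to show the decomposition step.
    """
    years = sorted({int(y) for y in re.findall(r"\b(?:19|20)\d{2}\b", query)})
    return [(year, f"{query} (restricted to {year})") for year in years]


for year, sub in decompose_temporal_query(
    "How did the company's strategy change between 2019 and 2023?"
):
    print(year, "->", sub)  # each sub-query targets the subgraph stamped with that year
```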
- Reading Between the Timelines: RAG for Answering Diachronic Questions [8.969698902720799]
We propose a new framework that fundamentally redesigns the RAG pipeline to infuse temporal logic. Our approach yields substantial gains in answer accuracy, surpassing standard RAG implementations by 13% to 27%. This work provides a validated pathway toward RAG systems capable of performing the nuanced, evolutionary analysis required for complex, real-world questions.
arXiv Detail & Related papers (2025-07-21T05:19:41Z)
- DyG-RAG: Dynamic Graph Retrieval-Augmented Generation with Event-Centric Reasoning [38.28580037356542]
We introduce DyG-RAG, a novel event-centric dynamic graph retrieval-augmented generation framework. To capture and reason over temporal knowledge embedded in unstructured text, DyG-RAG proposes Dynamic Event Units (DEUs). To ensure temporally consistent generation, DyG-RAG introduces an event timeline retrieval pipeline. A sketch of such a timestamped event unit follows this entry.
arXiv Detail & Related papers (2025-07-16T10:22:35Z)
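Read literally, a Dynamic Event Unit is an event-centric text unit that carries its own timestamp, and timeline retrieval returns matching units in chronological order. The sketch below is one hypothetical rendering of that idea, not the DyG-RAG code:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class DynamicEventUnit:
    """One event-centric text unit with an explicit timestamp."""
    text: str
    when: date


def timeline_retrieve(units: list[DynamicEventUnit], keyword: str) -> list[DynamicEventUnit]:
    """Keyword match stands in for embedding similarity; hits come back in
    chronological order so the generator sees a consistent timeline."""
    hits = [u for u in units if keyword.lower() in u.text.lower()]
    return sorted(hits, key=lambda u: u.when)


units = [
    DynamicEventUnit("The treaty is finally signed.", date(1920, 6, 4)),
    DynamicEventUnit("Negotiations over the treaty begin.", date(1919, 1, 18)),
]
for u in timeline_retrieve(units, "treaty"):
    print(u.when.isoformat(), u.text)
```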
- Evaluating List Construction and Temporal Understanding capabilities of Large Language Models [54.39278049092508]
Large Language Models (LLMs) are susceptible to hallucinations and errors, particularly on temporal understanding tasks. We propose the Time referenced List based Question Answering (TLQA) benchmark, which requires structured answers in list format aligned with corresponding time periods. We investigate the temporal understanding and list construction capabilities of state-of-the-art generative models on TLQA in closed-book and open-domain settings. An example of such a time-aligned list answer follows this entry.
arXiv Detail & Related papers (2025-06-26T21:40:58Z)
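The TLQA summary describes answers as lists whose items are aligned with time periods. The snippet below shows one hypothetical way such a time-aligned answer could be represented and scored; the data, format, and metric are invented for illustration, not taken from the benchmark:

```python
# Hypothetical TLQA-style answer to "Which clubs did player X play for, and when?"
# Each list item is paired with the period in which it holds.
gold = {"Club A": "2005-2009", "Club B": "2009-2014"}
predicted = {"Club A": "2005-2009", "Club B": "2010-2014"}

# Exact match over (item, period) pairs; the second pair misses by one year.
correct = sum(1 for item, span in predicted.items() if gold.get(item) == span)
print(f"(item, period) exact-match accuracy: {correct / len(gold):.2f}")  # 0.50
```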
- SlimRAG: Retrieval without Graphs via Entity-Aware Context Selection [38.200971604630524]
SlimRAG is a lightweight framework for retrieval without graphs. It replaces structure-heavy components with a simple yet effective entity-aware mechanism. Experiments show that SlimRAG outperforms strong flat and graph-based baselines in accuracy. A toy sketch of entity-aware selection follows this entry.
arXiv Detail & Related papers (2025-06-15T15:36:17Z)
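One plausible reading of "entity-aware context selection without graphs" is an inverted index from entities to chunks: at query time, pull the chunks that share entities with the question, with no graph construction at all. The following toy sketch illustrates that reading (invented code, not SlimRAG's):

```python
from collections import defaultdict

# Toy corpus; entity extraction is faked by matching capitalised words.
chunks = [
    "Marie Curie moved to Paris to study physics.",
    "Pierre Curie shared the Nobel Prize with Marie Curie.",
    "The Eiffel Tower was completed in 1889.",
]


def entities(text: str) -> set[str]:
    return {w.strip(".,?") for w in text.split() if w[:1].isupper()}


index: defaultdict[str, set[int]] = defaultdict(set)
for i, chunk in enumerate(chunks):
    for ent in entities(chunk):
        index[ent].add(i)


def select(query: str) -> list[str]:
    """Return chunks sharing at least one entity with the query; no graph needed."""
    hit_ids = set().union(*(index.get(e, set()) for e in entities(query)))
    return [chunks[i] for i in sorted(hit_ids)]


print(select("Where did Marie Curie study?"))
```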
- Align-GRAG: Reasoning-Guided Dual Alignment for Graph Retrieval-Augmented Generation [75.9865035064794]
Large language models (LLMs) have demonstrated remarkable capabilities, but still struggle with issues like hallucinations and outdated information. Retrieval-augmented generation (RAG) addresses these issues by grounding LLM outputs in external knowledge with an Information Retrieval (IR) system. We propose Align-GRAG, a novel reasoning-guided dual alignment framework for the post-retrieval phase.
arXiv Detail & Related papers (2025-05-22T05:15:27Z)
- AlignRAG: Leveraging Critique Learning for Evidence-Sensitive Retrieval-Augmented Reasoning [61.28113271728859]
RAG has become a widely adopted paradigm for enabling knowledge-grounded large language models (LLMs). Standard RAG pipelines often fail to ensure that model reasoning remains consistent with the evidence retrieved, leading to factual inconsistencies or unsupported conclusions. In this work, we reinterpret RAG as Retrieval-Augmented Reasoning and identify a central but underexplored problem: Reasoning Misalignment.
arXiv Detail & Related papers (2025-04-21T04:56:47Z)
- CausalRAG: Integrating Causal Graphs into Retrieval-Augmented Generation [11.265999775635823]
CausalRAG is a novel framework that incorporates causal graphs into the retrieval process. By constructing and tracing causal relationships, CausalRAG preserves contextual continuity and improves retrieval precision. Our findings suggest that grounding retrieval in causal reasoning provides a promising approach to knowledge-intensive tasks. A sketch of the causal-tracing idea follows this entry.
arXiv Detail & Related papers (2025-03-25T17:43:08Z)
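Tracing causal relationships during retrieval can be pictured as walking cause-to-effect edges backwards from the matched node, so that upstream causes stay in the retrieved context. A hypothetical sketch of that tracing step (invented graph and code, not the paper's implementation):

```python
from collections import deque

# Toy causal graph: edge cause -> effect (invented example).
causes = {
    "drought": ["crop failure"],
    "crop failure": ["food prices rise"],
    "food prices rise": ["unrest"],
}


def trace_causes(effect: str, graph: dict[str, list[str]]) -> list[str]:
    """Collect all upstream causes of an effect by reverse BFS over cause->effect edges."""
    reverse: dict[str, list[str]] = {}
    for cause, effects in graph.items():
        for e in effects:
            reverse.setdefault(e, []).append(cause)
    seen, queue, chain = set(), deque([effect]), []
    while queue:
        node = queue.popleft()
        for cause in reverse.get(node, []):
            if cause not in seen:
                seen.add(cause)
                chain.append(cause)
                queue.append(cause)
    return chain


# Retrieval keeps not only the matched node but its causal ancestors as context.
print(trace_causes("unrest", causes))  # ['food prices rise', 'crop failure', 'drought']
```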
- Talking to GDELT Through Knowledge Graphs [0.6461717749486492]
We study various Retrieval Augmented Generation (RAG) approaches to gain an understanding of the strengths and weaknesses of each approach in a question-answering analysis. To retrieve information from the text corpus we implement a traditional vector store RAG as well as state-of-the-art large language model (LLM) based approaches.
arXiv Detail & Related papers (2025-03-10T17:48:10Z)
- TrustRAG: An Information Assistant with Retrieval Augmented Generation [73.84864898280719]
TrustRAG is a novel framework that enhances RAG from three perspectives: indexing, retrieval, and generation. We open-source the TrustRAG framework and provide a demonstration studio designed for excerpt-based question answering tasks.
arXiv Detail & Related papers (2025-02-19T13:45:27Z)
- ArchRAG: Attributed Community-based Hierarchical Retrieval-Augmented Generation [16.204046295248546]
Retrieval-Augmented Generation (RAG) has proven effective in integrating external knowledge into large language models (LLMs). We introduce a novel graph-based RAG approach, called Attributed Community-based Hierarchical RAG (ArchRAG). We build a novel hierarchical index structure for the attributed communities and develop an effective online retrieval method. ArchRAG has been successfully applied to domain knowledge QA in Huawei Cloud Computing.
arXiv Detail & Related papers (2025-02-14T03:28:36Z)
- MemoRAG: Boosting Long Context Processing with Global Memory-Enhanced Retrieval Augmentation [60.04380907045708]
Retrieval-Augmented Generation (RAG) is considered a promising strategy for handling long-context processing. We propose MemoRAG, a novel RAG framework empowered by global memory-augmented retrieval. MemoRAG achieves superior performance across a variety of long-context evaluation tasks.
arXiv Detail & Related papers (2024-09-09T13:20:31Z)
- Learning Granularity Representation for Temporal Knowledge Graph Completion [2.689675451882683]
Temporal Knowledge Graphs (TKGs) incorporate temporal information to reflect the dynamic structural knowledge and evolutionary patterns of real-world facts.
This paper proposes Learning Granularity Representation (LGRe) for TKG completion. It comprises two main components: Granularity Representation Learning (GRL) and Adaptive Granularity Balancing (AGB).
arXiv Detail & Related papers (2024-08-27T08:19:34Z)
- HiSMatch: Historical Structure Matching based Temporal Knowledge Graph Reasoning [59.38797474903334]
This paper proposes the Historical Structure Matching (HiSMatch) model.
It applies two structure encoders to capture the semantic information contained in the historical structures of the query and candidate entities.
Experiments on six benchmark datasets show that HiSMatch improves MRR by up to 5.6% over state-of-the-art baselines. A short MRR example follows this entry.
arXiv Detail & Related papers (2022-10-18T09:39:26Z)
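For reference, MRR (mean reciprocal rank), the metric quoted for HiSMatch above, averages the inverse rank of the first correct answer over all queries. A small self-contained example:

```python
def mean_reciprocal_rank(ranked_lists: list[list[str]], gold: list[str]) -> float:
    """Average of 1/rank of the gold answer in each ranked candidate list
    (zero contribution when the gold answer is missing)."""
    total = 0.0
    for candidates, answer in zip(ranked_lists, gold):
        if answer in candidates:
            total += 1.0 / (candidates.index(answer) + 1)
    return total / len(gold)


# Gold answers ranked 1st and 3rd -> MRR = (1/1 + 1/3) / 2 ~= 0.667
print(mean_reciprocal_rank([["a", "b"], ["x", "y", "z"]], ["a", "z"]))
```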
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.