Improving Multi-step RAG with Hypergraph-based Memory for Long-Context Complex Relational Modeling
- URL: http://arxiv.org/abs/2512.23959v2
- Date: Fri, 02 Jan 2026 05:05:46 GMT
- Title: Improving Multi-step RAG with Hypergraph-based Memory for Long-Context Complex Relational Modeling
- Authors: Chulun Zhou, Chunkang Zhang, Guoxin Yu, Fandong Meng, Jie Zhou, Wai Lam, Mo Yu
- Abstract summary: Multi-step retrieval-augmented generation (RAG) has become a widely adopted strategy for enhancing large language models (LLMs). We introduce HGMem, a hypergraph-based memory mechanism that extends the concept of memory into a dynamic, expressive structure for complex reasoning and global understanding. In our approach, memory is represented as a hypergraph whose hyperedges correspond to distinct memory units, enabling the progressive formation of higher-order interactions within memory.
- Score: 83.29209853451697
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-step retrieval-augmented generation (RAG) has become a widely adopted strategy for enhancing large language models (LLMs) on tasks that demand global comprehension and intensive reasoning. Many RAG systems incorporate a working memory module to consolidate retrieved information. However, existing memory designs function primarily as passive storage that accumulates isolated facts for the purpose of condensing the lengthy inputs and generating new sub-queries through deduction. This static nature overlooks the crucial high-order correlations among primitive facts, the compositions of which can often provide stronger guidance for subsequent steps. Therefore, their representational strength and impact on multi-step reasoning and knowledge evolution are limited, resulting in fragmented reasoning and weak global sense-making capacity in extended contexts. We introduce HGMem, a hypergraph-based memory mechanism that extends the concept of memory beyond simple storage into a dynamic, expressive structure for complex reasoning and global understanding. In our approach, memory is represented as a hypergraph whose hyperedges correspond to distinct memory units, enabling the progressive formation of higher-order interactions within memory. This mechanism connects facts and thoughts around the focal problem, evolving into an integrated and situated knowledge structure that provides strong propositions for deeper reasoning in subsequent steps. We evaluate HGMem on several challenging datasets designed for global sense-making. Extensive experiments and in-depth analyses show that our method consistently improves multi-step RAG and substantially outperforms strong baseline systems across diverse tasks.
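To make the memory structure concrete, below is a minimal Python sketch of a hypergraph memory in the spirit of the abstract: each memory unit is stored as a hyperedge over the entities it mentions, and retrieving units that share entities with the focal question approximates the higher-order compositions of primitive facts described above. The class and method names are hypothetical illustrations; the paper does not publish this API.

```python
# Minimal sketch of a hypergraph-based working memory in the spirit of HGMem.
# Hypothetical API for illustration only, not the authors' implementation.
from collections import defaultdict

class HypergraphMemory:
    def __init__(self):
        self.hyperedges = {}                    # edge_id -> {"nodes": set, "text": str}
        self.node_to_edges = defaultdict(set)   # entity -> ids of hyperedges touching it
        self._next_id = 0

    def add_unit(self, entities, text):
        """Store one memory unit as a hyperedge linking every entity it mentions."""
        edge_id = self._next_id
        self._next_id += 1
        self.hyperedges[edge_id] = {"nodes": set(entities), "text": text}
        for entity in entities:
            self.node_to_edges[entity].add(edge_id)
        return edge_id

    def related_units(self, entities, min_overlap=1):
        """Return memory units whose hyperedges share at least min_overlap
        entities with the query; composing such units approximates the
        higher-order interactions among primitive facts."""
        counts = defaultdict(int)
        for entity in entities:
            for edge_id in self.node_to_edges[entity]:
                counts[edge_id] += 1
        hits = [i for i, c in counts.items() if c >= min_overlap]
        return [self.hyperedges[i]["text"] for i in sorted(hits, key=counts.get, reverse=True)]

# Usage: consolidate retrieved facts, then compose them to guide the next sub-query.
memory = HypergraphMemory()
memory.add_unit({"HGMem", "hypergraph"}, "HGMem stores each memory unit as a hyperedge.")
memory.add_unit({"hypergraph", "RAG"}, "Hyperedges let multi-step RAG compose related facts.")
print(memory.related_units({"hypergraph"}))
```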
Related papers
- Understand Then Memory: A Cognitive Gist-Driven RAG Framework with Global Semantic Diffusion [14.538534837583931]
Retrieval-Augmented Generation (RAG) effectively mitigates hallucinations in LLMs by incorporating external knowledge. We propose CogitoRAG, a RAG framework that simulates human cognitive memory processes. We show that CogitoRAG significantly outperforms state-of-the-art RAG methods, showcasing superior capabilities in complex knowledge integration and reasoning.
arXiv Detail & Related papers (2026-02-11T12:58:08Z)
- MemFly: On-the-Fly Memory Optimization via Information Bottleneck [35.420309099411874]
Long-term memory enables large language model agents to tackle complex tasks through historical interactions. Existing frameworks encounter a dilemma between compressing redundant information efficiently and maintaining precise retrieval for downstream tasks. MemFly is a framework grounded in information bottleneck principles that facilitates on-the-fly memory evolution for LLMs. MemFly substantially outperforms state-of-the-art baselines in memory coherence, response fidelity, and accuracy. (The generic information bottleneck objective behind such designs is sketched after this list.)
arXiv Detail & Related papers (2026-02-08T09:37:25Z)
- AMA: Adaptive Memory via Multi-Agent Collaboration [54.490349689939166]
We propose Adaptive Memory via Multi-Agent Collaboration (AMA), a novel framework that leverages coordinated agents to manage memory across multiple granularities. AMA significantly outperforms state-of-the-art baselines while reducing token consumption by approximately 80% compared to full-context methods.
arXiv Detail & Related papers (2026-01-28T08:09:49Z)
- The AI Hippocampus: How Far are We From Human Memory? [77.04745635827278]
Implicit memory refers to the knowledge embedded within the internal parameters of pre-trained transformers. Explicit memory involves external storage and retrieval components designed to augment model outputs with dynamic, queryable knowledge representations. Agentic memory introduces persistent, temporally extended memory structures within autonomous agents.
arXiv Detail & Related papers (2026-01-14T03:24:08Z)
- Multi-hop Reasoning via Early Knowledge Alignment [68.28168992785896]
Early Knowledge Alignment (EKA) aims to align Large Language Models with contextually relevant retrieved knowledge. EKA significantly improves retrieval precision, reduces cascading errors, and enhances both performance and efficiency. EKA proves effective as a versatile, training-free inference strategy that scales seamlessly to large models.
arXiv Detail & Related papers (2025-12-23T08:14:44Z)
- CogMem: A Cognitive Memory Architecture for Sustained Multi-Turn Reasoning in Large Language Models [21.427373172124167]
Large language models (LLMs) excel at single-turn reasoning but often lose accuracy and coherence over extended, multi-turn interactions. We introduce CogMem, a memory-augmented LLM architecture that supports sustained iterative reasoning through structured, persistent memory. Experiments on TurnBench show that this layered design mitigates reasoning failures, controls context growth, and improves consistency across extended reasoning chains.
arXiv Detail & Related papers (2025-12-16T06:01:08Z)
- Memory in the Age of AI Agents [217.9368190980982]
This work aims to provide an up-to-date landscape of current agent memory research. We identify three dominant realizations of agent memory, namely token-level, parametric, and latent memory. To support practical development, we compile a comprehensive summary of memory benchmarks and open-source frameworks.
arXiv Detail & Related papers (2025-12-15T17:22:34Z)
- Evaluating Long-Term Memory for Long-Context Question Answering [100.1267054069757]
We present a systematic evaluation of memory-augmented methods using LoCoMo, a benchmark of synthetic long-context dialogues annotated for question-answering tasks. Our findings show that memory-augmented approaches reduce token usage by over 90% while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-10-27T18:03:50Z)
- PISA: A Pragmatic Psych-Inspired Unified Memory System for Enhanced AI Agency [50.712873697511206]
Existing work often lacks adaptability to diverse tasks and overlooks the constructive and task-oriented role of AI agent memory. We propose PISA, a pragmatic, psych-inspired unified memory system that treats memory as a constructive and adaptive process. Our empirical evaluation, conducted on the existing LoCoMo benchmark and our newly proposed AggQA benchmark for data analysis tasks, confirms that PISA sets a new state-of-the-art by significantly enhancing adaptability and long-term knowledge retention.
arXiv Detail & Related papers (2025-10-12T10:34:35Z)
- CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension [55.29309306566238]
Current Large Language Models (LLMs) are confronted with overwhelming information volume when comprehending long-form documents. This challenge raises the imperative of a cohesive memory module, which can elevate vanilla LLMs into autonomous reading agents. We draw inspiration from Jean Piaget's Constructivist Theory, illuminating three traits of the agentic memory: structured schemata, flexible assimilation, and dynamic accommodation.
arXiv Detail & Related papers (2025-10-07T02:16:30Z)
- Cognitive Weave: Synthesizing Abstracted Knowledge with a Spatio-Temporal Resonance Graph [2.800801614127705]
This paper introduces Cognitive Weave, a memory framework centered around a multi-layered spatio-temporal resonance graph (STRG). The STRG manages information as semantically rich insight particles (IPs), which are enriched with resonance keys, signifiers, and situational imprints via a dedicated semantic oracle interface (SOI). A key component of Cognitive Weave is the cognitive refinement process, which includes the synthesis of insight aggregates (IAs): condensed, higher-level knowledge structures.
arXiv Detail & Related papers (2025-06-09T18:00:46Z)
- Dynamic Memory-enhanced Transformer for Hyperspectral Image Classification [3.5093938502961763]
Hyperspectral image (HSI) classification remains a challenging task due to the intricate spatial-spectral correlations. Existing transformer models excel at capturing long-range dependencies but often suffer from information redundancy and attention inefficiencies. MemFormer introduces a memory-enhanced multi-head attention mechanism that iteratively refines a dynamic memory module. A dynamic memory enrichment strategy progressively captures complex spatial and spectral dependencies, leading to more expressive feature representations. (An illustrative sketch of this memory-attention pattern appears after this list.)
arXiv Detail & Related papers (2025-04-17T17:43:34Z)
- From RAG to Memory: Non-Parametric Continual Learning for Large Language Models [6.380729797938521]
Retrieval-augmented generation (RAG) has become the dominant way to introduce new information. Recent RAG approaches augment vector embeddings with various structures like knowledge graphs to address some gaps, namely sense-making and associativity. We propose HippoRAG 2, a framework that outperforms standard RAG comprehensively on factual, sense-making, and associative memory tasks. (A toy sketch of graph-augmented retrieval appears after this list.)
arXiv Detail & Related papers (2025-02-20T18:26:02Z)
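For the MemFly entry above: the summary only states that the framework is grounded in information bottleneck principles, so the block below gives the generic information bottleneck objective (Tishby et al.) for reference, not MemFly's specific formulation. Here M is the compressed memory, H the interaction history, Y the downstream task signal, and β trades compression against task relevance.

```latex
% Generic information bottleneck objective, shown for reference;
% MemFly's exact instantiation is not given in the summary above.
\min_{p(m \mid h)} \; I(M;H) \;-\; \beta\, I(M;Y)
```

Minimizing I(M;H) discards redundant history, while the β I(M;Y) term preserves what downstream answers need, which is exactly the compression-versus-precision dilemma the summary describes.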
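For the MemFormer entry: its summary describes memory-enhanced multi-head attention that iteratively refines a dynamic memory module. The PyTorch sketch below shows one common realization of that pattern; the slot count, two-pass refinement loop, and read/write wiring are illustrative assumptions, not the paper's architecture.

```python
# Sketch of memory-enhanced attention in the spirit of the MemFormer summary.
# Sizes, the refinement loop, and the read/write wiring are assumptions.
import torch
import torch.nn as nn

class MemoryEnhancedAttention(nn.Module):
    def __init__(self, dim=64, heads=4, mem_slots=8, iters=2):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(mem_slots, dim))           # dynamic memory module
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)   # memory attends to tokens
        self.write = nn.MultiheadAttention(dim, heads, batch_first=True)  # tokens attend to memory
        self.iters = iters

    def forward(self, x):                       # x: (batch, tokens, dim) spectral-spatial tokens
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        for _ in range(self.iters):             # iterative refinement of the memory
            mem = mem + self.read(mem, x, x)[0]
        return x + self.write(x, mem, mem)[0]   # inject the distilled memory back into tokens

tokens = torch.randn(2, 100, 64)                # e.g., 100 hyperspectral patch tokens
print(MemoryEnhancedAttention()(tokens).shape)  # torch.Size([2, 100, 64])
```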
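For the HippoRAG 2 entry: the summary says vector embeddings are augmented with structures like knowledge graphs. The toy sketch below shows one way such a combination can work, seeding Personalized PageRank with embedding-scored nodes so relevance spreads along graph edges; the graph, scores, and propagation choice are illustrative assumptions, not the paper's pipeline.

```python
# Toy sketch of graph-augmented retrieval: dense scores seed a graph walk.
# The graph, seed scores, and use of PageRank are illustrative assumptions.
import networkx as nx

graph = nx.Graph()
graph.add_edges_from([
    ("hypergraph", "memory"), ("memory", "RAG"), ("RAG", "LLM"),
])

# Pretend dense retrieval scored these nodes against the query embedding.
seed_scores = {"memory": 0.9, "RAG": 0.4}

# Personalized PageRank upweights associates of the seeds (associativity).
ranked = nx.pagerank(graph, personalization=seed_scores)
for node, score in sorted(ranked.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```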