G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems
- URL: http://arxiv.org/abs/2506.07398v2
- Date: Mon, 16 Jun 2025 08:45:10 GMT
- Title: G-Memory: Tracing Hierarchical Memory for Multi-Agent Systems
- Authors: Guibin Zhang, Muxin Fu, Guancheng Wan, Miao Yu, Kun Wang, Shuicheng Yan
- Abstract summary: Large language model (LLM)-powered multi-agent systems (MAS) have demonstrated cognitive and execution capabilities that far exceed those of single LLM agents. We introduce G-Memory, a hierarchical, agentic memory system for MAS inspired by organizational memory theory. G-Memory improves success rates in embodied action and accuracy in knowledge QA by up to $20.89\%$ and $10.12\%$, respectively.
- Score: 44.844636264484905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language model (LLM)-powered multi-agent systems (MAS) have demonstrated cognitive and execution capabilities that far exceed those of single LLM agents, yet their capacity for self-evolution remains hampered by underdeveloped memory architectures. Upon close inspection, we are alarmed to discover that prevailing MAS memory mechanisms (1) are overly simplistic, completely disregarding the nuanced inter-agent collaboration trajectories, and (2) lack cross-trial and agent-specific customization, in stark contrast to the expressive memory developed for single agents. To bridge this gap, we introduce G-Memory, a hierarchical, agentic memory system for MAS inspired by organizational memory theory, which manages the lengthy MAS interaction via a three-tier graph hierarchy: insight, query, and interaction graphs. Upon receiving a new user query, G-Memory performs bi-directional memory traversal to retrieve both $\textit{high-level, generalizable insights}$ that enable the system to leverage cross-trial knowledge, and $\textit{fine-grained, condensed interaction trajectories}$ that compactly encode prior collaboration experiences. Upon task execution, the entire hierarchy evolves by assimilating new collaborative trajectories, nurturing the progressive evolution of agent teams. Extensive experiments across five benchmarks, three LLM backbones, and three popular MAS frameworks demonstrate that G-Memory improves success rates in embodied action and accuracy in knowledge QA by up to $20.89\%$ and $10.12\%$, respectively, without any modifications to the original frameworks. Our codes are available at https://github.com/bingreeky/GMemory.
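The abstract above sketches the core mechanism: a three-tier graph hierarchy (insight, query, and interaction graphs), bi-directional traversal at retrieval time, and assimilation of new trajectories after execution. The minimal Python sketch below shows one way such a hierarchy could be organized; every class name, field, and the keyword-overlap similarity is an illustrative assumption, not the G-Memory implementation (see the linked repository for that).

```python
# Minimal sketch of a three-tier memory hierarchy (insights / queries / interactions)
# with bi-directional retrieval. Names, fields, and the toy similarity function are
# assumptions for illustration only, not the G-Memory code.
from dataclasses import dataclass, field


@dataclass
class InteractionGraph:
    """Condensed record of one multi-agent collaboration trajectory."""
    query: str
    trajectory: list[str]   # condensed inter-agent messages
    success: bool


@dataclass
class QueryNode:
    """One past query, linked downward to its interaction graph."""
    query: str
    interaction: InteractionGraph


@dataclass
class Insight:
    """High-level, cross-trial lesson supported by several past queries."""
    text: str
    supporting_queries: list[str] = field(default_factory=list)


class HierarchicalMemory:
    def __init__(self) -> None:
        self.insights: list[Insight] = []
        self.query_nodes: list[QueryNode] = []

    @staticmethod
    def _sim(a: str, b: str) -> float:
        # Toy lexical overlap; a real system would use embeddings.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))

    def retrieve(self, new_query: str, k: int = 3):
        """Bi-directional traversal: similar past queries yield fine-grained
        trajectories (downward) and the insights they support (upward)."""
        ranked = sorted(self.query_nodes,
                        key=lambda n: self._sim(new_query, n.query),
                        reverse=True)[:k]
        trajectories = [n.interaction for n in ranked]
        hit = {n.query for n in ranked}
        insights = [i for i in self.insights
                    if hit & set(i.supporting_queries)]
        return insights, trajectories

    def assimilate(self, node: QueryNode, lesson: str | None = None) -> None:
        """After task execution, grow the hierarchy with the new trajectory
        and, optionally, a newly distilled insight."""
        self.query_nodes.append(node)
        if lesson:
            self.insights.append(Insight(text=lesson,
                                         supporting_queries=[node.query]))
```

Retrieval returns both the generalizable insights and the compact trajectories, and assimilation grows the hierarchy after each trial, mirroring the two phases described in the abstract.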
Related papers
- Choosing How to Remember: Adaptive Memory Structures for LLM Agents [43.27579458682491]
Memory is critical for enabling large language model (LLM)-based agents to maintain coherent behavior over long-horizon interactions. We propose a unified framework, FluxMem, that enables adaptive memory organization for LLM agents. Experiments on two long-horizon benchmarks, PERSONAMEM and LoCoMo, demonstrate that our method achieves average improvements of 9.18% and 6.14%.
arXiv Detail & Related papers (2026-02-15T07:56:24Z) - Graph-based Agent Memory: Taxonomy, Techniques, and Applications [63.70340159016138]
Memory emerges as the core module in Large Language Model (LLM)-based agents for long-horizon complex tasks. Among diverse paradigms, the graph stands out as a powerful structure for agent memory due to its intrinsic capability to model relational dependencies. This survey presents a comprehensive review of agent memory from the graph-based perspective.
arXiv Detail & Related papers (2026-02-05T13:49:05Z) - LatentMem: Customizing Latent Memory for Multi-Agent Systems [44.59989123744384]
We propose LatentMem, a learnable multi-agent memory framework designed to customize agent-specific memories in a token-efficient manner. Specifically, LatentMem comprises an experience bank that stores raw interaction trajectories in a lightweight form, and a memory composer that synthesizes compact latent memories conditioned on retrieved experience and agent-specific contexts.
arXiv Detail & Related papers (2026-02-03T03:03:16Z) - MemEvolve: Meta-Evolution of Agent Memory Systems [66.09735157017558]
Self-evolving memory systems are reshaping the evolutionary paradigm of large language model (LLM)-based agents in unprecedented ways. MemEvolve is a meta-evolutionary framework that jointly evolves agents' experiential knowledge and their memory architecture. EvolveLab is a unified self-evolving memory framework that distills twelve representative memory systems into a modular design space.
arXiv Detail & Related papers (2025-12-21T14:26:14Z) - Remember Me, Refine Me: A Dynamic Procedural Memory Framework for Experience-Driven Agent Evolution [52.76038908826961]
We propose $\textbf{ReMe}$ ($\textit{Remember Me, Refine Me}$) to bridge the gap between static storage and dynamic reasoning. ReMe innovates across the memory lifecycle via three mechanisms: $\textit{multi-faceted distillation}$, which extracts fine-grained experiences. Experiments on BFCL-V3 and AppWorld demonstrate that ReMe establishes a new state of the art in agent memory systems.
arXiv Detail & Related papers (2025-12-11T14:40:01Z) - EvoMem: Improving Multi-Agent Planning with Dual-Evolving Memory [2.9578217823740065]
We present EvoMem, a multi-agent framework built on a dual-evolving memory mechanism. We show consistent performance improvements on trip planning, meeting planning, and calendar scheduling. This success underscores the importance of memory in enhancing multi-agent planning.
arXiv Detail & Related papers (2025-11-01T01:38:07Z) - AUGUSTUS: An LLM-Driven Multimodal Agent System with Contextualized User Memory [44.51052183152175]
We present AUGUSTUS, a multimodal agent system aligned with the ideas of human memory in cognitive science. Unlike existing systems that use vector databases, we propose conceptualizing information into semantic tags and associating the tags with their context, storing them in a graph-structured multimodal contextual memory for efficient concept-driven retrieval. Our system outperforms the traditional multimodal RAG approach while being 3.5 times faster for ImageNet classification, and outperforms MemGPT on the MSC benchmark.
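A toy sketch of the tag-and-context idea summarized above: content is stored once, each semantic tag points to the contexts it appears in, and retrieval intersects tags rather than searching a vector database. The class, the tagging step, and the ranking below are assumptions for illustration, not the AUGUSTUS system.

```python
# Illustrative tag-indexed contextual memory: tags link to contexts, and
# concept-driven retrieval ranks contexts by shared tags. This is a sketch,
# not the AUGUSTUS implementation.
from collections import defaultdict


class TagGraphMemory:
    def __init__(self) -> None:
        self.contexts: dict[int, str] = {}                      # context_id -> content
        self.tag_to_contexts: dict[str, set[int]] = defaultdict(set)
        self._next_id = 0

    def store(self, content: str, tags: list[str]) -> int:
        """Associate a piece of content with its semantic tags."""
        cid = self._next_id
        self._next_id += 1
        self.contexts[cid] = content
        for tag in tags:
            self.tag_to_contexts[tag.lower()].add(cid)
        return cid

    def retrieve(self, query_tags: list[str]) -> list[str]:
        """Concept-driven retrieval: rank contexts by shared query tags."""
        hits: dict[int, int] = defaultdict(int)
        for tag in query_tags:
            for cid in self.tag_to_contexts.get(tag.lower(), set()):
                hits[cid] += 1
        return [self.contexts[cid]
                for cid in sorted(hits, key=hits.get, reverse=True)]


# In practice the tags would come from an LLM or vision tagger.
mem = TagGraphMemory()
mem.store("Photo of a golden retriever in the park", ["dog", "park", "photo"])
mem.store("Note: vet appointment for the dog on Friday", ["dog", "appointment"])
print(mem.retrieve(["dog", "park"]))   # park photo ranks first
```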
arXiv Detail & Related papers (2025-10-17T02:58:22Z) - MemGen: Weaving Generative Latent Memory for Self-Evolving Agents [57.1835920227202]
We propose MemGen, a dynamic generative memory framework that equips agents with a human-like cognitive faculty. MemGen enables agents to recall and augment latent memory throughout reasoning, producing a tightly interwoven cycle of memory and cognition.
arXiv Detail & Related papers (2025-09-29T12:33:13Z) - Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning [89.55738101744657]
Large Language Models (LLMs) have demonstrated impressive capabilities across a wide range of NLP tasks, but they remain fundamentally stateless. We present Memory-R1, a reinforcement learning framework that equips LLMs with the ability to actively manage and utilize external memory.
arXiv Detail & Related papers (2025-08-27T12:26:55Z) - Hierarchical Memory for High-Efficiency Long-Term Reasoning in LLM Agents [19.04968632268433]
We propose a hierarchical memory architecture for Large Language Model (LLM) agents. Each memory vector is embedded with a positional index encoding pointing to its semantically related sub-memories in the next layer. During the reasoning phase, an index-based routing mechanism enables efficient, layer-by-layer retrieval without performing exhaustive similarity computations.
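The index-based routing described above can be pictured with a short sketch: each memory entry stores the positions of its sub-memories in the next layer, so only the children of the currently selected entries are ever scored. The data layout and cosine scoring below are assumptions for illustration, not the paper's implementation.

```python
# Sketch of index-based, layer-by-layer retrieval over a hierarchical memory.
# layers[l] is a list of entries: {'vec': np.ndarray, 'children': [indices into layer l+1]}.
# Layout and scoring are illustrative assumptions only.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


def routed_retrieval(query_vec: np.ndarray,
                     layers: list[list[dict]],
                     top_k: int = 2) -> list[dict]:
    """Descend the hierarchy, scoring only the children of entries kept so far."""
    candidates = list(range(len(layers[0])))      # start from the full top layer
    selected: list[dict] = []
    for depth, layer in enumerate(layers):
        scored = sorted(candidates,
                        key=lambda i: cosine(query_vec, layer[i]['vec']),
                        reverse=True)[:top_k]
        selected = [layer[i] for i in scored]
        if depth + 1 < len(layers):
            # Follow stored positional indices instead of scanning layer l+1.
            candidates = sorted({c for i in scored for c in layer[i]['children']})
    return selected
```

Because each step scores only a handful of children, the lookup cost grows with depth and fan-out rather than with the total number of stored memories.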
arXiv Detail & Related papers (2025-07-23T12:45:44Z) - MEM1: Learning to Synergize Memory and Reasoning for Efficient Long-Horizon Agents [84.62985963113245]
We introduce MEM1, an end-to-end reinforcement learning framework that enables agents to operate with constant memory across long multi-turn tasks. At each turn, MEM1 updates a compact shared internal state that jointly supports memory consolidation and reasoning. We show that MEM1-7B improves performance by 3.5x while reducing memory usage by 3.7x compared to Qwen2.5-14B-Instruct on a 16-objective multi-hop QA task.
arXiv Detail & Related papers (2025-06-18T19:44:46Z) - MemOS: An Operating System for Memory-Augmented Generation (MAG) in Large Language Models [31.944531660401722]
We introduce MemOS, a memory operating system designed for Large Language Models (LLMs). At its core is the MemCube, a standardized memory abstraction that enables tracking, fusion, and migration of heterogeneous memory. MemOS establishes a memory-centric execution framework with strong controllability, adaptability, and evolvability.
arXiv Detail & Related papers (2025-05-28T08:27:12Z) - Rethinking Memory in AI: Taxonomy, Operations, Topics, and Future Directions [55.19217798774033]
Memory is a fundamental component of AI systems, underpinning large language model (LLM)-based agents. In this survey, we first categorize memory representations into parametric and contextual forms. We then introduce six fundamental memory operations: Consolidation, Updating, Indexing, Forgetting, Retrieval, and Compression.
arXiv Detail & Related papers (2025-05-01T17:31:33Z) - From RAG to Memory: Non-Parametric Continual Learning for Large Language Models [6.380729797938521]
Retrieval-augmented generation (RAG) has become the dominant way to introduce new information. Recent RAG approaches augment vector embeddings with various structures like knowledge graphs to address some gaps, namely sense-making and associativity. We propose HippoRAG 2, a framework that outperforms standard RAG comprehensively on factual, sense-making, and associative memory tasks.
arXiv Detail & Related papers (2025-02-20T18:26:02Z) - A-MEM: Agentic Memory for LLM Agents [42.50876509391843]
Large language model (LLM) agents require memory systems to leverage historical experiences. Current memory systems enable basic storage and retrieval but lack sophisticated memory organization. This paper proposes a novel agentic memory system for LLM agents that can dynamically organize memories in an agentic way.
arXiv Detail & Related papers (2025-02-17T18:36:14Z) - Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation [69.01029651113386]
Embodied-RAG is a framework that enhances the model of an embodied agent with a non-parametric memory system. At its core, Embodied-RAG's memory is structured as a semantic forest, storing language descriptions at varying levels of detail. We demonstrate that Embodied-RAG effectively bridges RAG to the robotics domain, successfully handling over 250 explanation and navigation queries.
arXiv Detail & Related papers (2024-09-26T21:44:11Z) - MemoRAG: Boosting Long Context Processing with Global Memory-Enhanced Retrieval Augmentation [60.04380907045708]
Retrieval-Augmented Generation (RAG) is considered a promising strategy for long-context processing. We propose MemoRAG, a novel RAG framework empowered by global memory-augmented retrieval. MemoRAG achieves superior performance across a variety of long-context evaluation tasks.
arXiv Detail & Related papers (2024-09-09T13:20:31Z) - B'MOJO: Hybrid State Space Realizations of Foundation Models with Eidetic and Fading Memory [91.81390121042192]
We develop a class of models called B'MOJO to seamlessly combine eidetic and fading memory within a composable module.
B'MOJO's ability to modulate eidetic and fading memory results in better inference on longer sequences tested up to 32K tokens.
arXiv Detail & Related papers (2024-07-08T18:41:01Z) - Memory-Guided Semantic Learning Network for Temporal Sentence Grounding [55.31041933103645]
We propose a memory-augmented network that learns and memorizes rarely appearing content in temporal sentence grounding (TSG) tasks.
MGSL-Net consists of three main parts: a cross-modal interaction module, a memory augmentation module, and a heterogeneous attention module.
arXiv Detail & Related papers (2022-01-03T02:32:06Z) - Memorizing Comprehensively to Learn Adaptively: Unsupervised Cross-Domain Person Re-ID with Multi-level Memory [89.43986007948772]
Unlike the simple memory in previous works, we propose a novel multi-level memory network (MMN) to discover multi-level complementary information in the target domain.
arXiv Detail & Related papers (2020-01-13T09:48:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.