EMemBench: Interactive Benchmarking of Episodic Memory for VLM Agents
- URL: http://arxiv.org/abs/2601.16690v1
- Date: Fri, 23 Jan 2026 12:09:59 GMT
- Title: EMemBench: Interactive Benchmarking of Episodic Memory for VLM Agents
- Authors: Xinze Li, Ziyue Zhu, Siyuan Liu, Yubo Ma, Yuhang Zang, Yixin Cao, Aixin Sun,
- Abstract summary: We introduce EMemBench, a programmatic benchmark for evaluating the long-term memory of agents through interactive games. Rather than using a fixed set of questions, EMemBench generates questions from each agent's own trajectory. Each template computes verifiable ground truth from underlying game signals.
- Score: 52.567469286881426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce EMemBench, a programmatic benchmark for evaluating the long-term memory of agents through interactive games. Rather than using a fixed set of questions, EMemBench generates questions from each agent's own trajectory, covering both text and visual game environments. Each template computes verifiable ground truth from underlying game signals, with controlled answerability and balanced coverage over memory skills: single/multi-hop recall, induction, temporal, spatial, logical, and adversarial. We evaluate memory agents with strong LMs/VLMs as backbones, against in-context prompting baselines. Across 15 text games and multiple visual seeds, results are far from saturated: induction and spatial reasoning are persistent bottlenecks, especially in the visual setting. Persistent memory yields clear gains for open-weight backbones on text games, but improvements are less consistent for VLM agents, suggesting that visually grounded episodic memory remains an open challenge. A human study further confirms the difficulty of EMemBench.
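To make the question-template mechanism concrete, here is a minimal sketch (hypothetical Python; the event schema, names, and template below are illustrative assumptions, not EMemBench's actual code) of how a template can generate a question from an agent's own trajectory and compute verifiable ground truth from logged game signals:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One logged step of an agent's trajectory (hypothetical schema)."""
    step: int
    room: str              # spatial game signal
    item_seen: str | None  # item observed at this step, if any

def spatial_recall_template(trajectory: list[Event], item: str):
    """Hypothetical 'spatial recall' template: asks where an item was first
    seen and derives the ground-truth answer from the trajectory itself."""
    sightings = [e for e in trajectory if e.item_seen == item]
    if not sightings:
        return None  # controlled answerability: skip questions the trajectory cannot answer
    question = f"In which room did you first see the {item}?"
    answer = sightings[0].room  # verifiable ground truth from game signals
    return question, answer

# Questions are generated per-agent, from that agent's own trajectory:
traj = [Event(0, "kitchen", None), Event(1, "cellar", "lamp"), Event(2, "hall", "lamp")]
print(spatial_recall_template(traj, "lamp"))   # ('In which room ...?', 'cellar')
print(spatial_recall_template(traj, "sword"))  # None: unanswerable, so not asked
```

Balanced coverage would then amount to sampling such templates across the listed skill types (recall, induction, temporal, spatial, logical, adversarial) for each trajectory.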
Related papers
- From Verbatim to Gist: Distilling Pyramidal Multimodal Memory via Semantic Information Bottleneck for Long-Horizon Video Agents [78.30630000529133]
We propose MM-Mem, a pyramidal multimodal memory architecture grounded in Fuzzy-Trace Theory. MM-Mem structures memory hierarchically into a Sensory Buffer, an Episodic Stream, and a Symbolic tier. Experiments confirm the effectiveness of MM-Mem on both offline and streaming tasks.
arXiv Detail & Related papers (2026-03-02T05:12:45Z)
- MemoryArena: Benchmarking Agent Memory in Interdependent Multi-Session Agentic Tasks [55.145729491377374]
Existing evaluations of agents with memory typically assess memorization and action in isolation. We introduce MemoryArena, a unified evaluation gym for benchmarking agent memory in multi-session Memory-Agent-Environment loops (a loop of this shape is sketched below). MemoryArena supports evaluation across web navigation, preference-constrained planning, progressive information search, and sequential formal reasoning.
arXiv Detail & Related papers (2026-02-18T09:49:14Z)
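As a rough illustration of a multi-session Memory-Agent-Environment loop (a hypothetical sketch; none of these interfaces come from MemoryArena): the environment resets each session while the memory persists, so later sessions can only succeed by exploiting what earlier sessions stored.

```python
# Hypothetical sketch: memory persists across sessions, environment state does not.

class KeyValueMemory:
    """Toy persistent memory the agent reads and writes explicitly."""
    def __init__(self):
        self.store: dict[str, str] = {}
    def write(self, key: str, value: str) -> None:
        self.store[key] = value
    def read(self, key: str) -> str | None:
        return self.store.get(key)

def run_sessions(agent, make_env, memory: KeyValueMemory, num_sessions: int) -> None:
    """Multi-session loop: interdependent tasks share one memory object."""
    for _ in range(num_sessions):
        env = make_env()                     # fresh environment every session
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs, memory)  # agent consults persistent memory
            obs, done = env.step(action)     # environment advances within the session
```

Here `agent`, `make_env`, and the `act`/`reset`/`step` signatures are placeholders; the point is only that the memory object outlives each environment instance.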
- REMem: Reasoning with Episodic Memory in Language Agent [32.63834745610879]
We present REMem, a framework for constructing and reasoning with episodic memory. We show that REMem substantially outperforms state-of-the-art memory systems such as Mem0 and HippoRAG 2. REMem also demonstrates more robust refusal behavior for unanswerable questions.
arXiv Detail & Related papers (2026-02-13T23:54:55Z)
- EverMemBench: Benchmarking Long-Term Interactive Memory in Large Language Models [16.865998112859604]
We introduce EverMemBench, a benchmark featuring multi-party, multi-group conversations spanning over 1 million tokens. EverMemBench evaluates memory systems across three dimensions through 1,000+ QA pairs.
arXiv Detail & Related papers (2026-02-01T16:13:08Z)
- RealMem: Benchmarking LLMs in Real-World Memory-Driven Interaction [21.670389104174536]
We introduce RealMem, the first benchmark grounded in realistic project scenarios. RealMem comprises over 2,000 cross-session dialogues across eleven scenarios, utilizing natural user queries for evaluation. We propose a pipeline that integrates Project Foundation Construction, Multi-Agent Dialogue Generation, and Memory Synthesis and Schedule Management to simulate the dynamic evolution of memory.
arXiv Detail & Related papers (2026-01-11T15:49:36Z)
- EvolMem: A Cognitive-Driven Benchmark for Multi-Session Dialogue Memory [63.84216832544323]
EvolMem is a new benchmark for assessing the multi-session memory capabilities of large language models (LLMs) and agent systems. To construct the benchmark, we introduce a hybrid data synthesis framework that consists of topic-initiated generation and narrative-inspired transformations. Extensive evaluation reveals that no LLM consistently outperforms the others across all memory dimensions.
arXiv Detail & Related papers (2026-01-07T03:14:42Z)
- WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning [66.24870234484668]
We introduce WorldMM, a novel multimodal memory agent that constructs and retrieves from multiple complementary memories. WorldMM significantly outperforms existing baselines across five long video question-answering benchmarks.
arXiv Detail & Related papers (2025-12-02T05:14:52Z)
- Agentic Learner with Grow-and-Refine Multimodal Semantic Memory [50.81667005063605]
ViLoMem is a dual-stream memory framework that constructs compact, schema-based memory. It encodes visual distraction patterns and logical reasoning errors, enabling MLLMs to learn from their successful and failed experiences.
arXiv Detail & Related papers (2025-11-26T18:55:08Z)
- Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory [89.65731902036669]
Evo-Memory is a streaming benchmark and framework for evaluating self-evolving memory in large language model (LLM) agents. We evaluate over ten representative memory modules across 10 diverse multi-turn goal-oriented and single-turn reasoning and QA datasets.
arXiv Detail & Related papers (2025-11-25T21:08:07Z)
- Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs [28.807582003957005]
We present a framework for evaluating the abilities of large language models (LLMs) on tasks that require long-term memory and thus long-context reasoning. We then construct BEAM, a new benchmark comprising 100 conversations and 2,000 validated questions. To enhance model performance, we propose LIGHT, a framework inspired by human cognition that equips LLMs with three complementary memory systems.
arXiv Detail & Related papers (2025-10-31T07:29:52Z)
- Evaluating Long-Term Memory for Long-Context Question Answering [100.1267054069757]
We present a systematic evaluation of memory-augmented methods using LoCoMo, a benchmark of synthetic long-context dialogues annotated for question-answering tasks. Our findings show that memory-augmented approaches reduce token usage by over 90% while maintaining competitive accuracy.
arXiv Detail & Related papers (2025-10-27T18:03:50Z)
- ArcMemo: Abstract Reasoning Composition with Lifelong LLM Memory [21.4675019810992]
Concept-level memory consists of reusable, modular abstractions distilled from solution traces and stored in natural language. We evaluate on ARC-AGI, a benchmark that stresses compositional generalization and abstract reasoning. We find abstract concepts to be the most consistent memory design, outscoring the baseline at all tested inference compute scales.
arXiv Detail & Related papers (2025-09-04T17:54:19Z)
- MADial-Bench: Towards Real-world Evaluation of Memory-Augmented Dialogue Generation [15.64077949677469]
We present a novel Memory-Augmented Dialogue Benchmark (MADail-Bench) to evaluate the effectiveness of memory-augmented dialogue systems (MADS).
The benchmark assesses two tasks separately: memory retrieval and memory recognition, incorporating both passive and proactive memory recall data.
Results from cutting-edge embedding models and large language models on this benchmark indicate the potential for further advancement.
arXiv Detail & Related papers (2024-09-23T17:38:41Z)
- Evaluating Long-Term Memory in 3D Mazes [10.224858246626171]
Memory Maze is a 3D domain of randomized mazes designed for evaluating long-term memory in agents.
Unlike existing benchmarks, Memory Maze measures long-term memory separately from confounding agent abilities.
We find that current algorithms benefit from training with truncated backpropagation through time and succeed on small mazes, but fall short of human performance on the large mazes.
arXiv Detail & Related papers (2022-10-24T16:32:28Z)