Evaluating Long-Term Memory in 3D Mazes
- URL: http://arxiv.org/abs/2210.13383v1
- Date: Mon, 24 Oct 2022 16:32:28 GMT
- Title: Evaluating Long-Term Memory in 3D Mazes
- Authors: Jurgis Pasukonis, Timothy Lillicrap, Danijar Hafner
- Abstract summary: Memory Maze is a 3D domain of randomized mazes designed for evaluating long-term memory in agents.
Unlike existing benchmarks, Memory Maze measures long-term memory separate from confounding agent abilities.
We find that current algorithms benefit from training with truncated backpropagation through time and succeed on small mazes, but fall short of human performance on the large mazes.
- Score: 10.224858246626171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent agents need to remember salient information to reason in
partially-observed environments. For example, agents with a first-person view
should remember the positions of relevant objects even if they go out of view.
Similarly, to effectively navigate through rooms agents need to remember the
floor plan of how rooms are connected. However, most benchmark tasks in
reinforcement learning do not test long-term memory in agents, slowing down
progress in this important research direction. In this paper, we introduce the
Memory Maze, a 3D domain of randomized mazes specifically designed for
evaluating long-term memory in agents. Unlike existing benchmarks, Memory Maze
measures long-term memory separate from confounding agent abilities and
requires the agent to localize itself by integrating information over time.
With Memory Maze, we propose an online reinforcement learning benchmark, a
diverse offline dataset, and an offline probing evaluation. Recording a human
player establishes a strong baseline and verifies the need to build up and
retain memories, which is reflected in their gradually increasing rewards
within each episode. We find that current algorithms benefit from training with
truncated backpropagation through time and succeed on small mazes, but fall
short of human performance on the large mazes, leaving room for future
algorithmic designs to be evaluated on the Memory Maze.
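The released benchmark can be exercised with only a few lines of code. Below is a minimal sketch, assuming the publicly released `memory-maze` package registers Gym environments under names such as `memory_maze:MemoryMaze-9x9-v0` and follows the classic Gym step API; the environment identifier and observation format here are assumptions to be checked against the official repository.

```python
# Minimal sketch: rolling out a random policy in a Memory Maze environment.
# Assumptions (not taken from the abstract): the released `memory-maze` package
# (pip install memory-maze) registers Gym environments such as
# `memory_maze:MemoryMaze-9x9-v0` and uses the classic Gym step API.
import gym

env = gym.make("memory_maze:MemoryMaze-9x9-v0")  # assumed name of the small-maze variant

obs = env.reset()                 # first-person image observation
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()          # random policy; replace with a memory-based agent
    obs, reward, done, info = env.step(action)  # reward is given for reaching the prompted target
    episode_return += reward

print("Episode return:", episode_return)
```

Reward accumulates as the agent reaches prompted targets, so an agent that builds and retains a map of the maze collects targets increasingly quickly, mirroring the gradually increasing per-episode rewards of the human baseline. The abstract also reports that current algorithms benefit from truncated backpropagation through time (TBPTT). The sketch below is a generic illustration of TBPTT, not the authors' training code: a recurrent network is trained on fixed-length segments of a long sequence, carrying the hidden state across segments while detaching it so gradients stop at segment boundaries.

```python
# Generic TBPTT sketch with a GRU on stand-in data (not the paper's agent).
import torch
import torch.nn as nn

seq_len, segment, obs_dim, hidden_dim = 1000, 50, 32, 64
observations = torch.randn(seq_len, 1, obs_dim)   # stand-in for encoded maze observations
targets = torch.randn(seq_len, 1, 1)              # stand-in for value/reward targets

rnn = nn.GRU(obs_dim, hidden_dim)
head = nn.Linear(hidden_dim, 1)
optimizer = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)

hidden = torch.zeros(1, 1, hidden_dim)
for start in range(0, seq_len, segment):
    chunk = observations[start:start + segment]
    out, hidden = rnn(chunk, hidden)              # forward through one segment only
    loss = nn.functional.mse_loss(head(out), targets[start:start + segment])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    hidden = hidden.detach()   # truncate: keep the state, drop the gradient path
```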
Related papers
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory [68.97819665784442]
This paper introduces LongMemEval, a benchmark designed to evaluate five core long-term memory abilities of chat assistants.
LongMemEval presents a significant challenge to existing long-term memory systems.
We present a unified framework that breaks down the long-term memory design into four design choices.
arXiv Detail & Related papers (2024-10-14T17:59:44Z)
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and require long-term memory.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z)
- Assessing Episodic Memory in LLMs with Sequence Order Recall Tasks [42.22616978679253]
We introduce Sequence Order Recall Tasks (SORT), which we adapt from tasks used to study episodic memory in cognitive psychology.
SORT requires LLMs to recall the correct order of text segments, and provides a general framework that is both easily extendable and does not require any additional annotations.
Based on a human experiment with 155 participants, we show that humans can recall sequence order based on long-term memory of a book.
arXiv Detail & Related papers (2024-10-10T17:17:38Z)
- Saliency-Augmented Memory Completion for Continual Learning [8.243137410556495]
How to forget is a problem continual learning must address.
Our paper proposes a new saliency-augmented memory completion framework for continual learning.
arXiv Detail & Related papers (2022-12-26T18:06:39Z)
- A Machine with Short-Term, Episodic, and Semantic Memory Systems [9.42475956340287]
Inspired by the cognitive science theory of the explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems.
Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
arXiv Detail & Related papers (2022-12-05T08:34:23Z)
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- Learning to Rehearse in Long Sequence Memorization [107.14601197043308]
Existing reasoning tasks often rest on the important assumption that the input contents can always be accessed while reasoning.
Memory augmented neural networks introduce a human-like write-read memory to compress and memorize the long input sequence in one pass.
But they have two serious drawbacks: 1) they continually update the memory from current information and inevitably forget the early contents; 2) they do not distinguish what information is important and treat all contents equally.
We propose the Rehearsal Memory to enhance long-sequence memorization by self-supervised rehearsal with a history sampler.
arXiv Detail & Related papers (2021-06-02T11:58:30Z)
- Towards mental time travel: a hierarchical memory for reinforcement learning agents [9.808027857786781]
Reinforcement learning agents often forget details of the past, especially after delays or distractor tasks.
We propose a Hierarchical Transformer Memory (HTM) which helps agents to remember the past in detail.
Agents with HTM can extrapolate to task sequences an order of magnitude longer than they were trained on, and can even generalize zero-shot from a meta-learning setting to maintaining knowledge across episodes.
arXiv Detail & Related papers (2021-05-28T18:12:28Z)
- Not All Memories are Created Equal: Learning to Forget by Expiring [49.053569908417636]
We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information.
This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently.
We show that Expire-Span can scale to memories that are tens of thousands in size, setting a new state of the art on incredibly long context tasks.
arXiv Detail & Related papers (2021-05-13T20:50:13Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.