A Proposal for Intelligent Agents with Episodic Memory
- URL: http://arxiv.org/abs/2005.03182v1
- Date: Thu, 7 May 2020 00:26:42 GMT
- Title: A Proposal for Intelligent Agents with Episodic Memory
- Authors: David Murphy and Thomas S. Paula and Wagston Staehler and Juliano
Vacaro and Gabriel Paz and Guilherme Marques and Bruna Oliveira
- Abstract summary: We argue that an agent would benefit from an episodic memory.
This memory encodes the agent's experience in such a way that the agent can relive the experience.
We propose an architecture combining ANNs and standard Computer Science techniques for supporting storage and retrieval of episodic memories.
- Score: 0.9236074230806579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the future we can expect that artificial intelligent agents, once
deployed, will be required to learn continually from their experience during
their operational lifetime. Such agents will also need to communicate with
humans and other agents regarding the content of their experience, in the
context of passing along their learnings, for the purpose of explaining their
actions in specific circumstances or simply to relate more naturally to humans
concerning experiences the agent acquires that are not necessarily related to
their assigned tasks. We argue that to support these goals, an agent would
benefit from an episodic memory; that is, a memory that encodes the agent's
experience in such a way that the agent can relive the experience, communicate
about it and use its past experience, inclusive of the agent's own past actions,
to learn more effective models and policies. In this short paper, we propose
one potential approach to provide an AI agent with such capabilities. We draw
upon the ever-growing body of work examining the function and operation of the
Medial Temporal Lobe (MTL) in mammals to guide us in adding an episodic memory
capability to an AI agent composed of artificial neural networks (ANNs). Based
on that, we highlight important aspects to be considered in the memory
organization and we propose an architecture combining ANNs and standard
Computer Science techniques for supporting storage and retrieval of episodic
memories. Despite being initial work, we hope this short paper can spark
discussions around the creation of intelligent agents with memory or, at least,
provide a different point of view on the subject.
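The paper itself does not include code, but the proposed combination of ANN encodings with standard Computer Science retrieval techniques can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the class name, interface, and use of cosine-similarity nearest-neighbour search are hypothetical, standing in for whatever encoder and index the authors' architecture would actually use.

```python
import math

class EpisodicMemory:
    """Illustrative store of (embedding, episode) pairs.

    An ANN encoder (not shown) is assumed to map each experience to a
    fixed-size vector; retrieval is a standard nearest-neighbour search
    over those vectors by cosine similarity.
    """

    def __init__(self):
        self.keys = []      # embedding vectors produced by the encoder
        self.episodes = []  # raw experience records the agent can "relive"

    def store(self, embedding, episode):
        self.keys.append(embedding)
        self.episodes.append(episode)

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=1):
        # Rank all stored episodes by similarity to the query embedding
        # and return the k closest matches.
        scored = [(self._cosine(query, key), ep)
                  for key, ep in zip(self.keys, self.episodes)]
        scored.sort(key=lambda t: -t[0])
        return [ep for _, ep in scored[:k]]

mem = EpisodicMemory()
mem.store([1.0, 0.0], {"event": "picked up red block"})
mem.store([0.0, 1.0], {"event": "opened door"})
print(mem.retrieve([0.9, 0.1]))  # the "red block" episode is the closest match
```

A real system would replace the linear scan with an approximate nearest-neighbour index, but the sketch captures the division of labour the abstract describes: ANNs produce the encodings, while conventional data-structure techniques handle storage and lookup.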
Related papers
- Hello Again! LLM-powered Personalized Agent for Long-term Dialogue [63.65128176360345]
We introduce a model-agnostic framework, the Long-term Dialogue Agent (LD-Agent)
It incorporates three independently tunable modules dedicated to event perception, persona extraction, and response generation.
The effectiveness, generality, and cross-domain capabilities of LD-Agent are empirically demonstrated.
arXiv Detail & Related papers (2024-06-09T21:58:32Z)
- A Survey on the Memory Mechanism of Large Language Model based Agents [66.4963345269611]
Large language model (LLM) based agents have recently attracted much attention from the research and industry communities.
A defining feature of LLM-based agents is their self-evolving capability, which is the basis for solving real-world problems.
The key component to support agent-environment interactions is the memory of the agents.
arXiv Detail & Related papers (2024-04-21T01:49:46Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- An In-depth Survey of Large Language Model-based Artificial Intelligence Agents [11.774961923192478]
We have explored the core differences and characteristics between LLM-based AI agents and traditional AI agents.
We conducted an in-depth analysis of the key components of AI agents, including planning, memory, and tool use.
arXiv Detail & Related papers (2023-09-23T11:25:45Z)
- The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z)
- Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents: computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z)
- A Machine with Short-Term, Episodic, and Semantic Memory Systems [9.42475956340287]
Inspired by the cognitive science theory of the explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems.
Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
arXiv Detail & Related papers (2022-12-05T08:34:23Z)
- DANLI: Deliberative Agent for Following Natural Language Instructions [9.825482203664963]
We propose a neuro-symbolic deliberative agent that applies reasoning and planning based on its neural and symbolic representations acquired from past experience.
We show that our deliberative agent achieves greater than 70% improvement over reactive baselines on the challenging TEACh benchmark.
arXiv Detail & Related papers (2022-10-22T15:57:01Z)
- Quantum adaptive agents with efficient long-term memories [0.0]
The more information an agent must recall from its past experiences, the more memory it will need.
We uncover the most general form a quantum agent must adopt to maximise memory compression advantages.
We show these encodings can exhibit extremely favourable scaling advantages relative to memory-minimal classical agents.
arXiv Detail & Related papers (2021-08-24T17:57:05Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.