Keep Me Updated! Memory Management in Long-term Conversations
- URL: http://arxiv.org/abs/2210.08750v1
- Date: Mon, 17 Oct 2022 05:06:38 GMT
- Title: Keep Me Updated! Memory Management in Long-term Conversations
- Authors: Sanghwan Bae, Donghyun Kwak, Soyoung Kang, Min Young Lee, Sungdong
Kim, Yuin Jeong, Hyeri Kim, Sang-Woo Lee, Woomyoung Park and Nako Sung
- Abstract summary: We present a novel task and a dataset of memory management in long-term conversations.
We propose a new mechanism of memory management that eliminates invalidated or redundant information.
Experimental results show that our approach outperforms the baselines in terms of engagingness and humanness.
- Score: 14.587940208778843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Remembering important information from the past and continuing to talk about
it in the present are crucial in long-term conversations. However, previous
literature does not deal with cases where the memorized information is
outdated, which may cause confusion in later conversations. To address this
issue, we present a novel task and a corresponding dataset of memory management
in long-term conversations, in which bots keep track of and bring up the latest
information about users while conversing through multiple sessions. In order to
support more precise and interpretable memory, we represent memory as
unstructured text descriptions of key information and propose a new mechanism
of memory management that selectively eliminates invalidated or redundant
information. Experimental results show that our approach outperforms the
baselines that leave the stored memory unchanged in terms of engagingness and
humanness, with larger performance gap especially in the later sessions.
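To make the update mechanism concrete, below is a minimal Python sketch assuming the paper's framing of memory update as classifying pairs of (stored sentence, incoming sentence) into the operations PASS, REPLACE, APPEND, and DELETE. The operation semantics spelled out in the docstring are one plausible reading, and the rule-based classifier is a hypothetical stand-in for the paper's fine-tuned model.

```python
from typing import Callable, List

def update_memory(
    memory: List[str],
    incoming: List[str],
    classify: Callable[[str, str], str],
) -> List[str]:
    """Selectively drop invalidated or redundant memory sentences (sketch).

    `classify(old, new)` stands in for a learned pairwise model; the
    operation semantics below are one plausible reading:
      PASS    - new sentence is redundant given the stored one
      REPLACE - new sentence invalidates the stored one
      APPEND  - unrelated; keep both
      DELETE  - stored sentence is obsolete regardless of the new one
    """
    updated = list(memory)
    for new in incoming:
        keep_new = True
        for old in list(updated):
            op = classify(old, new)
            if op in ("REPLACE", "DELETE"):
                updated.remove(old)   # eliminate invalidated information
            elif op == "PASS":
                keep_new = False      # eliminate redundant information
        if keep_new:
            updated.append(new)
    return updated

# Toy rule-based stand-in for the learned classifier (illustration only):
# treat sentences sharing the same first two words as updates of each other.
def toy_classify(old: str, new: str) -> str:
    if old == new:
        return "PASS"
    if old.split()[:2] == new.split()[:2]:
        return "REPLACE"
    return "APPEND"

memory = ["User has a dog named Max.", "User lives in Busan."]
print(update_memory(memory, ["User lives in Seoul."], toy_classify))
# -> ['User has a dog named Max.', 'User lives in Seoul.']
```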
Related papers
- LongMemEval: Benchmarking Chat Assistants on Long-Term Interactive Memory [68.97819665784442]
This paper introduces LongMemEval, a benchmark designed to evaluate five core long-term memory abilities of chat assistants.
LongMemEval presents a significant challenge to existing long-term memory systems.
We present a unified framework that breaks down the long-term memory design into four design choices.
arXiv Detail & Related papers (2024-10-14T17:59:44Z)
- Introducing MeMo: A Multimodal Dataset for Memory Modelling in Multiparty Conversations [1.8896253910986929]
The MeMo corpus is the first dataset annotated with participants' memory retention reports.
It integrates validated behavioural and perceptual measures, audio, video, and multimodal annotations.
This paper aims to pave the way for future research in conversational memory modelling for intelligent system development.
arXiv Detail & Related papers (2024-09-07T16:09:36Z)
- Ever-Evolving Memory by Blending and Refining the Past [30.63352929849842]
CREEM is a novel memory system for long-term conversation.
It seamlessly connects past and present information, while also possessing the ability to forget obstructive information.
arXiv Detail & Related papers (2024-03-03T08:12:59Z)
- Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations [39.05338079159942]
This study introduces a novel framework, COmpressive Memory-Enhanced Dialogue sYstems (COMEDY), which eschews traditional retrieval modules and memory databases.
Central to COMEDY is the concept of compressive memory, which integrates session-specific summaries, user-bot dynamics, and past events into a concise memory format.
arXiv Detail & Related papers (2024-02-19T09:19:50Z)
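As a rough illustration of the retrieval-free design COMEDY describes, the sketch below folds each finished session into a single running memory text and conditions generation on it alone. The generic `llm(prompt)` completion function and the prompt wording are assumptions made here; COMEDY itself trains dedicated models rather than prompting.

```python
from typing import Callable, List

def compress_session(llm: Callable[[str], str],
                     memory: str, session_turns: List[str]) -> str:
    # Fold the finished session into one concise memory covering
    # summaries, user-bot dynamics, and past events (no memory database).
    transcript = "\n".join(session_turns)
    return llm(
        "Merge the existing memory and the new session into one concise "
        "memory of key events and the user-bot relationship.\n"
        f"Existing memory:\n{memory}\n\nNew session:\n{transcript}\n"
        "Updated memory:"
    )

def respond(llm: Callable[[str], str], memory: str, user_msg: str) -> str:
    # Generation conditions only on the compressed memory; there is
    # no retrieval step over stored sessions.
    return llm(f"Memory:\n{memory}\n\nUser: {user_msg}\nBot:")
```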
- Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models [75.98775135321355]
Given a long conversation, large language models (LLMs) fail to recall past information and tend to generate inconsistent responses.
We propose recursively generating summaries/memory using the LLM to enhance its long-term memory ability.
arXiv Detail & Related papers (2023-08-29T04:59:53Z)
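A minimal sketch of the recursive scheme: after each session, the LLM rewrites its previous summary/memory to incorporate the new dialogue, so the stored context stays short regardless of conversation length. The `llm(prompt)` completion function and the prompt wording are assumptions, not the paper's exact prompts.

```python
from typing import Callable, List

def recursive_memory(llm: Callable[[str], str], sessions: List[str]) -> str:
    # Maintain one summary that is recursively rewritten, session by
    # session, instead of storing the full conversation history.
    memory = ""
    for session in sessions:
        memory = llm(
            f"Previous memory:\n{memory}\n\n"
            f"New dialogue session:\n{session}\n\n"
            "Rewrite the memory to also cover the new session's key facts:"
        )
    return memory  # used as context when generating later responses
```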
- Enhancing Large Language Model with Self-Controlled Memory Framework [56.38025154501917]
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information.
We propose the Self-Controlled Memory (SCM) framework to enhance the ability of LLMs to maintain long-term memory and recall relevant information.
arXiv Detail & Related papers (2023-04-26T07:25:31Z)
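The sketch below mirrors the high-level loop such a framework implies: a controller first decides whether the current input needs past context at all, and only then retrieves relevant items from a memory stream. `needs_memory` and `retrieve` are hypothetical stand-ins for SCM's memory controller, not its actual API.

```python
from typing import Callable, List

def scm_turn(llm: Callable[[str], str],
             needs_memory: Callable[[str], bool],
             retrieve: Callable[[List[str], str, int], List[str]],
             memory_stream: List[str], user_msg: str) -> str:
    # Controller step 1: is past information needed for this input?
    context = ""
    if needs_memory(user_msg):
        # Controller step 2: pull only the most relevant stored turns.
        context = "\n".join(retrieve(memory_stream, user_msg, 3))
    reply = llm(f"Relevant history:\n{context}\n\nUser: {user_msg}\nBot:")
    memory_stream.append(f"User: {user_msg}\nBot: {reply}")  # store this turn
    return reply
```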
- LaMemo: Language Modeling with Look-Ahead Memory [50.6248714811912]
We propose Look-Ahead Memory (LaMemo) that enhances the recurrence memory by incrementally attending to the right-side tokens.
LaMemo embraces bi-directional attention and segment recurrence with an additional overhead only linearly proportional to the memory length.
Experiments on widely used language modeling benchmarks demonstrate its superiority over the baselines equipped with different types of memory.
arXiv Detail & Related papers (2022-04-15T06:11:25Z)
- Learning to Rehearse in Long Sequence Memorization [107.14601197043308]
Existing reasoning tasks often assume that the input contents can always be accessed while reasoning.
Memory augmented neural networks introduce a human-like write-read memory to compress and memorize the long input sequence in one pass.
But they have two serious drawbacks: 1) they continually update the memory with current information and inevitably forget early contents; 2) they do not distinguish which information is important and treat all contents equally.
We propose the Rehearsal Memory to enhance long-sequence memorization by self-supervised rehearsal with a history sampler.
arXiv Detail & Related papers (2021-06-02T11:58:30Z)
- Not All Memories are Created Equal: Learning to Forget by Expiring [49.053569908417636]
We propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information.
This forgetting of memories enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently.
We show that Expire-Span can scale to memories that are tens of thousands in size, setting a new state of the art on incredibly long context tasks.
arXiv Detail & Related papers (2021-05-13T20:50:13Z)
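Expire-Span's core rule is easy to state: each memory gets a predicted expiration span, and it is masked out of attention once its age exceeds that span. The sketch below computes the resulting keep-mask; the soft ramp the paper uses to make expiration differentiable during training is omitted, and the parameter shapes are assumptions.

```python
import torch

def expire_span_keep_mask(hidden: torch.Tensor, w: torch.Tensor,
                          b: float, max_span: float, t: int) -> torch.Tensor:
    """Which past memories are still attendable at time t (sketch).

    hidden: (n, d) hidden states of memories at positions 0..n-1.
    Each memory i gets a span e_i = max_span * sigmoid(w . h_i + b) and
    expires once its age t - i exceeds e_i, so attention can skip it.
    """
    spans = max_span * torch.sigmoid(hidden @ w + b)    # (n,) predicted spans
    ages = (t - torch.arange(hidden.size(0))).float()   # (n,) memory ages
    return ages <= spans  # boolean mask; True = memory retained

# Toy usage: 8 memories, queried at time step 20.
h = torch.randn(8, 16)
mask = expire_span_keep_mask(h, torch.randn(16), 0.0, 32.0, 20)
print(mask)
```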