There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing
Knowledge-grounded Dialogue with Personal Memory
- URL: http://arxiv.org/abs/2204.02624v1
- Date: Wed, 6 Apr 2022 07:06:37 GMT
- Title: There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing
Knowledge-grounded Dialogue with Personal Memory
- Authors: Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, Rui Yan
- Abstract summary: We introduce personal memory into knowledge selection in Knowledge-grounded conversation.
We devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping are included in a closed loop.
Experimental results show that our method significantly outperforms existing KGC methods on both automatic and human evaluation.
- Score: 67.24942840683904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge-grounded conversation (KGC) shows great potential in building an
engaging and knowledgeable chatbot, and knowledge selection is a key ingredient
in it. However, previous methods for knowledge selection concentrate only
on the relevance between knowledge and dialogue context, ignoring the fact
that an interlocutor's age, hobbies, education, and life experience have a
major effect on his or her personal preference over external knowledge.
Without taking this personalization issue into account, it is difficult to
select the proper knowledge and generate persona-consistent responses. In
this work, we introduce personal memory into knowledge selection in KGC to
address the personalization issue. We propose a variational method to model
the underlying relationship between one's personal memory and his or her
selection of knowledge, and devise a learning scheme in which the forward
mapping from personal memory to knowledge and its inverse mapping are
included in a closed loop so that they can teach each other. Experimental
results show that our method significantly outperforms existing KGC methods
on both automatic and human evaluation.
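The closed-loop scheme described above pairs the forward mapping (personal memory to knowledge) with its inverse so the two can teach each other. The paper's exact variational objective is not reproduced here, so the following is only a minimal sketch of the closed-loop idea, using two bilinear scorers and a round-trip consistency loss; the dimensions, module names, and loss are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the closed-loop (dual-mapping) idea: a forward net maps each
# personal memory fragment to a distribution over knowledge candidates, an
# inverse net maps each candidate back to a distribution over fragments, and
# a round-trip consistency loss lets the two mappings teach each other.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 128                     # shared embedding size (assumed)
fwd = nn.Bilinear(D, D, 1)  # scores (memory fragment, knowledge) pairs
inv = nn.Bilinear(D, D, 1)  # scores (knowledge, memory fragment) pairs
opt = torch.optim.Adam([*fwd.parameters(), *inv.parameters()], lr=1e-4)

def dual_step(memory, knowledge):
    """memory: (M, D) fragment embeddings; knowledge: (K, D) candidates."""
    M, K = memory.size(0), knowledge.size(0)
    # p_fwd[i, j] = p(knowledge j | memory fragment i)
    p_fwd = F.softmax(
        fwd(memory.unsqueeze(1).expand(M, K, D).reshape(-1, D),
            knowledge.unsqueeze(0).expand(M, K, D).reshape(-1, D)).view(M, K),
        dim=1)
    # p_inv[j, i] = p(memory fragment i | knowledge j)
    p_inv = F.softmax(
        inv(knowledge.unsqueeze(1).expand(K, M, D).reshape(-1, D),
            memory.unsqueeze(0).expand(K, M, D).reshape(-1, D)).view(K, M),
        dim=1)
    # A round trip memory -> knowledge -> memory should land where it
    # started, so each mapping supervises the other through the cycle loss.
    cycle = p_fwd @ p_inv                      # (M, M)
    loss = -torch.log(cycle.diagonal() + 1e-9).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random embeddings:
print(dual_step(torch.randn(4, D), torch.randn(6, D)))
```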
Related papers
- Stable Knowledge Editing in Large Language Models [68.98582618305679]
We introduce StableKE, a knowledge editing method based on knowledge augmentation rather than knowledge localization.
To overcome the expense of human labeling, StableKE integrates two automated knowledge augmentation strategies.
StableKE surpasses other knowledge editing methods, demonstrating the stability of both edited knowledge and multi-hop knowledge.
arXiv Detail & Related papers (2024-02-20T14:36:23Z)
- Personalized Large Language Model Assistant with Evolving Conditional Memory [15.780762727225122]
We present a plug-and-play framework that facilitates personalized large language model assistants with evolving conditional memory.
The personalized assistant focuses on intelligently preserving knowledge and experience from the user's dialogue history.
arXiv Detail & Related papers (2023-12-22T02:39:15Z)
- KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords.
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
arXiv Detail & Related papers (2022-12-04T04:05:01Z)
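The keyword extraction step in the KPT entry above can be pictured with token-level surprisal under an off-the-shelf language model. A minimal sketch, assuming GPT-2 from Hugging Face transformers and per-token negative log-likelihood as the uncertainty measure (the paper's actual model and scoring rule may differ):

```python
# Score each token of the dialog by its surprisal under a pre-trained LM
# and keep the top-k as keywords. Model choice and scoring are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def extract_keywords(dialog: str, k: int = 5) -> list[str]:
    ids = tok(dialog, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    # Surprisal of each token given its prefix (shift logits against targets).
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -logp.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
    # The most "uncertain" (highest-surprisal) tokens serve as keywords.
    top = nll.topk(min(k, nll.size(0))).indices + 1  # +1: targets start at position 1
    return [tok.decode([ids[0, i].item()]).strip() for i in top]

print(extract_keywords("I visited the Louvre and the Mona Lisa was smaller than I expected."))
```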
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection [1.1633929083694388]
Knoformer is a dialogue response generation model based on reinforcement learning.
It can automatically select one or more pieces of relevant knowledge from the knowledge pool and needs no knowledge labels during training.
arXiv Detail & Related papers (2021-08-31T08:53:08Z)
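A sketch of the reward-driven selection loop in the Knoformer entry above: a selector policy samples a knowledge sentence, a downstream reward scores the resulting response, and REINFORCE updates the selector, so no knowledge labels are needed. The bilinear scorer and toy reward below are illustrative stand-ins, not Knoformer's architecture.

```python
# REINFORCE over knowledge selection: reward replaces knowledge labels.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D = 64
selector = torch.nn.Bilinear(D, D, 1)   # scores (context, knowledge) pairs
opt = torch.optim.Adam(selector.parameters(), lr=1e-3)

def reinforce_step(ctx, knowledge, reward_fn):
    """ctx: (D,) context embedding; knowledge: (K, D) candidate embeddings."""
    scores = selector(ctx.expand(knowledge.size(0), -1), knowledge).squeeze(-1)
    probs = F.softmax(scores, dim=0)
    idx = torch.multinomial(probs, 1)        # sample a knowledge sentence
    reward = reward_fn(idx.item())           # e.g., quality of the generated response
    loss = -reward * torch.log(probs[idx])   # REINFORCE policy gradient
    opt.zero_grad(); loss.backward(); opt.step()
    return idx.item(), reward

# Toy reward: pretend candidate 2 always yields the best response.
ctx, cands = torch.randn(D), torch.randn(5, D)
for _ in range(3):
    print(reinforce_step(ctx, cands, lambda i: 1.0 if i == 2 else 0.1))
```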
- Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for Open-domain Dialogue Generation [11.72386584395626]
End-to-end neural dialogue systems suffer from generating inconsistent and repetitive responses.
Existing dialogue models incorporate personal knowledge into the dialogue only unilaterally, ignoring the fact that feeding personality-related conversation information back into personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation.
We propose a conversation-adaptive multi-view persona-aware response generation model that aims to enhance conversation consistency and alleviate repetition in two respects.
arXiv Detail & Related papers (2021-07-16T08:59:06Z)
- Difference-aware Knowledge Selection for Knowledge-grounded Conversation Generation [101.48602006200409]
We propose a difference-aware knowledge selection method for multi-turn knowledge-grounded dialogs.
It first computes the difference between the candidate knowledge sentences provided at the current turn and those chosen in the previous turns.
Then, the differential information is fused with or disentangled from the contextual information to facilitate final knowledge selection.
arXiv Detail & Related papers (2020-09-20T07:47:26Z)
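The difference-aware selection described above can be sketched as scoring each current-turn candidate from its own embedding, its difference from the previously chosen knowledge, and the dialogue context. The mean-pooled history summary and simple concatenation below are simplifying assumptions; the entry also mentions disentangling, rather than fusing, the differential and contextual signals.

```python
# Score candidates from [candidate; difference from used knowledge; context].
import torch
import torch.nn as nn

D = 64
score = nn.Linear(3 * D, 1)   # consumes the concatenated features

def select(candidates, prev_chosen, context):
    """candidates: (K, D); prev_chosen: (P, D) knowledge used in earlier
    turns; context: (D,) dialogue context embedding."""
    history = prev_chosen.mean(dim=0)                   # summary of used knowledge
    diff = candidates - history                         # differential information
    ctx = context.expand(candidates.size(0), -1)
    feats = torch.cat([candidates, diff, ctx], dim=-1)  # fuse difference and context
    return score(feats).squeeze(-1).argmax().item()     # pick the next knowledge

print(select(torch.randn(5, D), torch.randn(2, D), torch.randn(D)))
```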
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this problem.
The model, named sequential knowledge transformer (SKT), keeps track of the prior and posterior distributions over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
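The prior/posterior tracking in SKT can be illustrated with two scoring networks and a KL term that pulls the context-only prior toward the response-aware posterior, which is what allows knowledge selection without observing the response at inference time. The sketch below covers a single turn with bilinear scorers; the actual SKT conditions sequentially on past selections, which this toy version omits.

```python
# One turn of prior/posterior knowledge selection with a KL matching term.
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 64
prior_net = nn.Bilinear(D, D, 1)      # p(k | context)
post_net = nn.Bilinear(2 * D, D, 1)   # q(k | context, response)

def knowledge_kl(context, response, knowledge):
    """context, response: (D,); knowledge: (K, D) candidate embeddings."""
    K = knowledge.size(0)
    prior = F.log_softmax(
        prior_net(context.expand(K, -1), knowledge).squeeze(-1), dim=0)
    post = F.log_softmax(
        post_net(torch.cat([context, response]).expand(K, -1), knowledge)
        .squeeze(-1), dim=0)
    # KL(q || p) trains the prior to anticipate the posterior's knowledge
    # choice, which is what lets the model select knowledge at test time.
    kl = F.kl_div(prior, post, log_target=True, reduction="sum")
    sampled = post.exp().multinomial(1).item()  # knowledge used for decoding
    return kl, sampled

kl, idx = knowledge_kl(torch.randn(D), torch.randn(D), torch.randn(6, D))
print(float(kl), idx)
```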
This list is automatically generated from the titles and abstracts of the papers on this site.