You Truly Understand What I Need: Intellectual and Friendly Dialogue
Agents grounding Knowledge and Persona
- URL: http://arxiv.org/abs/2301.02401v1
- Date: Fri, 6 Jan 2023 06:47:21 GMT
- Title: You Truly Understand What I Need: Intellectual and Friendly Dialogue
Agents grounding Knowledge and Persona
- Authors: Jungwoo Lim, Myunghoon Kang, Yuna Hur, Seungwon Jung, Jinsung Kim,
Yoonna Jang, Dongyub Lee, Hyesung Ji, Donghoon Shin, Seungryong Kim, and
Heuiseok Lim
- Abstract summary: We propose an effective dialogue agent that grounds external knowledge and persona simultaneously.
The agent selects the proper knowledge and persona to use for generating the answers with our candidate scoring implemented with a poly-encoder.
We conduct experiments on the persona-knowledge chat and achieve state-of-the-art performance in grounding and generation tasks.
- Score: 30.30372603825815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To build a conversational agent that interacts fluently with humans, previous
studies blend knowledge or personal profile into the pre-trained language
model. However, the model that considers knowledge and persona at the same time
is still limited, leading to hallucination and a passive way of using personas.
We propose an effective dialogue agent that grounds external knowledge and
persona simultaneously. The agent selects the proper knowledge and persona to
use for generating the answers with our candidate scoring implemented with a
poly-encoder. Then, our model generates the utterance with less hallucination
and more engagingness, utilizing retrieval-augmented generation with a
knowledge-persona enhanced query. We conduct experiments on the
persona-knowledge chat and achieve state-of-the-art performance in grounding
and generation tasks on the automatic metrics. Moreover, we validate the
answers from the models regarding hallucination and engagingness through human
evaluation and qualitative results. We show our retriever's effectiveness in
extracting relevant documents compared to the other previous retrievers, along
with the comparison of multiple candidate scoring methods. Code is available at
https://github.com/dlawjddn803/INFO
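The candidate-scoring step described in the abstract (choosing which knowledge and persona sentences to ground before generation) can be sketched with a toy poly-encoder score. The function name `poly_encoder_scores`, the array shapes, and the random embeddings below are illustrative assumptions, not the paper's implementation; in the real model the context codes and candidate vectors come from trained transformer encoders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def poly_encoder_scores(context_codes, candidate_vecs):
    """Score each knowledge/persona candidate against a dialogue context.

    context_codes:  (m, d) array -- m context representations, standing in
                    for the learned "poly codes" attending over the context
                    encoder's outputs (precomputed here for the sketch).
    candidate_vecs: (n, d) array -- one pooled vector per candidate sentence.
    Returns an (n,) array of dot-product scores.
    """
    # Each candidate attends over the m context codes...
    attn = softmax(candidate_vecs @ context_codes.T, axis=-1)  # (n, m)
    ctx_for_cand = attn @ context_codes                        # (n, d)
    # ...and is scored by a dot product with its context summary.
    return np.einsum("nd,nd->n", ctx_for_cand, candidate_vecs)

rng = np.random.default_rng(0)
codes = rng.normal(size=(4, 8))   # m=4 poly codes, hidden size d=8
cands = rng.normal(size=(3, 8))   # 3 candidate knowledge/persona sentences
scores = poly_encoder_scores(codes, cands)
best = int(np.argmax(scores))     # index of the selected candidate
```

The design point of a poly-encoder is that candidates are encoded once into single vectors (cheap, cacheable) while the context keeps m codes, so the attention step recovers some of the expressiveness of full cross-attention at retrieval-time cost.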
Related papers
- Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can be used to serve as agents to simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z)
- KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords.
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
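The keyword-extraction idea above (treating the tokens a pre-trained LM is least certain about as keywords) can be sketched as a top-k selection over per-token surprisal. The function `pick_keywords` and the hard-coded token probabilities are hypothetical stand-ins for real LM scores, not KPT's actual procedure.

```python
import numpy as np

def pick_keywords(tokens, token_probs, k=2):
    """Pick the k tokens the language model is least certain about.

    token_probs[i] is the (hypothetical) probability a pre-trained LM
    assigned to tokens[i]; lower probability means higher uncertainty.
    """
    nll = -np.log(np.asarray(token_probs))  # per-token surprisal
    top = np.argsort(nll)[::-1][:k]         # k most uncertain tokens
    return [tokens[i] for i in sorted(top)] # keep original word order

toks = ["i", "visited", "the", "louvre", "yesterday"]
probs = [0.9, 0.3, 0.95, 0.02, 0.4]         # illustrative LM probabilities
keywords = pick_keywords(toks, probs, k=2)  # -> ["visited", "louvre"]
```

High-surprisal tokens tend to be content-bearing words rather than function words, which is what makes them useful as self-supervised grounding targets.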
arXiv Detail & Related papers (2022-12-04T04:05:01Z)
- Improving Multimodal Interactive Agents with Reinforcement Learning from Human Feedback [16.268581985382433]
An important goal in artificial intelligence is to create agents that can both interact naturally with humans and learn from their feedback.
Here we demonstrate how to use reinforcement learning from human feedback to improve upon simulated, embodied agents.
arXiv Detail & Related papers (2022-11-21T16:00:31Z)
- Grounding in social media: An approach to building a chit-chat dialogue model [9.247397520986999]
Building open-domain dialogue systems capable of rich human-like conversational ability is one of the fundamental challenges in language generation.
Current work on knowledge-grounded dialogue generation primarily focuses on persona incorporation or searching a fact-based structured knowledge source such as Wikipedia.
Our method takes a broader and simpler approach, which aims to improve the raw conversation ability of the system by mimicking the human response behavior on social media.
arXiv Detail & Related papers (2022-06-12T09:01:57Z)
- Towards Building a Personalized Dialogue Generator via Implicit User Persona Detection [0.0]
We consider that high-quality transmission is essentially built on apprehending the persona of the other party.
Motivated by this, we propose a novel personalized dialogue generator by detecting implicit user persona.
arXiv Detail & Related papers (2022-04-15T08:12:10Z)
- Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge [25.378474996192438]
We introduce a Call For Customized conversation dataset in which the customized answers are built with the user's persona and Wikipedia knowledge.
We evaluate the abilities to make informative and customized utterances of pre-trained language models.
arXiv Detail & Related papers (2021-12-16T04:44:27Z)
- Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for Open-domain Dialogue Generation [11.72386584395626]
End-to-end intelligent neural dialogue systems suffer from generating inconsistent and repetitive responses.
Existing dialogue models unilaterally incorporate personal knowledge into the dialogue, ignoring that feeding personality-related conversation information back into the personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation.
We propose a conversation-adaption, multi-view, persona-aware response generation model that aims to enhance conversation consistency and alleviate repetition in two respects.
arXiv Detail & Related papers (2021-07-16T08:59:06Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it as textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
arXiv Detail & Related papers (2020-04-30T07:31:24Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.