Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge
- URL: http://arxiv.org/abs/2112.08619v1
- Date: Thu, 16 Dec 2021 04:44:27 GMT
- Title: Call for Customized Conversation: Customized Conversation Grounding Persona and Knowledge
- Authors: Yoonna Jang, Jungwoo Lim, Yuna Hur, Dongsuk Oh, Suhyune Son, Yeonsoo Lee, Donghoon Shin, Seungryong Kim, and Heuiseok Lim
- Abstract summary: We introduce the call For Customized conversation (FoCus) dataset, where customized answers are built with the user's persona and Wikipedia knowledge. We evaluate the ability of pre-trained language models to make informative and customized utterances.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans usually have conversations by making use of prior knowledge about a topic and background information about the people they are talking to. However, existing conversational agents and datasets do not consider such comprehensive information, so they are limited in generating utterances in which knowledge and persona are properly fused. To address this issue, we introduce the call For Customized conversation (FoCus) dataset, where customized answers are built with the user's persona and Wikipedia knowledge. To evaluate the ability of pre-trained language models to make informative and customized utterances, we utilize BART and GPT-2 as well as other transformer-based models. We assess their generation abilities with automatic scores and conduct human evaluations for qualitative results. We examine whether the models reflect adequate persona and knowledge with our two proposed sub-tasks, persona grounding (PG) and knowledge grounding (KG). Moreover, we show through a grounding quality assessment that the utterances in our data are constructed with the proper knowledge and persona.
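As a rough illustration of how the PG and KG sub-tasks could be scored, the sketch below computes per-turn grounding accuracy. The field names and label format are assumptions made for illustration, not the released FoCus format.

```python
# Hypothetical sketch of persona-grounding (PG) and knowledge-grounding (KG)
# evaluation: PG checks the model's per-sentence persona usage predictions,
# KG checks whether the chosen knowledge candidate matches the gold one.
from typing import TypedDict

class Turn(TypedDict):
    persona_pred: list[bool]   # model's yes/no decision per persona sentence
    persona_gold: list[bool]   # gold usage labels (hypothetical field name)
    knowledge_pred: int        # index of the chosen knowledge candidate
    knowledge_gold: int        # index of the gold knowledge candidate

def grounding_accuracy(turns: list[Turn]) -> tuple[float, float]:
    pg_hits = sum(t["persona_pred"] == t["persona_gold"] for t in turns)
    kg_hits = sum(t["knowledge_pred"] == t["knowledge_gold"] for t in turns)
    return pg_hits / len(turns), kg_hits / len(turns)

turns: list[Turn] = [
    {"persona_pred": [True, False], "persona_gold": [True, False],
     "knowledge_pred": 2, "knowledge_gold": 2},
    {"persona_pred": [True, True], "persona_gold": [True, False],
     "knowledge_pred": 0, "knowledge_gold": 3},
]
pg, kg = grounding_accuracy(turns)
print(f"PG accuracy: {pg:.2f}, KG accuracy: {kg:.2f}")  # 0.50, 0.50
```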
Related papers
- Learning from Implicit User Feedback, Emotions and Demographic Information in Task-Oriented and Document-Grounded Dialogues [52.95506649193427]
We introduce FEDI, the first English task-oriented and document-grounded dialogue dataset annotated with this information.
Experiments with Flan-T5, GPT-2 and Llama 2 show a particularly positive impact on task completion and factual consistency.
arXiv Detail & Related papers (2024-01-17T14:52:26Z)
- The Knowledge Alignment Problem: Bridging Human and External Knowledge for Large Language Models [65.80573571314534]
We introduce MixAlign, a framework that interacts with both the human user and the knowledge base to obtain and integrate clarifications on how the user question relates to the stored information.
Experimental results highlight the crucial role of knowledge alignment in boosting model performance and mitigating hallucination, with improvements of up to 22.2% and 27.1%, respectively.
arXiv Detail & Related papers (2023-05-23T04:22:50Z)
- You Truly Understand What I Need: Intellectual and Friendly Dialogue Agents grounding Knowledge and Persona [30.30372603825815]
We propose an effective dialogue agent that grounds external knowledge and persona simultaneously.
The agent selects the proper knowledge and persona for generating answers via candidate scoring implemented with a poly-encoder (see the sketch below).
We conduct experiments on the persona-knowledge chat and achieve state-of-the-art performance in grounding and generation tasks.
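A minimal toy of poly-encoder-style candidate scoring, assuming pre-computed token and candidate embeddings. All module names and dimensions here are illustrative, not the paper's code.

```python
# Toy poly-encoder-style scorer: the dialog context is compressed into a few
# learned "code" vectors that attend over context tokens; each candidate then
# attends over those codes, and the score is a final dot product.
import torch
import torch.nn as nn

class ToyPolyScorer(nn.Module):
    def __init__(self, dim: int = 64, n_codes: int = 4):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(n_codes, dim))  # learned context codes

    def forward(self, ctx_tokens: torch.Tensor, cand: torch.Tensor) -> torch.Tensor:
        # ctx_tokens: (T, dim) token embeddings; cand: (N, dim) candidate embeddings
        attn = torch.softmax(self.codes @ ctx_tokens.T, dim=-1)  # (n_codes, T)
        ctx_vecs = attn @ ctx_tokens                             # (n_codes, dim)
        w = torch.softmax(cand @ ctx_vecs.T, dim=-1)             # (N, n_codes)
        ctx_final = w @ ctx_vecs                                 # (N, dim)
        return (ctx_final * cand).sum(-1)                        # (N,) scores

scorer = ToyPolyScorer()
scores = scorer(torch.randn(10, 64), torch.randn(5, 64))
print(scores.argmax().item())  # index of the best knowledge/persona candidate
```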
arXiv Detail & Related papers (2023-01-06T06:47:21Z)
- KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords (sketched below).
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
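A hedged sketch of this keyword-extraction idea: score each token by its negative log-likelihood under GPT-2 and keep the highest-scoring (most uncertain) tokens. The exact uncertainty measure used by KPT may differ; this is one plausible reading.

```python
# Score each token in a dialog turn by its NLL under a pre-trained LM and
# keep the most "uncertain" (highest-NLL) tokens as keywords.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def extract_keywords(text: str, top_k: int = 3) -> list[str]:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # NLL of token t given tokens < t (the first token has no context).
    nll = F.cross_entropy(logits[0, :-1], ids[0, 1:], reduction="none")
    top = nll.topk(min(top_k, nll.numel())).indices + 1  # shift to token positions
    return [tokenizer.decode([ids[0, i].item()]) for i in top.tolist()]

print(extract_keywords("I visited the Sagrada Familia in Barcelona last summer."))
```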
arXiv Detail & Related papers (2022-12-04T04:05:01Z)
- Persona-Knowledge Dialogue Multi-Context Retrieval and Enhanced Decoding Methods [1.066048003460524]
We tackle Persona-Knowledge identification and response generation tasks.
We design an informed data augmentation strategy that is compatible with neural Q&A retrieval models.
We achieve SOTA across the official metrics, with an average grounding accuracy of 93.99% and a SacreBLEU score of 23.62.
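For reference, a corpus-level SacreBLEU score like the 23.62 above is commonly computed with the sacrebleu package; the hypotheses and references below are made-up stand-ins, not the task data.

```python
# Compute corpus BLEU with sacrebleu (pip install sacrebleu). The references
# argument is a list of reference streams, one reference per hypothesis.
import sacrebleu

hypotheses = ["the eiffel tower is in paris", "i love hiking in the alps"]
references = [["The Eiffel Tower is located in Paris.", "I love hiking in the Alps."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"SacreBLEU: {bleu.score:.2f}")
```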
arXiv Detail & Related papers (2022-07-28T07:19:08Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We construct a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-stage knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- There Are a Thousand Hamlets in a Thousand People's Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory [67.24942840683904]
We introduce personal memory into knowledge selection in knowledge-grounded conversation.
We devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping are included in a closed loop (sketched below).
Experimental results show that our method significantly outperforms existing KGC methods on both automatic and human evaluation.
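A minimal sketch of the closed-loop idea: pair a forward map (personal memory to knowledge) with an inverse map and train them with a cycle-consistency objective. The architectures and loss below are assumptions for illustration, not the paper's method.

```python
# Forward map: personal-memory embedding -> knowledge embedding.
# Inverse map: knowledge embedding -> personal-memory embedding.
# The cycle loss encourages the two mappings to be mutually consistent.
import torch
import torch.nn as nn

dim = 128
fwd = nn.Linear(dim, dim)   # personal memory -> knowledge (toy architecture)
inv = nn.Linear(dim, dim)   # knowledge -> personal memory (toy architecture)

memory = torch.randn(8, dim)          # batch of memory embeddings
knowledge = fwd(memory)               # predicted knowledge embeddings
reconstructed = inv(knowledge)        # map back to the memory space

cycle_loss = nn.functional.mse_loss(reconstructed, memory)
cycle_loss.backward()                 # gradients flow through both mappings
print(float(cycle_loss))
```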
arXiv Detail & Related papers (2022-04-06T07:06:37Z)
- Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for Open-domain Dialogue Generation [11.72386584395626]
End-to-end neural dialogue systems suffer from generating inconsistent and repetitive responses.
Existing dialogue models focus on unilaterally incorporating personal knowledge into the dialog, while ignoring that feeding personality-related conversation information back into personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation.
We propose a conversation-adaptive, multi-view persona-aware response generation model that enhances conversation consistency and alleviates repetition in two ways.
arXiv Detail & Related papers (2021-07-16T08:59:06Z)
- Human-like informative conversations: Better acknowledgements using conditional mutual information [0.0]
This work aims to build a dialogue agent that can weave new factual content into conversations as naturally as humans.
We draw insights from linguistic principles of conversational analysis and annotate human-human conversations from the Switchboard Dialog Act Corpus.
arXiv Detail & Related papers (2021-04-16T00:13:57Z)
- Information Seeking in the Spirit of Learning: a Dataset for Conversational Curiosity [10.409312809724458]
We design a Wizard-of-Oz dialog task that tests the hypothesis that engagement increases when users are presented with facts related to what they know.
We collect and release 14K dialogs (181K utterances) where users and assistants converse about geographic topics.
This dataset is annotated with pre-existing user knowledge, message-level dialog acts, grounding to Wikipedia, and user reactions to messages.
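To make those annotation layers concrete, here is a hypothetical record in that spirit. This is an illustrative schema only, not the dataset's actual release format.

```python
# Hypothetical annotated message combining the layers the summary describes:
# dialog acts, Wikipedia grounding, assumed user knowledge, and a reaction.
message = {
    "topic": "Iceland",
    "speaker": "assistant",
    "text": "Iceland sits on the Mid-Atlantic Ridge, which is why it is so volcanic.",
    "dialog_acts": ["inform"],
    "grounded_wikipedia_section": "Iceland#Geology",
    "assumed_user_knowledge": ["Reykjavik is the capital"],
    "user_reaction": "liked",
}
print(message["grounded_wikipedia_section"])
```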
arXiv Detail & Related papers (2020-05-01T01:55:09Z)
- IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems [80.0781718687327]
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, IART.
IART integrates user intent modeling and language representation learning on top of the Transformer architecture.
arXiv Detail & Related papers (2020-02-03T05:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences.