Think Before You Speak: Explicitly Generating Implicit Commonsense
Knowledge for Response Generation
- URL: http://arxiv.org/abs/2110.08501v4
- Date: Mon, 11 Sep 2023 21:02:01 GMT
- Title: Think Before You Speak: Explicitly Generating Implicit Commonsense
Knowledge for Response Generation
- Authors: Pei Zhou, Karthik Gopalakrishnan, Behnam Hedayatnia, Seokhwan Kim, Jay
Pujara, Xiang Ren, Yang Liu, Dilek Hakkani-Tur
- Abstract summary: Implicit knowledge, such as common sense, is key to fluid human conversations.
In this paper, we present Think-Before-Speaking (TBS), a generative approach to first externalize implicit commonsense knowledge (think) and then use this knowledge to generate responses (speak).
Empirical results show TBS models outperform end-to-end and knowledge-augmented RG baselines on most automatic metrics.
- Score: 45.86667254934832
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit knowledge, such as common sense, is key to fluid human
conversations. Current neural response generation (RG) models are trained to
generate responses directly, omitting unstated implicit knowledge. In this
paper, we present Think-Before-Speaking (TBS), a generative approach to first
externalize implicit commonsense knowledge (think) and use this knowledge to
generate responses (speak). We expect that externalizing implicit knowledge
allows more efficient learning, produces more informative responses, and
enables more explainable models. We analyze different choices to collect
knowledge-aligned dialogues, represent implicit knowledge, and transition
between knowledge and dialogues. Empirical results show TBS models outperform
end-to-end and knowledge-augmented RG baselines on most automatic metrics and
generate more informative, specific, and commonsense-following responses, as
evaluated by human annotators. TBS also generates knowledge that makes sense
and is relevant to the dialogue around 85% of the time.
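The two-stage "think, then speak" decomposition described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: `toy_generate` stands in for a trained sequence-to-sequence model, and the `[knowledge]`/`[response]` marker strings are assumed delimiters, not the authors' actual special tokens.

```python
# Minimal sketch of a think-before-speak (TBS) style pipeline.
# One generator is prompted twice: first to externalize implicit
# commonsense knowledge ("think"), then to produce a response
# conditioned on that knowledge ("speak").

KNOWLEDGE_PREFIX = "[knowledge]"  # assumed marker, not the paper's token
RESPONSE_PREFIX = "[response]"    # assumed marker, not the paper's token

def toy_generate(prompt: str) -> str:
    """Hypothetical stand-in for a trained language model."""
    if prompt.endswith(KNOWLEDGE_PREFIX):
        return "going camping requires a tent"
    return "Don't forget to pack your tent!"

def tbs_respond(dialogue_history: str) -> tuple[str, str]:
    # Think: generate the implicit commonsense knowledge explicitly.
    knowledge = toy_generate(dialogue_history + " " + KNOWLEDGE_PREFIX)
    # Speak: condition the response on history plus generated knowledge.
    prompt = f"{dialogue_history} {KNOWLEDGE_PREFIX} {knowledge} {RESPONSE_PREFIX}"
    response = toy_generate(prompt)
    return knowledge, response

knowledge, response = tbs_respond("A: We're going camping this weekend.")
```

Because the knowledge is emitted as text before the response, it can be inspected directly, which is the explainability benefit the abstract mentions.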
Related papers
- SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge [60.76719375410635]
We propose a new benchmark (SOK-Bench) consisting of 44K questions and 10K situations with instance-level annotations depicted in the videos.
The reasoning process is required to understand and apply situated knowledge and general knowledge for problem-solving.
We generate associated question-answer pairs and reasoning processes, finally followed by manual reviews for quality assurance.
arXiv Detail & Related papers (2024-05-15T21:55:31Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- KPT: Keyword-guided Pre-training for Grounded Dialog Generation [82.68787152707455]
We propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation.
Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords.
We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages.
arXiv Detail & Related papers (2022-12-04T04:05:01Z)
- RHO ($\rho$): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding [57.46495388734495]
This paper presents RHO ($\rho$), which utilizes the representations of linked entities and relation predicates from a knowledge graph (KG).
We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism.
arXiv Detail & Related papers (2022-12-03T10:36:34Z)
- RT-KGD: Relation Transition Aware Knowledge-Grounded Dialogue Generation [20.37399983466163]
We propose a Relation Transition aware Knowledge-Grounded Dialogue Generation model (RT-KGD).
Specifically, inspired by the latent logic of human conversation, our model integrates dialogue-level relation transition regularities with turn-level entity semantic information.
In this manner, the interaction between knowledge is considered to produce abundant clues for predicting the appropriate knowledge and generating coherent responses.
arXiv Detail & Related papers (2022-07-17T16:07:38Z)
- Knowledge-Grounded Dialogue with Reward-Driven Knowledge Selection [1.1633929083694388]
Knoformer is a dialogue response generation model based on reinforcement learning.
It can automatically select one or more pieces of related knowledge from the knowledge pool and does not need knowledge labels during training.
arXiv Detail & Related papers (2021-08-31T08:53:08Z)
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing both the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
- Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue [51.513276162736844]
We propose a sequential latent variable model as the first approach to this matter.
The model named sequential knowledge transformer (SKT) can keep track of the prior and posterior distribution over knowledge.
arXiv Detail & Related papers (2020-02-18T11:59:59Z)
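The prior/posterior distinction that SKT tracks can be sketched in miniature. This is an illustrative toy only: SKT uses learned neural encoders over dialogue history, whereas the word-overlap scorer and softmax below are assumed placeholders to show the shape of the idea (the posterior additionally conditions on the response, which is observable only during training).

```python
import math

def softmax(scores: list[float]) -> list[float]:
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def overlap(a: str, b: str) -> float:
    """Toy relevance score: shared-word count (stand-in for a neural scorer)."""
    return float(len(set(a.split()) & set(b.split())))

def prior(context: str, pool: list[str]) -> list[float]:
    # Prior: distribution over knowledge given dialogue context alone.
    return softmax([overlap(context, k) for k in pool])

def posterior(context: str, response: str, pool: list[str]) -> list[float]:
    # Posterior: also conditions on the gold response (training time only).
    return softmax([overlap(context + " " + response, k) for k in pool])

pool = ["camping needs a tent", "the sea is salty"]
p = prior("we go camping", pool)
q = posterior("we go camping", "bring the tent", pool)
```

Training then pushes the prior toward the posterior so that, at inference, the model can select plausible knowledge from context alone.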
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.