Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation
- URL: http://arxiv.org/abs/2302.01441v1
- Date: Thu, 2 Feb 2023 22:04:07 GMT
- Title: Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation
- Authors: Yiren Liu, Halil Kilicoglu
- Abstract summary: We propose a novel framework that improves empathetic dialogue generation using pre-trained language models.
Experiments show that both incorporating social commonsense knowledge and enforcing control over generation improve generation performance.
- Score: 1.0558951653323283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving the emotional awareness of pre-trained language models is an important emerging problem for dialogue generation tasks. Although prior studies have introduced methods to improve empathetic dialogue generation, few have discussed how to incorporate commonsense knowledge into pre-trained language models for controllable dialogue generation. In this study, we propose a novel framework that improves empathetic dialogue generation with pre-trained language models by 1) incorporating commonsense knowledge through prompt verbalization, and 2) controlling dialogue generation using a strategy-driven future discriminator. Our experiments show that both incorporating social commonsense knowledge and enforcing control over generation improve generation performance. Finally, we discuss the implications of our study for future research.
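As a rough illustration of the two mechanisms named in the abstract, the sketch below verbalizes COMET/ATOMIC-style commonsense tuples into a prompt prefix and re-ranks candidate next tokens with a FUDGE-style strategy discriminator at decode time. The relation names, templates, and `strategy_score` callable are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only, not the authors' released code. Assumes
# COMET/ATOMIC-style relation tuples (xReact, xIntent, xEffect) and an
# externally trained strategy discriminator; all names are hypothetical.

RELATION_TEMPLATES = {
    "xReact": "The speaker feels {}.",
    "xIntent": "The speaker wants {}.",
    "xEffect": "As a result, the speaker {}.",
}

def verbalize_commonsense(inferences: dict) -> str:
    """Turn relation -> inference pairs into a natural-language prompt
    prefix that is prepended to the dialogue context."""
    return " ".join(
        RELATION_TEMPLATES[rel].format(text)
        for rel, text in inferences.items()
        if rel in RELATION_TEMPLATES
    )

def pick_next_token(candidates, partial_response, strategy_score):
    """FUDGE-style control: re-rank candidate next tokens by how likely
    the eventual completion realizes the target response strategy.
    `strategy_score(text) -> float` is an assumed discriminator."""
    return max(candidates, key=lambda tok: strategy_score(partial_response + tok))

prefix = verbalize_commonsense({"xReact": "anxious", "xIntent": "to be reassured"})
# -> "The speaker feels anxious. The speaker wants to be reassured."
```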
Related papers
- Towards Harnessing Large Language Models for Comprehension of Conversational Grounding [1.8434042562191812]
This study investigates the capabilities of large language models in classifying dialogue turns related to explicit or implicit grounding and predicting grounded knowledge elements.
Our experimental results reveal challenges encountered by large language models in the two tasks.
This line of work aims to develop more effective dialogue systems that are better equipped to handle the intricacies of grounded knowledge in conversations.
arXiv Detail & Related papers (2024-06-03T19:34:39Z)
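A hedged sketch of the turn-level classification task described in the entry above: prompt an LLM to label each turn as explicit grounding, implicit grounding, or neither. The prompt wording and the `complete` callable (standing in for any LLM completion API) are assumptions, not the paper's setup.

```python
# Hypothetical prompt-based classifier for grounding in dialogue turns.
LABELS = ("explicit", "implicit", "none")

def classify_grounding(context: str, turn: str, complete) -> str:
    prompt = (
        f"Dialogue context:\n{context}\n\n"
        f"Turn: {turn}\n"
        "Does this turn ground shared knowledge explicitly, implicitly, "
        "or not at all? Answer with one word: explicit, implicit, or none."
    )
    answer = complete(prompt).strip().lower()
    return answer if answer in LABELS else "none"
```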
- FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue [20.79359173822053]
We propose a novel dialogue pre-training model, FutureTOD, which distills future knowledge to the representation of the previous dialogue context.
Our intuition is that a good dialogue representation both learns local context information and predicts future information.
arXiv Detail & Related papers (2023-06-17T10:40:07Z)
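Reading only the summary above, the core objective might take a shape like the following self-distillation term, where a student encodes the past context alone and a teacher encodes context plus future turns. This is a guess at the idea's form, not FutureTOD's actual code.

```python
import torch
import torch.nn.functional as F

def future_distillation_loss(student_repr: torch.Tensor,
                             teacher_repr: torch.Tensor) -> torch.Tensor:
    """student_repr encodes the dialogue context alone; teacher_repr
    encodes context plus future turns. Detaching the teacher means only
    the student is pushed toward the future-aware representation."""
    return (1.0 - F.cosine_similarity(
        student_repr, teacher_repr.detach(), dim=-1)).mean()
```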
- Controllable Mixed-Initiative Dialogue Generation through Prompting [50.03458333265885]
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control.
Agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner.
The standard approach has been to fine-tune pre-trained language models for generation conditioned on these intents.
We instead prompt large language models as a drop-in replacement for fine-tuning on conditional generation.
arXiv Detail & Related papers (2023-05-06T23:11:25Z)
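The entry above swaps fine-tuning for prompting: a frozen LLM is simply asked to realize the intent chosen by the policy planner. A minimal sketch, with the template wording as an assumption:

```python
# Illustration of prompting as a drop-in for fine-tuning on intents.
def intent_conditioned_prompt(history: list, intent: str) -> str:
    turns = "\n".join(history)
    return (
        f"The following is a dialogue:\n{turns}\n"
        f"Write the next turn so that it performs this strategy: {intent}.\n"
        "Next turn:"
    )

print(intent_conditioned_prompt(
    ["User: I haven't slept well all week."],
    "ask a clarifying question",
))
```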
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305]
Training machines to understand natural language and interact with humans is an elusive and essential task of artificial intelligence.
This paper reviews the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
In addition, we categorize dialogue-related pre-training techniques employed to enhance pre-trained language models (PrLMs) in dialogue scenarios.
arXiv Detail & Related papers (2021-10-11T03:52:37Z)
- Knowledge-Grounded Dialogue Generation with Pre-trained Language Models [74.09352261943911]
We study knowledge-grounded dialogue generation with pre-trained language models.
We propose equipping response generation defined by a pre-trained language model with a knowledge selection module.
arXiv Detail & Related papers (2020-10-17T16:49:43Z)
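A stand-in for the knowledge selection module described in the entry above: score candidate knowledge sentences against the dialogue context and prepend the winners to the generator's input. The token-overlap scorer is a placeholder where the paper would use a learned module.

```python
# Naive knowledge selection: pick the k candidates sharing the most
# tokens with the context, then build the generator's input string.
def select_knowledge(context: str, candidates: list, k: int = 2) -> list:
    ctx = set(context.lower().split())
    return sorted(candidates,
                  key=lambda s: len(ctx & set(s.lower().split())),
                  reverse=True)[:k]

def generator_input(context: str, candidates: list) -> str:
    knowledge = " ".join(select_knowledge(context, candidates))
    return f"knowledge: {knowledge} context: {context} response:"
```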
- Ranking Enhanced Dialogue Generation [77.8321855074999]
How to effectively utilize the dialogue history is a crucial problem in multi-turn dialogue generation.
Previous works usually employ various neural network architectures to model the history.
This paper proposes a Ranking Enhanced Dialogue Generation framework.
arXiv Detail & Related papers (2020-08-13T01:49:56Z)
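One way to picture the ranking idea above: keep only the past utterances most relevant to the current query, restored to their original order, instead of feeding the raw history. The overlap scorer below is a placeholder for the framework's learned ranker.

```python
# Hypothetical history ranking: top-k utterances by token overlap with
# the query, returned in their original dialogue order.
def rank_history(query: str, history: list, top_k: int = 3) -> list:
    q = set(query.lower().split())
    scores = [len(q & set(u.lower().split())) for u in history]
    keep = sorted(sorted(range(len(history)),
                         key=lambda i: scores[i], reverse=True)[:top_k])
    return [history[i] for i in keep]
```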
- Knowledge Injection into Dialogue Generation via Language Models [85.65843021510521]
InjK is a two-stage approach to inject knowledge into a dialogue generation model.
First, we train a large-scale language model and query it for textual knowledge.
Second, we frame a dialogue generation model to sequentially generate textual knowledge and a corresponding response.
arXiv Detail & Related papers (2020-04-30T07:31:24Z)
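The two-stage decoding described in the entry above, reduced to a sketch: first decode a knowledge string from the context, then decode the response conditioned on both. The `generate` callable stands in for any language-model decoding call; the prompt formats are assumptions.

```python
# Hypothetical two-stage decoding in the spirit of InjK.
def two_stage_respond(context: str, generate):
    knowledge = generate(f"context: {context} knowledge:")
    response = generate(f"context: {context} knowledge: {knowledge} response:")
    return knowledge, response
```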