Will I Sound Like Me? Improving Persona Consistency in Dialogues through
Pragmatic Self-Consciousness
- URL: http://arxiv.org/abs/2004.05816v2
- Date: Tue, 6 Oct 2020 08:20:22 GMT
- Title: Will I Sound Like Me? Improving Persona Consistency in Dialogues through
Pragmatic Self-Consciousness
- Authors: Hyunwoo Kim, Byeongchang Kim, Gunhee Kim
- Abstract summary: Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, compels dialogue agents to refrain from uttering contradictions.
- Score: 62.55060760615656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the task of improving persona consistency of dialogue agents.
Recent models tackling consistency often train with additional Natural Language
Inference (NLI) labels or attach trained extra modules to the generative agent
for maintaining consistency. However, such additional labels and training can
be demanding. Also, we find that even the best-performing persona-based agents are
insensitive to contradictory words. Inspired by social cognition and
pragmatics, we endow existing dialogue agents with public self-consciousness on
the fly through an imaginary listener. Our approach, based on the Rational
Speech Acts framework (Frank and Goodman, 2012), compels dialogue agents to
refrain from uttering contradictions. We further extend the framework by
learning the distractor selection, which has usually been done manually or
randomly. Results on the Dialogue NLI (Welleck et al., 2019) and PersonaChat (Zhang
et al., 2018) datasets show that our approach reduces contradiction and improves
consistency of existing dialogue models. Moreover, we show that it can be
generalized to improve context-consistency beyond persona in dialogues.
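As a rough illustration of the Rational Speech Acts idea in the abstract, the sketch below reranks candidate utterances with an imaginary listener so that the chosen utterance points back to the agent's own persona rather than a distractor. This is a minimal, assumption-laden toy: the paper applies the listener incrementally during decoding and learns distractor selection, neither of which is modelled here, and all function names and the alpha weight are invented for illustration.

```python
# Minimal sketch of RSA-style pragmatic reranking for persona consistency.
# Illustrative only; not the paper's actual implementation.
import numpy as np


def logsumexp(x, axis):
    m = np.max(x, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))


def base_speaker_log_probs(candidates, personas, context):
    """S0: log p(utterance | persona, context) from any existing dialogue
    model. Stubbed with random values purely for illustration."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(personas), len(candidates)))


def pragmatic_choice(candidates, personas, context, self_idx=0, alpha=0.8):
    """Pick the candidate a self-conscious speaker would utter: one that lets
    an imaginary listener recover the agent's own persona (personas[self_idx])
    rather than a distractor persona."""
    log_s0 = base_speaker_log_probs(candidates, personas, context)

    # Imaginary listener L(persona | utterance): normalise S0 over personas.
    log_listener = log_s0 - logsumexp(log_s0, axis=0)

    # Pragmatic speaker S1: trade off listener evidence against base fluency.
    log_s1 = alpha * log_listener[self_idx] + (1.0 - alpha) * log_s0[self_idx]
    return candidates[int(np.argmax(log_s1))]


if __name__ == "__main__":
    personas = ["I am a vegetarian.", "I love eating steak."]  # self + distractor
    candidates = ["I never eat meat.", "I had a great steak last night."]
    print(pragmatic_choice(candidates, personas, context="What did you eat?"))
```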
Related papers
- Talk With Human-like Agents: Empathetic Dialogue Through Perceptible Acoustic Reception and Reaction [23.115506530649988]
PerceptiveAgent is an empathetic multi-modal dialogue system designed to discern deeper or more subtle meanings.
PerceptiveAgent perceives acoustic information from input speech and generates empathetic responses based on speaking styles described in natural language.
arXiv Detail & Related papers (2024-06-18T15:19:51Z)
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue [13.774377524019723]
We propose an efficient Multi-round Interactive Dialogue Tuning (Midi-Tuning) framework.
It models the agent and user individually with two adapters built upon large language models.
Our framework performs better than traditional fine-tuning and shows great potential for improving dialogue consistency.
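The two-adapter idea mentioned above can be pictured with a small PyTorch sketch; the module layout, bottleneck size, and residual form below are assumptions for illustration, not the framework's actual architecture.

```python
# Rough, assumption-based sketch of modelling the agent and the user with
# two separate adapters on top of a shared (frozen) language model.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Small residual bottleneck applied to frozen LM hidden states."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))


class RoleAdapters(nn.Module):
    """Route each turn's hidden states through the agent or the user adapter."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.agent = Adapter(hidden_size)
        self.user = Adapter(hidden_size)

    def forward(self, hidden_states, is_agent_turn):
        adapter = self.agent if is_agent_turn else self.user
        return adapter(hidden_states)


# Example: hidden states for one 10-token turn spoken by the agent.
h = torch.randn(1, 10, 768)
print(RoleAdapters()(h, is_agent_turn=True).shape)  # torch.Size([1, 10, 768])
```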
arXiv Detail & Related papers (2024-02-10T14:52:52Z)
- Controllable Mixed-Initiative Dialogue Generation through Prompting [50.03458333265885]
Mixed-initiative dialogue tasks involve repeated exchanges of information and conversational control.
Agents gain control by generating responses that follow particular dialogue intents or strategies, prescribed by a policy planner.
The standard approach has been to fine-tune pre-trained language models to perform generation conditioned on these intents.
We instead prompt large language models as a drop-in replacement for fine-tuning on conditional generation.
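As a sketch of what intent-conditioned prompting might look like, the snippet below verbalises a planner intent and prepends it to the dialogue history in place of fine-tuning; the template and intent names are assumptions, not taken from the paper.

```python
# Assumed, minimal sketch of prompting an LLM for mixed-initiative dialogue:
# the policy planner's intent is verbalised and added to the dialogue history.
INTENT_DESCRIPTIONS = {
    "ask_clarifying_question": "Ask the user a clarifying question.",
    "provide_information": "Provide the information the user asked for.",
}


def build_prompt(history, intent):
    """history: list of (speaker, utterance); intent: key into the table above."""
    lines = [f"{speaker}: {utterance}" for speaker, utterance in history]
    instruction = INTENT_DESCRIPTIONS[intent]
    return (
        "You are a dialogue agent.\n"
        + "\n".join(lines)
        + f"\nInstruction: {instruction}\nAgent:"
    )


print(build_prompt([("User", "I keep getting error 403.")], "ask_clarifying_question"))
```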
arXiv Detail & Related papers (2023-05-06T23:11:25Z)
- STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension [42.57581945778631]
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing.
We propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization.
We show that our STRUDEL dialogue comprehension model can significantly improve the dialogue comprehension performance of transformer encoder language models.
arXiv Detail & Related papers (2022-12-24T04:39:54Z)
- Learning-by-Narrating: Narrative Pre-Training for Zero-Shot Dialogue Comprehension [48.483910831143724]
Comprehending a dialogue requires a model to capture diverse kinds of key information in the utterances.
We develop a novel narrative-guided pre-training strategy that learns by narrating the key information from a dialogue input.
arXiv Detail & Related papers (2022-03-19T05:20:25Z)
- Structural Pre-training for Dialogue Comprehension [51.215629336320305]
We present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue exclusive features.
To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives.
Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
arXiv Detail & Related papers (2021-05-23T15:16:54Z)
- TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
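A small sketch of what adding user and system tokens to masked language modelling could look like is given below; the token names follow common practice and the 15% masking rate is the usual BERT-style choice, neither of which is guaranteed to match the paper's exact recipe.

```python
# Assumption-based sketch: flatten a multi-turn dialogue with role tokens,
# then mask random content tokens for masked language modelling.
import random


def flatten_dialogue(turns):
    """turns: list of (speaker, utterance) with speaker in {"user", "system"}."""
    pieces = []
    for speaker, utterance in turns:
        pieces.append("[USR]" if speaker == "user" else "[SYS]")
        pieces.extend(utterance.split())
    return pieces


def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Randomly replace non-role tokens with [MASK]; return inputs and labels."""
    inputs, labels = [], []
    for tok in tokens:
        if tok not in ("[USR]", "[SYS]") and random.random() < mask_prob:
            inputs.append(mask_token)
            labels.append(tok)   # the model must recover the original token
        else:
            inputs.append(tok)
            labels.append("-")   # ignored position
    return inputs, labels


dialogue = [("user", "i need a cheap hotel"), ("system", "how about the alpha lodge")]
inp, lab = mask_tokens(flatten_dialogue(dialogue))
print(inp)
```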
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
Research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.