Like hiking? You probably enjoy nature: Persona-grounded Dialog with
Commonsense Expansions
- URL: http://arxiv.org/abs/2010.03205v1
- Date: Wed, 7 Oct 2020 06:25:39 GMT
- Title: Like hiking? You probably enjoy nature: Persona-grounded Dialog with
Commonsense Expansions
- Authors: Bodhisattwa Prasad Majumder, Harsh Jhamtani, Taylor Berg-Kirkpatrick,
Julian McAuley
- Abstract summary: Existing persona-grounded dialog models often fail to capture simple implications of given persona descriptions.
We propose to expand available persona sentences using existing commonsense knowledge bases and paraphrasing resources.
We also introduce fine-grained grounding on personas by encouraging the model to make a discrete choice among persona sentences.
- Score: 37.15893335147598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing persona-grounded dialog models often fail to capture simple
implications of given persona descriptions, something which humans are able to
do seamlessly. For example, state-of-the-art models cannot infer that interest
in hiking might imply love for nature or longing for a break. In this paper, we
propose to expand available persona sentences using existing commonsense
knowledge bases and paraphrasing resources to imbue dialog models with access
to an expanded and richer set of persona descriptions. Additionally, we
introduce fine-grained grounding on personas by encouraging the model to make a
discrete choice among persona sentences while synthesizing a dialog response.
Since such a choice is not observed in the data, we model it using a discrete
latent random variable and use variational learning to sample from hundreds of
persona expansions. Our model outperforms competitive baselines on the
PersonaChat dataset in terms of dialog quality and diversity while achieving
persona-consistent and controllable dialog generation.
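To make the abstract's two ideas concrete, here is a minimal, self-contained Python sketch: (1) expanding persona sentences with commonsense implications and (2) treating the choice of which sentence to ground a response on as a discrete latent variable. The expansion table and bag-of-words relevance scoring below are illustrative stand-ins, not the paper's COMET-based expansions, paraphrasing resources, or variational training objective.

```python
# Sketch of the paper's two ideas: (1) expand each persona sentence with
# commonsense implications, (2) make a discrete choice among the expanded
# sentences when grounding a response. All components are toy stand-ins.

import math
import random
from typing import Dict, List

# (1) Hypothetical commonsense expansions (in the paper these come from a
# knowledge model such as COMET and from paraphrasing resources).
EXPANSIONS: Dict[str, List[str]] = {
    "I like hiking.": [
        "I enjoy nature.",
        "I want to take a break.",
        "I own hiking boots.",
    ],
}

def expand_persona(persona: List[str]) -> List[str]:
    """Return the original persona sentences plus their commonsense expansions."""
    expanded = list(persona)
    for sentence in persona:
        expanded.extend(EXPANSIONS.get(sentence, []))
    return expanded

# (2) Discrete latent choice over persona sentences. The relevance score here
# is a toy word-overlap heuristic; the paper instead learns this distribution
# with variational inference.
def relevance(sentence: str, dialog_history: str) -> float:
    overlap = set(sentence.lower().split()) & set(dialog_history.lower().split())
    return float(len(overlap))

def sample_persona_sentence(expanded: List[str], dialog_history: str) -> str:
    scores = [relevance(s, dialog_history) for s in expanded]
    total = sum(math.exp(x) for x in scores)
    probs = [math.exp(x) / total for x in scores]
    # Sample one sentence; the response would then be conditioned on this choice.
    return random.choices(expanded, weights=probs, k=1)[0]

if __name__ == "__main__":
    persona = ["I like hiking."]
    history = "Do you enjoy spending time in nature ?"
    expanded = expand_persona(persona)
    chosen = sample_persona_sentence(expanded, history)
    print("Expanded persona:", expanded)
    print("Grounding choice:", chosen)
```

In the paper, the distribution over persona expansions is learned with variational inference over hundreds of candidates rather than the fixed overlap heuristic used here.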
Related papers
- Using Natural Language Inference to Improve Persona Extraction from Dialogue in a New Domain [44.05974724495336]
We introduce a natural language inference method for adapting a trained persona extraction model to a new setting.
Our method returns higher-quality extracted personas and requires less human annotation.
arXiv Detail & Related papers (2024-01-12T18:25:03Z)
- Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney [28.39697076030535]
This paper analyzes the dynamics of the user prompts along such iterations.
We show that prompts predictably converge toward specific traits along these iterations.
The possibility that users adapt to the model's preference raises concerns about reusing user data for further training.
arXiv Detail & Related papers (2023-11-20T19:28:52Z)
- Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems [103.416202777731]
We study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt.
We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement.
arXiv Detail & Related papers (2023-10-08T21:03:18Z)
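As a rough illustration of the sensitivity measurement described in the entry above: for each aspect, one can compare a model's average harm score across the personas it adopts. The harm scorer and response generator below are hypothetical placeholders for the paper's classifiers and dialogue models; only the aggregation logic is sketched.

```python
# Toy sketch of measuring "persona bias": how much harmful behavior shifts
# depending on the adopted persona, per evaluation aspect.

from statistics import mean
from typing import Callable, Dict, List

ASPECTS = ["Offensiveness", "Toxic Continuation", "Regard",
           "Stereotype Agreement", "Toxic Agreement"]

def persona_sensitivity(
    personas: List[str],
    prompts: List[str],
    generate: Callable[[str, str], str],   # (persona, prompt) -> response
    score: Callable[[str, str], float],    # (aspect, response) -> harm score in [0, 1]
) -> Dict[str, float]:
    """For each aspect, report the spread of mean harm scores across personas."""
    sensitivity = {}
    for aspect in ASPECTS:
        per_persona = [
            mean(score(aspect, generate(p, q)) for q in prompts) for p in personas
        ]
        # A large max-min gap means harmful behavior is contingent on the persona.
        sensitivity[aspect] = max(per_persona) - min(per_persona)
    return sensitivity
```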
- Enhancing Personalized Dialogue Generation with Contrastive Latent Variables: Combining Sparse and Dense Persona [16.90863217077699]
Existing personalized dialogue agents model persona profiles from three resources: sparse persona descriptions, dense persona descriptions, and dialogue histories.
We combine the advantages of the three resources to obtain a richer and more accurate persona.
Experimental results on Chinese and English datasets demonstrate our model's superiority in personalization.
arXiv Detail & Related papers (2023-05-19T07:24:27Z)
- A Model-Agnostic Data Manipulation Method for Persona-based Dialogue Generation [107.82729587882397]
It is expensive to scale up current persona-based dialogue datasets.
Each data sample in this task is also harder to learn from than conventional dialogue data.
We propose a data manipulation method that is model-agnostic and can be combined with any persona-based dialogue generation model.
arXiv Detail & Related papers (2022-04-21T03:49:54Z)
- DLVGen: A Dual Latent Variable Approach to Personalized Dialogue Generation [28.721411816698563]
We propose a Dual Latent Variable Generator (DLVGen) capable of generating personalized dialogue.
Unlike prior work, DLVGen models the latent distribution over potential responses as well as the latent distribution over the agent's potential persona.
Empirical results show that DLVGen is capable of generating diverse responses which accurately incorporate the agent's persona.
arXiv Detail & Related papers (2021-11-22T17:21:21Z)
- Unsupervised Enrichment of Persona-grounded Dialog with Background Stories [27.52543925693796]
We equip dialog models with 'background stories' related to a persona by leveraging fictional narratives from existing story datasets.
We perform an unsupervised adaptation of a retrieved story for generating a dialog response using a gradient-based rewriting technique.
Our method can generate responses that are more diverse, and are rated more engaging and human-like by human evaluators.
arXiv Detail & Related papers (2021-06-15T18:20:27Z)
- Revealing Persona Biases in Dialogue Systems [64.96908171646808]
We present the first large-scale study on persona biases in dialogue systems.
We conduct analyses on personas of different social classes, sexual orientations, races, and genders.
In our studies of the Blender and DialoGPT dialogue systems, we show that the choice of personas can affect the degree of harms in generated responses.
arXiv Detail & Related papers (2021-04-18T05:44:41Z)
- The Adapter-Bot: All-In-One Controllable Conversational Model [66.48164003532484]
We propose a dialogue model that uses a fixed backbone model such as DialoGPT and triggers on-demand dialogue skills via different adapters.
Depending on the skills, the model is able to process multiple knowledge types, such as text, tables, and empathetic responses.
We evaluate our model with automatic metrics, comparing it against existing state-of-the-art conversational models.
arXiv Detail & Related papers (2020-08-28T10:59:31Z)
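A rough sketch of the adapter-routing idea from the Adapter-Bot entry above: a frozen backbone plus one lightweight adapter per dialogue skill, selected on demand by a skill classifier. The backbone, adapters, and classifier here are hypothetical placeholders, not the paper's actual modules.

```python
# Toy adapter router: pick a per-skill adapter, keep the backbone frozen.

from typing import Callable, Dict

class AdapterRouter:
    def __init__(
        self,
        backbone: Callable[[object, str], str],   # (adapter, context) -> response
        adapters: Dict[str, object],              # skill name -> adapter weights
        classify_skill: Callable[[str], str],     # context -> predicted skill name
        default_skill: str = "chitchat",
    ):
        self.backbone = backbone
        self.adapters = adapters
        self.classify_skill = classify_skill
        self.default_skill = default_skill

    def respond(self, context: str) -> str:
        skill = self.classify_skill(context)
        # Activate only the adapter for the predicted skill; fall back if unknown.
        adapter = self.adapters.get(skill, self.adapters[self.default_skill])
        return self.backbone(adapter, context)
```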
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, can keep dialogue agents from uttering contradictions.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
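To make the "imaginary listener" idea in the last entry concrete, here is a toy Rational-Speech-Acts-style reranker: candidate responses are preferred when a literal listener could recover the speaker's true persona from them. The literal listener below is a stand-in scoring function, not the paper's model.

```python
# Toy RSA-style reranking: prefer responses under which the true persona is
# most identifiable among distractor personas.

import math
from typing import Callable, List

def pragmatic_rerank(
    candidates: List[str],
    persona: str,
    distractors: List[str],
    literal_listener: Callable[[str, str], float],  # (response, persona) -> compatibility
    alpha: float = 1.0,
) -> str:
    """Pick the candidate under which the true persona is most identifiable."""
    personas = [persona] + distractors
    best, best_score = candidates[0], float("-inf")
    for response in candidates:
        scores = [literal_listener(response, p) for p in personas]
        # Listener posterior over personas given the response (softmax over scores).
        log_z = math.log(sum(math.exp(alpha * s) for s in scores))
        log_posterior_true = alpha * scores[0] - log_z
        if log_posterior_true > best_score:
            best, best_score = response, log_posterior_true
    return best
```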