Human-like informative conversations: Better acknowledgements using
conditional mutual information
- URL: http://arxiv.org/abs/2104.07831v1
- Date: Fri, 16 Apr 2021 00:13:57 GMT
- Title: Human-like informative conversations: Better acknowledgements using
conditional mutual information
- Authors: Ashwin Paranjape (1), Christopher D. Manning (1) ((1) Stanford
University)
- Abstract summary: This work aims to build a dialogue agent that can weave new factual content into conversations as naturally as humans.
We draw insights from linguistic principles of conversational analysis and annotate human-human conversations from the Switchboard Dialog Act Corpus.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims to build a dialogue agent that can weave new factual content
into conversations as naturally as humans. We draw insights from linguistic
principles of conversational analysis and annotate human-human conversations
from the Switchboard Dialog Act Corpus to examine humans' strategies for
acknowledgement, transition, detail selection and presentation. When current
chatbots (explicitly provided with new factual content) introduce facts into a
conversation, their generated responses do not acknowledge the prior turns.
This is because models trained with two contexts - new factual content and
conversational history - generate responses that are non-specific w.r.t. one of
the contexts, typically the conversational history. We show that specificity
w.r.t. conversational history is better captured by Pointwise Conditional
Mutual Information ($\text{pcmi}_h$) than by the established use of Pointwise
Mutual Information ($\text{pmi}$). Our proposed method, Fused-PCMI, trades off
$\text{pmi}$ for $\text{pcmi}_h$ and is preferred by humans for overall quality
over the Max-PMI baseline 60% of the time. Human evaluators also judge
responses with higher $\text{pcmi}_h$ better at acknowledgement 74% of the
time. The results demonstrate that systems mimicking human conversational
traits (in this case acknowledgement) improve overall quality and more broadly
illustrate the utility of linguistic principles in improving dialogue agents.
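A minimal sketch of how these quantities might be estimated from a language model's log-probabilities is given below, assuming a candidate response $y$, conversational history $h$, and factual content $f$, with $\text{pmi} \approx \log p(y\mid h,f) - \log p(y)$ and $\text{pcmi}_h \approx \log p(y\mid h,f) - \log p(y\mid f)$. The function names, the log-probability interface, and the interpolation weight are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: estimating pmi and pcmi_h for a candidate response from a
# language model's log-probabilities, then re-ranking candidates by trading
# off pmi for pcmi_h in the spirit of Fused-PCMI. The function names, the
# LogProbFn interface, and the weight `lam` are assumptions for illustration,
# not the paper's actual implementation.
from typing import Callable, List

# log_p(response, context) -> log p(response | context); an empty context
# approximates the unconditional log p(response).
LogProbFn = Callable[[str, str], float]

def pmi(log_p: LogProbFn, y: str, history: str, facts: str) -> float:
    """pmi ~ log p(y | h, f) - log p(y): specificity w.r.t. both contexts."""
    return log_p(y, history + "\n" + facts) - log_p(y, "")

def pcmi_h(log_p: LogProbFn, y: str, history: str, facts: str) -> float:
    """pcmi_h ~ log p(y | h, f) - log p(y | f): specificity w.r.t. the
    conversational history, controlling for the factual content f."""
    return log_p(y, history + "\n" + facts) - log_p(y, facts)

def fused_rerank(log_p: LogProbFn, candidates: List[str],
                 history: str, facts: str, lam: float = 0.5) -> str:
    """Toy re-ranker that gives up some pmi in exchange for pcmi_h; the exact
    fusion rule in the paper may differ from this linear interpolation."""
    def score(y: str) -> float:
        return ((1.0 - lam) * pmi(log_p, y, history, facts)
                + lam * pcmi_h(log_p, y, history, facts))
    return max(candidates, key=score)
```

Under these definitions, a response can score a high $\text{pmi}$ simply by echoing the factual content, whereas $\text{pcmi}_h$ isolates how much the conversational history specifically raises the response's probability, which is what acknowledgement of prior turns requires.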
Related papers
- Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z)
- Self-Directed Turing Test for Large Language Models [56.64615470513102]
The Turing test examines whether AIs can exhibit human-like behaviour in natural language conversations.
Traditional Turing tests adopt a rigid dialogue format in which each participant sends only one message per turn.
This paper proposes the Self-Directed Turing Test, which extends the original test with a burst dialogue format.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- Let's Get Personal: Personal Questions Improve SocialBot Performance in the Alexa Prize [0.0]
There has been an increased focus on creating conversational open-domain dialogue systems in the spoken dialogue community.
Unlike traditional dialogue systems, these conversational systems cannot assume any specific information need or domain restrictions.
We developed a robust open-domain conversational system, Athena, that real Amazon Echo users access and evaluate at scale.
arXiv Detail & Related papers (2023-03-09T00:10:29Z)
- PLACES: Prompting Language Models for Social Conversation Synthesis [103.94325597273316]
We use a small set of expert-written conversations as in-context examples to synthesize a social conversation dataset using prompting.
We perform several thorough evaluations of our synthetic conversations compared to human-collected conversations.
arXiv Detail & Related papers (2023-02-07T05:48:16Z)
- Knowledge-Grounded Conversational Data Augmentation with Generative Conversational Networks [76.11480953550013]
We take a step towards automatically generating conversational data using Generative Conversational Networks.
We evaluate our approach on conversations with and without knowledge on the Topical Chat dataset.
arXiv Detail & Related papers (2022-07-22T22:37:14Z)
- End-to-end Spoken Conversational Question Answering: Task, Dataset and Model [92.18621726802726]
In spoken question answering, the systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling the systems to model complex dialogue flows.
Our main objective is to build a system that can handle conversational questions based on audio recordings, and to explore the plausibility of providing systems with additional cues from different modalities for information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z)
- Ditch the Gold Standard: Re-evaluating Conversational Question Answering [9.194536300785481]
We conduct the first large-scale human evaluation of state-of-the-art CQA systems.
We find that the distribution of human-machine conversations differs drastically from that of human-human conversations.
We propose a question rewriting mechanism based on predicted history, which better correlates with human judgments.
arXiv Detail & Related papers (2021-12-16T11:57:56Z)
- Know Deeper: Knowledge-Conversation Cyclic Utilization Mechanism for Open-domain Dialogue Generation [11.72386584395626]
End-to-End intelligent neural dialogue systems suffer from the problems of generating inconsistent and repetitive responses.
Existing dialogue models focus on unilaterally incorporating personal knowledge into the dialogue, ignoring that feeding personality-related conversation information back into the personal knowledge, as a bilateral information flow, boosts the quality of the subsequent conversation.
We propose a conversation-adaption multi-view persona-aware response generation model that aims to enhance conversation consistency and alleviate repetition in two ways.
arXiv Detail & Related papers (2021-07-16T08:59:06Z)
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z)
- Contextual Dialogue Act Classification for Open-Domain Conversational Agents [10.576497782941697]
Classifying the general intent of the user utterance in a conversation, also known as Dialogue Act (DA), is a key step in Natural Language Understanding (NLU) for conversational agents.
We propose CDAC (Contextual Dialogue Act), a simple yet effective deep learning approach for contextual dialogue act classification.
We use transfer learning to adapt models trained on human-human conversations to predict dialogue acts in human-machine dialogues.
arXiv Detail & Related papers (2020-05-28T06:48:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.