Continuity of Topic, Interaction, and Query: Learning to Quote in Online Conversations
- URL: http://arxiv.org/abs/2106.09896v1
- Date: Fri, 18 Jun 2021 03:38:48 GMT
- Title: Continuity of Topic, Interaction, and Query: Learning to Quote in Online Conversations
- Authors: Lingzhi Wang, Jing Li, Xingshan Zeng, Haisong Zhang, Kam-Fai Wong
- Abstract summary: This work studies automatic quotation generation in an online conversation.
An encoder-decoder neural framework is employed to continue the context with a quotation.
Experiments on two large-scale datasets in English and Chinese show that the model outperforms state-of-the-art baselines.
- Score: 23.214585012203084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quotations are crucial for successful explanations and persuasions in
interpersonal communications. However, finding what to quote in a conversation
is challenging for both humans and machines. This work studies automatic
quotation generation in an online conversation and explores how language
consistency affects whether a quotation fits the given context. Here, we
capture the contextual consistency of a quotation in terms of latent topics,
interactions with the dialogue history, and coherence to the query turn's
existing content. Further, an encoder-decoder neural framework is employed to
continue the context with a quotation via language generation. Experiment
results on two large-scale datasets in English and Chinese demonstrate that our
quotation generation model outperforms the state-of-the-art models. Further
analysis shows that topic, interaction, and query consistency are all helpful
to learn how to quote in online conversations.
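To make the setup concrete, below is a minimal sketch of an encoder-decoder quotation generator in the spirit of the abstract: a history encoder for interaction with the dialogue history, a query encoder for coherence to the query turn's existing content, and a latent-topic signal fused into the decoder's initial state. All module names, dimensions, and the fusion scheme are illustrative assumptions, not the authors' released architecture.

```python
# Minimal sketch of an encoder-decoder quotation generator (illustrative only).
# Module names, sizes, and the topic/interaction/query fusion are assumptions;
# they do not reproduce the paper's exact model.
import torch
import torch.nn as nn

class QuotationGenerator(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, num_topics=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # History encoder: captures interactions with the dialogue history.
        self.history_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Query encoder: models coherence to the query turn's existing content.
        self.query_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Latent topic signal, reduced here to a simple projection for brevity.
        self.topic_proj = nn.Linear(hid_dim, num_topics)
        # Fuse history, query, and topic into the decoder's initial state.
        self.fuse = nn.Linear(hid_dim * 2 + num_topics, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, history_ids, query_ids, quote_ids):
        _, h_hist = self.history_enc(self.embed(history_ids))
        _, h_query = self.query_enc(self.embed(query_ids))
        topic = torch.softmax(self.topic_proj(h_hist[-1]), dim=-1)
        init = torch.tanh(self.fuse(
            torch.cat([h_hist[-1], h_query[-1], topic], dim=-1)))
        # Decoder continues the context with the quotation tokens.
        dec_out, _ = self.decoder(self.embed(quote_ids), init.unsqueeze(0))
        return self.out(dec_out)  # per-position logits over the vocabulary
```

Training such a sketch would minimize token-level cross-entropy between the decoder logits and the reference quotation.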
Related papers
- Token Trails: Navigating Contextual Depths in Conversational AI with ChatLLM [0.5743699972363359]
Token Trails is a novel approach that leverages token-type embeddings to navigate the contextual nuances within conversations.
Our framework utilizes token-type embeddings to distinguish between user utterances and bot responses, facilitating the generation of context-aware replies (a brief illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-04-03T02:11:39Z)
- Multi-turn Dialogue Comprehension from a Topic-aware Perspective [70.37126956655985]
This paper proposes to model multi-turn dialogues from a topic-aware perspective.
We use a dialogue segmentation algorithm to split a dialogue passage into topic-concentrated fragments in an unsupervised way.
We also present a novel model, Topic-Aware Dual-Attention Matching (TADAM) Network, which takes topic segments as processing elements.
arXiv Detail & Related papers (2023-09-18T11:03:55Z)
- Neural Conversation Models and How to Rein Them in: A Survey of Failures and Fixes [17.489075240435348]
Recent conditional language models are able to continue any kind of text source in an often seemingly fluent way.
From a linguistic perspective, however, the complexity of contributing to a conversation is high.
Recent approaches try to tame the underlying language models at various intervention points.
arXiv Detail & Related papers (2023-08-11T12:07:45Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model [79.64376762489164]
PK-Chat is a pointer-network-guided generative dialogue model that incorporates a unified pretrained language model and a pointer network over knowledge graphs.
The words PK-Chat generates in the dialogue are drawn both from predicted word lists and directly from entities in the external knowledge graph.
Based on PK-Chat, a dialogue system is built for academic scenarios in the geosciences.
arXiv Detail & Related papers (2023-04-02T18:23:13Z)
- FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis [75.74906149219817]
Conversational Text-to-Speech (TTS) aims to synthesize an utterance with the right linguistic and affective prosody in a conversational context.
We propose a novel expressive conversational TTS model, termed FCTalker, that learns fine- and coarse-grained context dependency at the same time during speech generation.
arXiv Detail & Related papers (2022-10-27T12:20:20Z)
- ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets covering diverse online conversation forms: news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z)
- Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure [53.77234444565652]
We identify the responding relations in the conversation discourse, which link response utterances to their initiations.
We propose a model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links.
Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the art.
arXiv Detail & Related papers (2021-04-17T17:46:00Z)
- Online Conversation Disentanglement with Pointer Networks [13.063606578730449]
We propose an end-to-end online framework for conversation disentanglement.
We design a novel way to embed the whole utterance that comprises timestamp, speaker, and message text.
Our experiments on the Ubuntu IRC dataset show that our method achieves state-of-the-art performance in both link and conversation prediction tasks.
arXiv Detail & Related papers (2020-10-21T15:43:07Z)
- Fact-based Dialogue Generation with Convergent and Divergent Decoding [2.28438857884398]
This paper proposes an end-to-end fact-based dialogue system augmented with the ability of convergent and divergent thinking.
Our model incorporates a novel convergent and divergent decoding that can generate informative and diverse responses.
arXiv Detail & Related papers (2020-05-06T23:49:35Z)
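Referring back to the Token Trails entry above, the token-type embedding idea it summarizes (distinguishing user utterances from bot responses) can be sketched as follows. The class and parameter names, sizes, and example values here are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of token-type embeddings for dialogue context
# (assumed names and sizes; not the Token Trails implementation).
import torch
import torch.nn as nn

USER, BOT = 0, 1  # token-type ids marking who produced each token

class TypedDialogueEmbedding(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, num_types=2):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, emb_dim)
        # One learned vector per speaker role, added to every token it produced.
        self.type_embed = nn.Embedding(num_types, emb_dim)

    def forward(self, token_ids, type_ids):
        # token_ids, type_ids: (batch, seq_len); the sum lets downstream layers
        # tell user utterances from bot responses at every position.
        return self.token_embed(token_ids) + self.type_embed(type_ids)

# Example: a two-turn exchange flattened into one sequence.
tokens = torch.tensor([[12, 47, 5, 98, 33, 5]])
types = torch.tensor([[USER, USER, USER, BOT, BOT, BOT]])
embeddings = TypedDialogueEmbedding(vocab_size=1000)(tokens, types)
print(embeddings.shape)  # torch.Size([1, 6, 256])
```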