CREAD: Combined Resolution of Ellipses and Anaphora in Dialogues
- URL: http://arxiv.org/abs/2105.09914v1
- Date: Thu, 20 May 2021 17:17:26 GMT
- Authors: Bo-Hsiang Tseng, Shruti Bhargava, Jiarui Lu, Joel Ruben Antony Moniz,
Dhivya Piraviperumal, Lin Li, Hong Yu
- Abstract summary: Anaphora and ellipses are two common phenomena in dialogues.
Traditionally, anaphora is resolved by coreference resolution and ellipses by query rewrite.
We propose a novel joint learning framework of modeling coreference resolution and query rewriting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anaphora and ellipses are two common phenomena in dialogues. Without
resolving referring expressions and information omission, dialogue systems may
fail to generate consistent and coherent responses. Traditionally, anaphora is
resolved by coreference resolution and ellipses by query rewrite. In this work,
we propose a novel joint learning framework of modeling coreference resolution
and query rewriting for complex, multi-turn dialogue understanding. Given an
ongoing dialogue between a user and a dialogue assistant, our joint learning
model first predicts coreference links between the user query and the dialogue
context, and then generates a self-contained rewritten user query.
To evaluate our model, we annotate a dialogue-based coreference resolution
dataset, MuDoCo, with rewritten queries. Results show that the performance of
query rewrite can be substantially boosted (+2.3% F1) with the aid of
coreference modeling. Furthermore, our joint model outperforms the
state-of-the-art coreference resolution model (+2% F1) on this dataset.
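The two-step pipeline described in the abstract (first predict coreference links between the query and the dialogue context, then generate a self-contained rewrite) can be illustrated with a toy substitution sketch. The function name, link format, and example below are invented for illustration and stand in for the paper's actual neural joint model.

```python
# Toy illustration (not the paper's model): given predicted coreference
# links between mentions in the user query and antecedent phrases in the
# dialogue context, produce a self-contained rewritten query by substitution.
# The `rewrite_query` name and the {token index -> antecedent tokens} link
# format are hypothetical.

def rewrite_query(query_tokens, links):
    """links: dict mapping a query token index to antecedent phrase tokens."""
    out = []
    for i, tok in enumerate(query_tokens):
        # Substitute the antecedent phrase if this token is linked,
        # otherwise keep the original token.
        out.extend(links.get(i, [tok]))
    return " ".join(out)

# Dialogue context: "User: Call John Smith. Assistant: Calling John Smith."
query = ["What", "is", "his", "number", "?"]
# A coreference model would predict that "his" (index 2) links to
# "John Smith" in the context; here the link is supplied by hand.
links = {2: ["John", "Smith's"]}
print(rewrite_query(query, links))
# -> What is John Smith's number ?
```

The point of the joint framework is that these links are not hand-supplied: the coreference component predicts them, and the rewrite component conditions on them to generate the self-contained query.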
Related papers
- Interactive Text-to-Image Retrieval with Large Language Models: A Plug-and-Play Approach
In this paper, we address the issue of dialogue-form context query within the interactive text-to-image retrieval task.
By reformulating the dialogue-form context, we eliminate the necessity of fine-tuning a retrieval model on existing visual dialogue data.
We construct the LLM questioner to generate non-redundant questions about the attributes of the target image.
Published: 2024-06-05
- Instructive Dialogue Summarization with Query Aggregations
We introduce instruction-finetuned language models to expand the capability set of dialogue summarization models.
We propose a three-step approach to synthesize high-quality query-based summarization triples.
By training a unified model called InstructDS on three summarization datasets with multi-purpose instructive triples, we expand the capability of dialogue summarization models.
Published: 2023-10-17
- Manual-Guided Dialogue for Flexible Conversational Agents
Building and using dialogue data efficiently, and deploying models across different domains at scale, are critical issues in building a task-oriented dialogue system.
We propose a novel manual-guided dialogue scheme, where the agent learns the tasks from both dialogue and manuals.
Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontology, and makes them more flexible to adapt to various domains.
Published: 2022-08-16
- Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification: the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
Published: 2021-03-17
- Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
Published: 2020-12-14
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
Published: 2020-06-17
- Matching Questions and Answers in Dialogues from Online Forums
Matching question-answer relations between two turns in conversations is not only the first step in analyzing dialogue structures, but also valuable for training dialogue systems.
This paper presents a QA matching model considering both distance information and dialogue history by two simultaneous attention mechanisms called mutual attention.
Published: 2020-05-19
- Dialogue-Based Relation Extraction
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
Published: 2020-04-17
- Modality-Balanced Models for Visual Dialogue
The Visual Dialog task requires a model to exploit both image and conversational context information to generate the next response to the dialogue.
We show that previous joint-modality (history and image) models over-rely on, and are more prone to memorizing, the dialogue history.
We present methods for this integration of the two models, via ensemble and consensus dropout fusion with shared parameters.
Published: 2020-01-17
This list is automatically generated from the titles and abstracts of the papers on this site; its accuracy and quality are not guaranteed.