Improving Dialogue Discourse Parsing through Discourse-aware Utterance Clarification
- URL: http://arxiv.org/abs/2506.15081v1
- Date: Wed, 18 Jun 2025 02:47:14 GMT
- Title: Improving Dialogue Discourse Parsing through Discourse-aware Utterance Clarification
- Authors: Yaxin Fan, Peifeng Li, Qiaoming Zhu
- Abstract summary: We propose a Discourse-aware Clarification Module (DCM) to enhance the performance of the dialogue discourse parser. DCM employs two distinct reasoning processes: clarification type reasoning and discourse goal reasoning. CPO enables the parser to assess the contributions of the clarifications from DCM and provide feedback to optimize the DCM.
- Score: 14.879100851573998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dialogue discourse parsing aims to identify and analyze discourse relations between the utterances within dialogues. However, linguistic features in dialogues, such as omission and idiom, frequently introduce ambiguities that obscure the intended discourse relations, posing significant challenges for parsers. To address this issue, we propose a Discourse-aware Clarification Module (DCM) to enhance the performance of the dialogue discourse parser. DCM employs two distinct reasoning processes: clarification type reasoning and discourse goal reasoning. The former analyzes linguistic features, while the latter distinguishes the intended relation from the ambiguous one. Furthermore, we introduce Contribution-aware Preference Optimization (CPO) to mitigate the risk of erroneous clarifications, thereby reducing cascading errors. CPO enables the parser to assess the contributions of the clarifications from DCM and provide feedback to optimize the DCM, enhancing its adaptability and alignment with the parser's requirements. Extensive experiments on the STAC and Molweni datasets demonstrate that our approach effectively resolves ambiguities and significantly outperforms the state-of-the-art (SOTA) baselines.
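The abstract's clarify-then-parse loop can be illustrated with a minimal sketch. This is not the authors' code: `clarify`, `parse_relation`, and the toy expansion table are all hypothetical stand-ins for the DCM and the discourse parser, and the "contribution" score is a simplified proxy for the CPO feedback signal (the parser's confidence gain when it sees the clarified utterance instead of the ambiguous one).

```python
# Illustrative sketch only: toy stand-ins for the DCM, the discourse parser,
# and a CPO-style contribution signal described in the abstract.

def clarify(utterance: str) -> str:
    """Hypothetical DCM stand-in: expand a known elliptical utterance."""
    expansions = {"me too": "I also agree with the previous suggestion"}
    return expansions.get(utterance.lower(), utterance)

def parse_relation(prev: str, curr: str) -> tuple[str, float]:
    """Hypothetical parser stand-in: return (relation label, confidence)."""
    if "agree" in curr.lower():
        return "Acknowledgement", 0.9
    return "Comment", 0.4

def contribution(prev: str, curr: str) -> float:
    """CPO-style signal: confidence gain from using the clarification."""
    _, base = parse_relation(prev, curr)
    _, clarified = parse_relation(prev, clarify(curr))
    return clarified - base

# The elliptical "me too" is ambiguous; its clarification raises the
# parser's confidence, yielding a positive contribution score.
print(round(contribution("Let's meet at noon.", "me too"), 2))
```

In the paper this signal is used to optimize the DCM via preference learning; here it merely shows why a clarification that helps the parser receives positive feedback.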
Related papers
- On Mitigating Data Sparsity in Conversational Recommender Systems [69.70761335240738]
Conversational recommender systems (CRSs) capture user preference through textual information in dialogues. They suffer from data sparsity on two fronts: the dialogue space is vast and linguistically diverse, while the item space exhibits long-tail and sparse distributions. Existing methods struggle with (1) generalizing to varied dialogue expressions due to underutilization of rich textual cues, and (2) learning informative item representations under severe sparsity.
arXiv Detail & Related papers (2025-07-01T06:54:51Z) - Enhancing Dialogue Systems with Discourse-Level Understanding Using Deep Canonical Correlation Analysis [0.0]
We propose a novel framework that integrates Deep Canonical Correlation Analysis for discourse-level understanding. This framework learns discourse tokens to capture relationships between utterances and their surrounding context. Experiments on the Ubuntu Dialogue Corpus demonstrate significant enhancement in response selection.
arXiv Detail & Related papers (2025-04-12T06:19:08Z) - Evaluating Task-Oriented Dialogue Consistency through Constraint Satisfaction [1.4272411349249625]
We propose to conceptualize dialogue consistency as a Constraint Satisfaction Problem (CSP).
We utilize a CSP solver to detect inconsistencies in dialogues re-lexicalized by an LLM.
We argue that CSP captures core properties of dialogue consistency that have been poorly considered by approaches based on component pipelines.
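The CSP framing above can be sketched in a few lines. This is an assumed formulation, not the paper's code: the slots, domains, and constraints are hypothetical, and the brute-force search stands in for a real CSP solver. A dialogue is flagged inconsistent when no assignment satisfies every constraint asserted across its turns.

```python
# Illustrative sketch (assumed formulation): dialogue slots as CSP variables,
# turn-level assertions as constraints, brute force in place of a real solver.
from itertools import product

# Hypothetical slots and candidate values extracted from the dialogue turns.
domains = {
    "area":  ["centre", "north"],
    "price": ["cheap", "expensive"],
}

# Hypothetical constraints asserted by the turns and the backend KB.
constraints = [
    lambda a: a["area"] == "centre",           # user asked for the centre
    lambda a: a["price"] == "cheap",           # user asked for cheap food
    lambda a: not (a["area"] == "centre"       # KB: no cheap venue exists
                   and a["price"] == "cheap"), # in the centre
]

def satisfiable(domains, constraints):
    """Return True iff some assignment satisfies every constraint."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            return True
    return False

# No assignment satisfies all three constraints: the dialogue is inconsistent.
print(satisfiable(domains, constraints))
```

A production system would hand the same variables and constraints to a dedicated CSP solver rather than enumerating assignments.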
arXiv Detail & Related papers (2024-07-16T15:38:41Z) - Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z) - Dialogue Inspectional Summarization with Factual Inconsistency Awareness [34.97845384948336]
We investigate the factual inconsistency problem for Dialogue Inspectional Summarization (DIS) under non-pretraining and pretraining settings.
An innovative end-to-end dialogue summary generation framework is proposed with two auxiliary tasks.
Comprehensive experiments demonstrate that the proposed model can generate a more readable summary with accurate coverage of factual aspects.
arXiv Detail & Related papers (2021-11-05T06:26:22Z) - Improving Multi-Party Dialogue Discourse Parsing via Domain Integration [25.805553277418813]
Multi-party conversations are implicitly organized by semantic-level correlations across the interactive turns.
Dialogue discourse analysis can be applied to predict the dependency structure and relations between the elementary discourse units.
Existing corpora with dialogue discourse annotation are collected from specific domains with limited sample sizes.
arXiv Detail & Related papers (2021-10-09T09:36:22Z) - Structural Pre-training for Dialogue Comprehension [51.215629336320305]
We present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue exclusive features.
To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives.
Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
arXiv Detail & Related papers (2021-05-23T15:16:54Z) - I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling [104.09033240889106]
We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.
We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach.
arXiv Detail & Related papers (2020-12-24T18:47:49Z) - Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z) - Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually with reasoning over dialogue turns with the help of the back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z) - Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.