Dialogue Inspectional Summarization with Factual Inconsistency Awareness
- URL: http://arxiv.org/abs/2111.03284v1
- Date: Fri, 5 Nov 2021 06:26:22 GMT
- Title: Dialogue Inspectional Summarization with Factual Inconsistency Awareness
- Authors: Leilei Gan, Yating Zhang, Kun Kuang, Lin Yuan, Shuo Li, Changlong Sun,
Xiaozhong Liu, Fei Wu
- Abstract summary: We investigate the factual inconsistency problem for Dialogue Inspectional Summarization (DIS) under non-pretraining and pretraining settings.
An innovative end-to-end dialogue summary generation framework is proposed with two auxiliary tasks.
Comprehensive experiments demonstrate that the proposed model can generate a more readable summary with accurate coverage of factual aspects.
- Score: 34.97845384948336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue summarization has been extensively studied and applied, where the
prior works mainly focused on exploring superior model structures to align the
input dialogue and the output summary. However, for professional dialogues
(e.g., legal debate and medical diagnosis), semantic/statistical alignment alone
can hardly fill the logical/factual gap between the input dialogue discourse and
a summary output that draws on external knowledge. In this paper, we mainly investigate
the factual inconsistency problem for Dialogue Inspectional Summarization (DIS)
under non-pretraining and pretraining settings. An innovative end-to-end
dialogue summary generation framework is proposed with two auxiliary tasks:
Expectant Factual Aspect Regularization (EFAR) and Missing Factual Entity
Discrimination (MFED). Comprehensive experiments demonstrate that the proposed
model can generate a more readable summary with accurate coverage of factual
aspects, while also informing the user of potential missing facts detected
in the input dialogue for further human intervention.
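To make the framework concrete, below is a minimal sketch of how a token-level summarization loss could be combined with EFAR-style and MFED-style auxiliary objectives in a multi-task setup. The module names, label formats, and loss weights are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a multi-task objective for dialogue summarization with
# factual-consistency auxiliary tasks. All module names, label formats, and
# loss weights are illustrative assumptions, not the paper's implementation.
import torch.nn as nn
import torch.nn.functional as F

class DISMultiTaskObjective(nn.Module):
    def __init__(self, hidden_size: int, num_aspects: int,
                 lambda_efar: float = 0.5, lambda_mfed: float = 0.5):
        super().__init__()
        # EFAR-style head: predicts which expected factual aspects the summary should cover.
        self.aspect_head = nn.Linear(hidden_size, num_aspects)
        # MFED-style head: per-entity binary decision, "covered in summary" vs. "missing".
        self.entity_head = nn.Linear(hidden_size, 2)
        self.lambda_efar = lambda_efar
        self.lambda_mfed = lambda_mfed

    def forward(self, gen_logits, summary_ids, dialogue_repr, entity_repr,
                aspect_labels, entity_labels, pad_id: int = 0):
        # 1) Standard token-level cross-entropy for summary generation.
        gen_loss = F.cross_entropy(
            gen_logits.reshape(-1, gen_logits.size(-1)),
            summary_ids.reshape(-1),
            ignore_index=pad_id,
        )
        # 2) EFAR-style aspect regularization: multi-label BCE over factual aspects.
        aspect_logits = self.aspect_head(dialogue_repr)      # (batch, num_aspects)
        efar_loss = F.binary_cross_entropy_with_logits(aspect_logits, aspect_labels.float())
        # 3) MFED-style discrimination: classify each candidate entity as covered/missing.
        entity_logits = self.entity_head(entity_repr)        # (batch, num_entities, 2)
        mfed_loss = F.cross_entropy(entity_logits.reshape(-1, 2), entity_labels.reshape(-1))
        # Total objective: generation loss plus weighted auxiliary losses.
        return gen_loss + self.lambda_efar * efar_loss + self.lambda_mfed * mfed_loss
```

In this reading, EFAR is treated as multi-label aspect-coverage prediction and MFED as per-entity covered/missing classification; the paper's actual definitions of the two auxiliary tasks may differ.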
Related papers
- STRUDEL: Structured Dialogue Summarization for Dialogue Comprehension [42.57581945778631]
Abstractive dialogue summarization has long been viewed as an important standalone task in natural language processing.
We propose a novel type of dialogue summarization task - STRUctured DiaLoguE Summarization.
We show that our STRUDEL dialogue comprehension model can significantly improve the dialogue comprehension performance of transformer encoder language models.
arXiv Detail & Related papers (2022-12-24T04:39:54Z)
- Analyzing and Evaluating Faithfulness in Dialogue Summarization [67.07947198421421]
We first perform a fine-grained human analysis of the faithfulness of dialogue summaries and observe that over 35% of generated summaries are factually inconsistent with respect to the source dialogues.
We present a new model-level faithfulness evaluation method. It examines generation models with multi-choice questions created by rule-based transformations.
arXiv Detail & Related papers (2022-10-21T07:22:43Z)
- Taxonomy of Abstractive Dialogue Summarization: Scenarios, Approaches and Future Directions [14.85592662663867]
This survey provides a comprehensive investigation of existing work on abstractive dialogue summarization from the perspective of input scenarios.
It categorizes the task into two broad categories according to the type of input dialogues, i.e., open-domain and task-oriented.
It presents a taxonomy of existing techniques in three directions, namely, injecting dialogue features, designing auxiliary training tasks and using additional data.
arXiv Detail & Related papers (2022-10-18T14:33:03Z)
- Enhancing Semantic Understanding with Self-supervised Methods for Abstractive Dialogue Summarization [4.226093500082746]
We introduce self-supervised methods to compensate for shortcomings in training a dialogue summarization model.
Our principle is to detect incoherent information flows in dialogue text as a pretext task, enhancing BERT's ability to contextualize dialogue representations.
arXiv Detail & Related papers (2022-09-01T07:51:46Z)
- Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks [95.29345070102045]
In this paper, we focus our investigation on social bias detection of dialog safety problems.
We first propose a novel Dial-Bias Frame for analyzing the social bias in conversations pragmatically.
We introduce CDail-Bias, the first well-annotated Chinese social bias dialog dataset.
arXiv Detail & Related papers (2022-02-16T11:59:29Z)
- TODSum: Task-Oriented Dialogue Summarization with State Tracking [16.87549093925514]
We introduce a large-scale public Task-Oriented Dialogue Summarization dataset, TODSum.
Compared to existing work, TODSum suffers from severe scattered information issues and requires strict factual consistency.
We propose a state-aware structured dialogue summarization model to integrate dialogue state information and dialogue history.
arXiv Detail & Related papers (2021-10-25T06:53:11Z)
- Structural Pre-training for Dialogue Comprehension [51.215629336320305]
We present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue exclusive features.
To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives.
Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.
arXiv Detail & Related papers (2021-05-23T15:16:54Z)
- I like fish, especially dolphins: Addressing Contradictions in Dialogue Modeling [104.09033240889106]
We introduce the DialoguE COntradiction DEtection task (DECODE) and a new conversational dataset containing both human-human and human-bot contradictory dialogues.
We then compare a structured utterance-based approach of using pre-trained Transformer models for contradiction detection with the typical unstructured approach.
arXiv Detail & Related papers (2020-12-24T18:47:49Z)
- Fact-based Dialogue Generation with Convergent and Divergent Decoding [2.28438857884398]
This paper proposes an end-to-end fact-based dialogue system augmented with the ability of convergent and divergent thinking.
Our model incorporates a novel convergent and divergent decoding that can generate informative and diverse responses.
arXiv Detail & Related papers (2020-05-06T23:49:35Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning [50.5572111079898]
Multi-role dialogue understanding comprises a wide range of diverse tasks such as question answering, act classification, dialogue summarization etc.
While dialogue corpora are abundantly available, labeled data for specific learning tasks can be highly scarce and expensive.
In this work, we investigate dialogue context representation learning with various types of unsupervised pretraining tasks.
arXiv Detail & Related papers (2020-02-27T04:36:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.