Topic-Aware Multi-turn Dialogue Modeling
- URL: http://arxiv.org/abs/2009.12539v2
- Date: Thu, 17 Dec 2020 05:46:27 GMT
- Title: Topic-Aware Multi-turn Dialogue Modeling
- Authors: Yi Xu, Hai Zhao, Zhuosheng Zhang
- Abstract summary: This paper presents a novel solution for multi-turn dialogue modeling, which segments and extracts topic-aware utterances in an unsupervised way.
Our topic-aware modeling is implemented by a newly proposed unsupervised topic-aware segmentation algorithm and Topic-Aware Dual-attention Matching (TADAM) Network.
- Score: 91.52820664879432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In retrieval-based multi-turn dialogue modeling, it remains a challenge
to select the most appropriate response based on the salient features extracted
from the context utterances. As a conversation goes on, discourse-level topic
shifts naturally arise in the continuous multi-turn dialogue context. However,
existing retrieval-based systems settle for exploiting local topic words to
represent context utterances and fail to capture these essential global,
discourse-level topic-aware clues. Instead of taking topic-agnostic n-gram
utterances as the processing units for matching, as existing systems do, this
paper presents a novel topic-aware solution for multi-turn dialogue modeling
that segments and extracts topic-aware utterances in an unsupervised way, so
that the resulting model can capture salient discourse-level topic shifts as
needed and thus effectively track the topic flow of a multi-turn conversation.
Our topic-aware modeling is implemented by a newly proposed unsupervised
topic-aware segmentation algorithm and a Topic-Aware Dual-attention Matching
(TADAM) Network, which matches each topic segment with the candidate response
via dual cross-attention. Experimental results on three public datasets show
that TADAM outperforms the state-of-the-art method, especially by 3.3% on the
E-commerce dataset, which exhibits obvious topic shifts.
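The segmentation component can be pictured with a small sketch. Below is a minimal, TextTiling-style illustration of unsupervised dialogue topic segmentation over generic sentence embeddings; the encoder choice (sentence-transformers), the depth-score formulation, and the threshold are assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal TextTiling-style sketch of unsupervised dialogue topic segmentation.
# NOTE: an illustration, not the paper's algorithm; the embedding model and
# the depth-score threshold below are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed encoder

def segment_dialogue(utterances, threshold=0.1):
    """Split a list of utterance strings into topic-concentrated segments."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    emb = encoder.encode(utterances)                       # (n, d)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    # Cosine similarity between each pair of adjacent utterances.
    sims = (emb[:-1] * emb[1:]).sum(axis=1)                # (n - 1,)
    # Depth score: how far each similarity valley dips below the highest
    # peaks to its left and right; deep valleys suggest a topic boundary.
    boundaries = []
    for i in range(1, len(sims) - 1):
        depth = (sims[:i].max() - sims[i]) + (sims[i + 1:].max() - sims[i])
        if depth > threshold:
            boundaries.append(i + 1)  # cut before utterance i + 1
    # Slice the dialogue at the detected boundaries.
    segments, start = [], 0
    for b in boundaries:
        segments.append(utterances[start:b])
        start = b
    segments.append(utterances[start:])
    return segments
```

Calling segment_dialogue on the raw utterance list yields topic-concentrated segments that downstream matching can treat as processing units.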
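The matching component then pairs each such segment with the candidate response via dual cross-attention. The PyTorch sketch below is a hedged reading of that idea; the hidden size, mean pooling, and linear scorer are illustrative assumptions, not the published TADAM architecture.

```python
# Hedged sketch of dual cross-attention matching between one topic segment
# and a candidate response; shapes, pooling, and scorer are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttentionMatcher(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)  # scores the fused representation

    def cross_attend(self, q, k):
        # q: (Lq, d), k: (Lk, d) -> (Lq, d); each q token attends over k.
        attn = F.softmax(q @ k.T / q.size(-1) ** 0.5, dim=-1)
        return attn @ k

    def forward(self, segment, response):
        # segment: (Ls, d) token states of one topic segment
        # response: (Lr, d) token states of the candidate response
        seg2res = self.cross_attend(segment, response).mean(dim=0)  # (d,)
        res2seg = self.cross_attend(response, segment).mean(dim=0)  # (d,)
        fused = torch.cat([seg2res, res2seg], dim=-1)               # (2d,)
        return self.score(fused)                                    # matching logit
```

Per-segment logits would then be aggregated, e.g. by max pooling, into a single score for each (context, response) pair.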
Related papers
- Multi-turn Dialogue Comprehension from a Topic-aware Perspective [70.37126956655985]
This paper proposes to model multi-turn dialogues from a topic-aware perspective.
We use a dialogue segmentation algorithm to split a dialogue passage into topic-concentrated fragments in an unsupervised way.
We also present a novel model, Topic-Aware Dual-Attention Matching (TADAM) Network, which takes topic segments as processing elements.
arXiv Detail & Related papers (2023-09-18T11:03:55Z)
- Multi-Granularity Prompts for Topic Shift Detection in Dialogue [13.739991183173494]
The goal of dialogue topic shift detection is to identify whether the current topic in a conversation has changed or needs to change.
Previous work focused on detecting topic shifts using pre-trained models to encode the utterance.
We take a prompt-based approach to fully extract topic information from dialogues at multiple granularities, i.e., label, turn, and topic.
arXiv Detail & Related papers (2023-05-23T12:35:49Z)
- Unsupervised Dialogue Topic Segmentation with Topic-aware Utterance Representation [51.22712675266523]
Dialogue Topic Segmentation (DTS) plays an essential role in a variety of dialogue modeling tasks.
We propose a novel unsupervised DTS framework, which learns topic-aware utterance representations from unlabeled dialogue data.
arXiv Detail & Related papers (2023-05-04T11:35:23Z)
- Sequential Topic Selection Model with Latent Variable for Topic-Grounded Dialogue [21.1427816176227]
We propose a novel approach, named Sequential Global Topic Attention (SGTA), to exploit topic transitions across conversations.
Our model outperforms competitive baselines on prediction and generation tasks.
arXiv Detail & Related papers (2022-10-17T07:34:14Z)
- Topic-Aware Contrastive Learning for Abstractive Dialogue Summarization [41.75442239197745]
This work proposes two topic-aware contrastive learning objectives, namely a coherence-detection objective and a sub-summary generation objective (a hedged sketch of the coherence objective appears after this list).
Experiments on benchmark datasets demonstrate that the proposed simple method significantly outperforms strong baselines.
arXiv Detail & Related papers (2021-09-10T17:03:25Z)
- Response Selection for Multi-Party Conversations with Dynamic Topic Tracking [63.15158355071206]
We frame response selection as a dynamic topic tracking task to match the topic between the response and relevant conversation context.
We propose a novel multi-task learning framework that supports efficient encoding through large pretrained models.
Experimental results on the DSTC-8 Ubuntu IRC dataset show state-of-the-art results in response selection and topic disentanglement tasks.
arXiv Detail & Related papers (2020-10-15T14:21:38Z)
- Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization [72.54873655114844]
Text summarization is one of the most challenging and interesting problems in NLP.
This work proposes a multi-view sequence-to-sequence model by first extracting conversational structures of unstructured daily chats from different views to represent conversations.
Experiments on a large-scale dialogue summarization corpus demonstrate that the methods significantly outperform previous state-of-the-art models in both automatic evaluation and human judgment.
arXiv Detail & Related papers (2020-10-04T20:12:44Z)
- Modeling Topical Relevance for Multi-Turn Dialogue Generation [61.87165077442267]
We propose a new model, named STAR-BTM, to tackle the problem of topic drift in multi-turn dialogue.
The Biterm Topic Model is pre-trained on the whole training dataset; the topic-level attention weights are then computed from the topic representation of each context (a sketch of this weighting step appears after this list).
Experimental results on both Chinese customer services data and English Ubuntu dialogue data show that STAR-BTM significantly outperforms several state-of-the-art methods.
arXiv Detail & Related papers (2020-09-27T03:33:22Z)
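The STAR-BTM entry above computes topic-level attention weights from a pre-trained Biterm Topic Model. A minimal sketch of that weighting step, treating the BTM topic mixtures as given; the shapes and the dot-product scorer are assumptions for illustration.

```python
# Illustrative topic-level attention for the STAR-BTM entry above: weight
# context utterances by how well their topic mixtures match the current
# turn's topic representation. Shapes and scoring are assumptions.
import torch
import torch.nn.functional as F

def topic_attention(topic_repr, query):
    # topic_repr: (num_utterances, k) topic mixture per context utterance
    # query: (k,) topic representation of the current turn
    scores = topic_repr @ query           # (num_utterances,)
    weights = F.softmax(scores, dim=-1)   # topic-level attention weights
    context = weights @ topic_repr        # (k,) topic-aware context summary
    return weights, context
```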
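The topic-aware contrastive learning entry further up names a coherence-detection objective. A hedged, NT-Xent-style sketch of one such objective follows; the encoder, the batch construction, and the temperature are assumptions, not the paper's exact loss.

```python
# Hedged sketch of a coherence-detection contrastive objective: pull each
# dialogue window toward its true next utterance and push it away from
# in-batch negatives. Temperature and pairing scheme are assumptions.
import torch
import torch.nn.functional as F

def coherence_contrastive_loss(window_emb, next_emb, temperature=0.1):
    # window_emb: (B, d) embeddings of dialogue windows
    # next_emb:   (B, d) embeddings of each window's true next utterance
    w = F.normalize(window_emb, dim=-1)
    n = F.normalize(next_emb, dim=-1)
    logits = w @ n.T / temperature     # (B, B) similarity matrix
    targets = torch.arange(w.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```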