DialBERT: A Hierarchical Pre-Trained Model for Conversation
Disentanglement
- URL: http://arxiv.org/abs/2004.03760v2
- Date: Mon, 13 Sep 2021 03:00:27 GMT
- Title: DialBERT: A Hierarchical Pre-Trained Model for Conversation
Disentanglement
- Authors: Tianda Li, Jia-Chen Gu, Xiaodan Zhu, Quan Liu, Zhen-Hua Ling, Zhiming
Su, Si Wei
- Abstract summary: We propose a new model, named Dialogue BERT (DialBERT), which integrates local and global semantics in a single stream of messages to disentangle conversations that are mixed together.
We employ BERT to capture the matching information in each utterance pair at the utterance-level, and use a BiLSTM to aggregate and incorporate the context-level information.
With only a 3% increase in parameters, DialBERT attains a 12% improvement over BERT in F1 score.
- Score: 47.403802900555576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentanglement is a problem in which multiple conversations occur in the
same channel simultaneously, and the listener should decide which utterance is
part of the conversation he will respond to. We propose a new model, named
Dialogue BERT (DialBERT), which integrates local and global semantics in a
single stream of messages to disentangle conversations that are mixed together.
We employ BERT to capture the matching information in each utterance pair at
the utterance-level, and use a BiLSTM to aggregate and incorporate the
context-level information. With only a 3% increase in parameters, DialBERT
attains a 12% improvement over BERT in F1 score. The model achieves a
state-of-the-art result on a new dataset proposed by IBM and surpasses
previous work by a substantial margin.
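The abstract describes a two-level architecture: BERT scores each (context utterance, target utterance) pair at the utterance level, and a BiLSTM aggregates those pair representations at the context level. Below is a minimal sketch of that idea, assuming a PyTorch/Hugging Face setup; the scoring head, hidden sizes, and input packing are illustrative assumptions rather than the paper's exact configuration.

```python
# Minimal sketch of a DialBERT-style two-level encoder: BERT encodes each
# (context utterance, target utterance) pair, a BiLSTM aggregates the pair
# representations across the context. Head design and sizes are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class DialBERTSketch(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", lstm_hidden=384):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder_name)
        hidden = self.bert.config.hidden_size
        # Context-level aggregation over the sequence of pair embeddings.
        self.bilstm = nn.LSTM(hidden, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # One score per context utterance: how likely it is the parent of
        # the target utterance in the disentanglement task.
        self.scorer = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        # Each row packs one "[CLS] context_utt [SEP] target_utt [SEP]" pair;
        # shapes are (batch, num_context_utts, seq_len).
        b, n, l = input_ids.shape
        out = self.bert(input_ids=input_ids.reshape(b * n, l),
                        attention_mask=attention_mask.reshape(b * n, l),
                        token_type_ids=token_type_ids.reshape(b * n, l))
        pair_repr = out.last_hidden_state[:, 0]        # [CLS] per pair (utterance level)
        pair_repr = pair_repr.reshape(b, n, -1)
        context_repr, _ = self.bilstm(pair_repr)       # context level
        return self.scorer(context_repr).squeeze(-1)   # (batch, num_context_utts)
```

Training such a model would typically minimize cross-entropy over the candidate parent utterances for each target message; that objective is likewise an assumption here, not taken from the paper.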
Related papers
- SSP: Self-Supervised Post-training for Conversational Search [63.28684982954115]
We propose Self-Supervised Post-training (SSP), a new post-training paradigm with three self-supervised tasks to efficiently initialize the conversational search model.
To verify the effectiveness of the proposed method, we apply the conversational encoder post-trained with SSP to the conversational search task using two benchmark datasets: CAsT-19 and CAsT-20.
arXiv Detail & Related papers (2023-07-02T13:36:36Z) - Adapted Multimodal BERT with Layer-wise Fusion for Sentiment Analysis [84.12658971655253]
We propose Adapted Multimodal BERT, a BERT-based architecture for multimodal tasks.
The adapter adjusts the pretrained language model for the task at hand, while the fusion layers perform task-specific, layer-wise fusion of audio-visual information with textual BERT representations.
In our ablations we see that this approach leads to efficient models that can outperform their fine-tuned counterparts and are robust to input noise.
arXiv Detail & Related papers (2022-12-01T17:31:42Z) - CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog
Evaluation [75.60156479374416]
CGoDial is a new challenging and comprehensive Chinese benchmark for Goal-oriented Dialog evaluation.
It contains 96,763 dialog sessions and 574,949 dialog turns in total, covering three datasets with different knowledge sources.
To bridge the gap between academic benchmarks and spoken dialog scenarios, we either collect data from real conversations or add spoken features to existing datasets via crowd-sourcing.
arXiv Detail & Related papers (2022-11-21T16:21:41Z) - Dial2vec: Self-Guided Contrastive Learning of Unsupervised Dialogue
Embeddings [41.79937481022846]
We introduce the task of learning unsupervised dialogue embeddings.
Trivial approaches such as combining pre-trained word or sentence embeddings and encoding through pre-trained language models have been shown to be feasible.
We propose a self-guided contrastive learning approach named dial2vec.
arXiv Detail & Related papers (2022-10-27T11:14:06Z) - Findings on Conversation Disentanglement [28.874162427052905]
We build a model that jointly learns utterance-to-utterance and utterance-to-thread classification.
Experiments on the Ubuntu IRC dataset show that this approach has the potential to outperform the conventional greedy approach.
arXiv Detail & Related papers (2021-12-10T05:54:48Z) - Emotion Dynamics Modeling via BERT [7.3785751096660555]
We develop a series of BERT-based models to capture the inter-interlocutor and intra-interlocutor dependencies of the conversational emotion dynamics.
Our proposed models can attain around 5% and 10% improvement over the state-of-the-art baselines, respectively.
arXiv Detail & Related papers (2021-04-15T05:58:48Z) - DialogBERT: Discourse-Aware Response Generation via Learning to Recover
and Rank Utterances [18.199473005335093]
This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models.
To efficiently capture the discourse-level coherence among utterances, we propose two training objectives, including masked utterance regression.
Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms the baselines.
arXiv Detail & Related papers (2020-12-03T09:06:23Z) - Modeling Topical Relevance for Multi-Turn Dialogue Generation [61.87165077442267]
We propose a new model, named STAR-BTM, to tackle the problem of topic drift in multi-turn dialogue.
The Biterm Topic Model is pre-trained on the whole training dataset. Then, the topic level attention weights are computed based on the topic representation of each context.
Experimental results on both Chinese customer services data and English Ubuntu dialogue data show that STAR-BTM significantly outperforms several state-of-the-art methods.
arXiv Detail & Related papers (2020-09-27T03:33:22Z) - BERT-based Ensembles for Modeling Disclosure and Support in
Conversational Social Media Text [9.475039534437332]
We introduce a predictive ensemble model exploiting the finetuned contextualized word embeddings, RoBERTa and ALBERT.
We show that our model outperforms the base models in all considered metrics, achieving an improvement of 3% in the F1 score.
arXiv Detail & Related papers (2020-06-01T19:52:01Z) - TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented
Dialogue [113.45485470103762]
In this work, we unify nine human-human and multi-turn task-oriented dialogue datasets for language modeling.
To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling.
arXiv Detail & Related papers (2020-04-15T04:09:05Z)
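The TOD-BERT entry above describes one concrete pre-training tweak: prefixing each turn with a user or system token before masked language modeling. A minimal sketch of that idea with the Hugging Face transformers library follows; the token names "[USR]"/"[SYS]" and the masking probability are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: add role tokens to the vocabulary, tag each turn with its speaker,
# and run standard masked language modeling over the role-tagged dialogue.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[USR]", "[SYS]"]})

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))  # make room for the role tokens

def flatten_dialogue(turns):
    """Concatenate (speaker, utterance) turns into one role-tagged string."""
    return " ".join(f"[USR] {u}" if s == "user" else f"[SYS] {u}"
                    for s, u in turns)

dialogue = [("user", "I need a cheap hotel in the centre."),
            ("system", "Alexander B&B is a cheap guesthouse in the centre.")]
batch = tokenizer(flatten_dialogue(dialogue), return_tensors="pt",
                  truncation=True)

# Standard MLM collator; the role tokens now appear in the pre-training stream.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)
masked = collator([{"input_ids": batch["input_ids"][0]}])
loss = model(**masked).loss
```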