A Deeper (Autoregressive) Approach to Non-Convergent Discourse Parsing
- URL: http://arxiv.org/abs/2305.12510v1
- Date: Sun, 21 May 2023 17:04:21 GMT
- Title: A Deeper (Autoregressive) Approach to Non-Convergent Discourse Parsing
- Authors: Yoav Tulpan, Oren Tsur
- Abstract summary: We present a unified model for Non-Convergent Discourse Parsing that does not require any additional input other than the previous dialog utterances.
Our model achieves results comparable with SOTA, without using label collocation and without training a unique architecture/model for each label.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Online social platforms provide a bustling arena for information-sharing and
for multi-party discussions. Various frameworks for dialogic discourse parsing
were developed and used for the processing of discussions and for predicting
the productivity of a dialogue. However, most of these frameworks are not
suitable for the analysis of contentious discussions that are commonplace in
many online platforms. A novel multi-label scheme for contentious dialog
parsing was recently introduced by Zakharov et al. (2021). While the schema is
well developed, the computational approach they provide is both naive and
inefficient, as a different model (architecture), using a different
representation of the input, is trained for each of the 31 tags in the
annotation scheme. Moreover, all their models assume full knowledge of label
collocations and context, which is unlikely in any realistic setting. In this
work, we present a unified model for Non-Convergent Discourse Parsing that does
not require any additional input other than the previous dialog utterances. We
fine-tuned a RoBERTa backbone, combining embeddings of the utterance, the
context and the labels through GRN layers and an asymmetric loss function.
Overall, our model achieves results comparable with SOTA, without using label
collocation and without training a unique architecture/model for each label.
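
The "asymmetric loss function" named in the abstract plausibly refers to the asymmetric multi-label loss of Ridnik et al. (2020), a common choice for sparse multi-label tagging such as the 31-tag scheme here. Below is a minimal NumPy sketch of that loss under this assumption; the hyperparameter values (`gamma_pos`, `gamma_neg`, `clip`) are illustrative defaults, not values reported by the paper.

```python
import numpy as np

def asymmetric_loss(logits, targets, gamma_pos=1.0, gamma_neg=4.0, clip=0.05):
    """Asymmetric loss for multi-label classification (sketch).

    Positive and negative labels get different focusing exponents, and a
    probability margin `clip` shifts easy negatives toward zero contribution,
    which helps when most of the 31 tags are absent for a given utterance.
    """
    p = 1.0 / (1.0 + np.exp(-logits))            # per-label sigmoid probability
    p_m = np.clip(p - clip, 1e-8, 1.0)           # margin-shifted prob for negatives
    pos_term = targets * ((1 - p) ** gamma_pos) * np.log(np.clip(p, 1e-8, 1.0))
    neg_term = (1 - targets) * (p_m ** gamma_neg) * np.log(1 - p_m + 1e-8)
    return -np.mean(pos_term + neg_term)         # scalar loss, lower is better
```

The higher `gamma_neg` down-weights well-classified negatives far more aggressively than positives, so the gradient is dominated by the few present labels rather than the many absent ones.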
Related papers
- Revisiting Conversation Discourse for Dialogue Disentanglement (2023-06-06)
  We propose enhancing dialogue disentanglement by taking full advantage of dialogue discourse characteristics. We develop a structure-aware framework to integrate the rich structural features for better modeling of the conversational semantic context. Our work has great potential to facilitate broader multi-party, multi-thread dialogue applications.
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference (2023-05-24)
  We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying. To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
- Unsupervised Dialogue Topic Segmentation with Topic-aware Utterance Representation (2023-05-04)
  Dialogue Topic Segmentation (DTS) plays an essential role in a variety of dialogue modeling tasks. We propose a novel unsupervised DTS framework, which learns topic-aware utterance representations from unlabeled dialogue data.
- DIONYSUS: A Pre-trained Model for Low-Resource Dialogue Summarization (2022-12-20)
  DIONYSUS is a pre-trained encoder-decoder model for summarizing dialogues in any new domain. Our experiments show that DIONYSUS outperforms existing methods on six datasets.
- SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding (2022-09-14)
  We propose a tree-structured pre-trained conversation model, which learns dialog representations from limited labeled dialogs and large-scale unlabeled dialog corpora. Our method achieves new state-of-the-art results on the DialoGLUE benchmark, which consists of seven datasets and four popular dialog understanding tasks.
- Representation Learning for Conversational Data using Discourse Mutual Information Maximization (2021-12-04)
  We argue that structure-unaware word-by-word generation is not suitable for effective conversation modeling. We propose a structure-aware mutual-information-based loss function, DMI, for training dialog-representation models. Our models show the most promising performance on the dialog evaluation task DailyDialog++, in both random and adversarial negative scenarios.
- Discourse Parsing of Contentious, Non-Convergent Online Discussions (2020-12-08)
  Inspired by the Bakhtinian theory of Dialogism, we propose a novel theoretical and computational framework. We develop a novel discourse annotation schema which reflects a hierarchy of discursive strategies, and we share the first labeled dataset of contentious, non-convergent online discussions.
- DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances (2020-12-03)
  This paper presents DialogBERT, a novel conversational response generation model that enhances previous PLM-based dialogue models. To efficiently capture discourse-level coherence among utterances, we propose two training objectives, including masked utterance regression. Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms the baselines.
- Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation (2020-01-23)
  In this work, we extend this approach to the task of dialog state tracking for goal-oriented dialogs. We propose the Variational Hierarchical Dialog Autoencoder (VHDA) for modeling the complete aspects of goal-oriented dialogs. Experiments on various dialog datasets show that our model improves the downstream dialog trackers' robustness via generative data augmentation.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.