Unsupervised Learning of Discourse Structures using a Tree Autoencoder
- URL: http://arxiv.org/abs/2012.09446v1
- Date: Thu, 17 Dec 2020 08:40:34 GMT
- Title: Unsupervised Learning of Discourse Structures using a Tree Autoencoder
- Authors: Patrick Huber and Giuseppe Carenini
- Abstract summary: We propose a new strategy to generate tree structures in a task-agnostic, unsupervised fashion by extending a latent tree induction framework with an auto-encoding objective.
The proposed approach can be applied to any tree-structured objective, such as syntactic parsing, discourse parsing and others.
In this paper, we infer general tree structures of natural text in multiple domains, showing promising results on a diverse set of tasks.
- Score: 8.005512864082126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discourse information, as postulated by popular discourse theories, such as
RST and PDTB, has been shown to improve an increasing number of downstream NLP
tasks, showing positive effects and synergies of discourse with important
real-world applications. While methods for incorporating discourse become more
and more sophisticated, the growing need for robust and general discourse
structures has not been sufficiently met by current discourse parsers, usually
trained on small-scale datasets in a strictly limited number of domains. This
makes predictions for arbitrary tasks noisy and unreliable. The overall
resulting lack of high-quality, high-quantity discourse trees poses a severe
limitation to further progress. In order to alleviate this shortcoming, we
propose a new strategy to generate tree structures in a task-agnostic,
unsupervised fashion by extending a latent tree induction framework with an
auto-encoding objective. The proposed approach can be applied to any
tree-structured objective, such as syntactic parsing, discourse parsing and
others. However, due to the especially difficult annotation process to generate
discourse trees, we initially develop a method to generate larger and more
diverse discourse treebanks. In this paper, we infer general tree
structures of natural text in multiple domains, showing promising results on a
diverse set of tasks.
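To make the auto-encoding objective concrete, here is a minimal, self-contained sketch of a tree auto-encoder over span embeddings (e.g., EDU or sentence vectors). It illustrates the general recipe rather than the authors' implementation: the greedy adjacent-pair scorer, the parameter shapes, and the tanh composition/splitting functions are all simplifying assumptions.

```python
# Illustrative sketch of a tree auto-encoder over text-span embeddings.
# NOT the paper's implementation: the encoder greedily merges the most
# compatible adjacent pair into a parent (latent tree induction), and the
# decoder mirrors each merge to reconstruct the leaves, yielding an
# unsupervised reconstruction loss.
import numpy as np

rng = np.random.default_rng(0)
D = 16                                            # embedding size (assumed)
W_enc = rng.normal(scale=0.1, size=(D, 2 * D))    # compose [left; right] -> parent
W_dec = rng.normal(scale=0.1, size=(2 * D, D))    # split parent -> [left; right]

def compose(left, right):
    return np.tanh(W_enc @ np.concatenate([left, right]))

def split(parent):
    out = np.tanh(W_dec @ parent)
    return out[:D], out[D:]

def encode(leaves):
    """Greedily merge adjacent vectors; return root vector and tree shape."""
    nodes = [(vec, i) for i, vec in enumerate(leaves)]   # (vector, tree)
    while len(nodes) > 1:
        # score every adjacent pair; merge the best one (assumed scorer)
        scores = [float(nodes[i][0] @ nodes[i + 1][0]) for i in range(len(nodes) - 1)]
        k = int(np.argmax(scores))
        parent = compose(nodes[k][0], nodes[k + 1][0])
        nodes[k:k + 2] = [(parent, (nodes[k][1], nodes[k + 1][1]))]
    return nodes[0]

def decode(vec, tree):
    """Mirror the induced tree top-down to reconstruct leaf vectors."""
    if isinstance(tree, int):
        return [vec]
    left_vec, right_vec = split(vec)
    return decode(left_vec, tree[0]) + decode(right_vec, tree[1])

leaves = [rng.normal(size=D) for _ in range(5)]  # stand-ins for EDU embeddings
root, tree = encode(leaves)
recon = decode(root, tree)
loss = float(np.mean([(a - b) @ (a - b) for a, b in zip(leaves, recon)]))
print("induced tree:", tree)
print("reconstruction loss:", round(loss, 4))
```

In a trained system, the reconstruction loss would be backpropagated through the composition and splitting parameters, so that useful tree structures emerge without any tree supervision.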
Related papers
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Revisiting Conversation Discourse for Dialogue Disentanglement [88.3386821205896]
We propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics.
We develop a structure-aware framework that integrates rich structural features to better model the conversational semantic context.
Our work has great potential to facilitate broader multi-party multi-thread dialogue applications.
arXiv Detail & Related papers (2023-06-06T19:17:47Z)
- RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees [47.745218107037786]
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of using an RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
- Large Discourse Treebanks from Scalable Distant Supervision [30.615883375573432]
We propose a framework to generate "silver-standard" discourse trees from distant supervision on the auxiliary task of sentiment analysis.
"Silver-standard" discourse trees are trained on larger, more diverse and domain-independent datasets.
arXiv Detail & Related papers (2022-10-18T03:33:43Z)
- Unsupervised Inference of Data-Driven Discourse Structures using a Tree Auto-Encoder [30.615883375573432]
We propose a new strategy to generate tree structures in a task-agnostic, unsupervised fashion by extending a latent tree induction framework with an auto-encoding objective.
The proposed approach can be applied to any tree-structured objective, such as syntactic parsing, discourse parsing and others.
arXiv Detail & Related papers (2022-10-18T03:28:39Z)
- Incorporating Constituent Syntax for Coreference Resolution [50.71868417008133]
We propose a graph-based method to incorporate constituent syntactic structures.
We also explore utilising higher-order neighbourhood information to encode the rich structures in constituent trees.
Experiments on the English and Chinese portions of OntoNotes 5.0 benchmark show that our proposed model either beats a strong baseline or achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T07:40:42Z)
- Predicting Above-Sentence Discourse Structure using Distant Supervision from Topic Segmentation [8.688675709130289]
RST-style discourse parsing plays a vital role in many NLP tasks.
Despite its importance, one of the most prevalent limitations in modern-day discourse parsing is the lack of large-scale datasets.
arXiv Detail & Related papers (2021-12-12T10:16:45Z)
- MEGA RST Discourse Treebanks with Structure and Nuclearity from Scalable Distant Sentiment Supervision [30.615883375573432]
We present a novel methodology to automatically generate discourse treebanks using distant supervision from sentiment-annotated datasets.
Our approach generates trees incorporating structure and nuclearity for documents of arbitrary length by relying on an efficient beam-search strategy (a toy sketch of the beam-search idea follows the list below).
Experiments indicate that a discourse parser trained on our MEGA-DT treebank delivers promising inter-domain performance gains.
arXiv Detail & Related papers (2020-11-05T18:22:38Z)
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search (a toy sketch of the core recursion follows the list below).
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach [78.77265671634454]
We make use of a multi-task objective, i.e., the models simultaneously predict words as well as ground-truth parse trees in a form called "syntactic distances" (a small sketch of this representation follows the list below).
Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground-truth parse trees are provided as additional training signals, the model is able to achieve lower perplexity and induce trees with better quality.
arXiv Detail & Related papers (2020-05-12T15:35:00Z)
- Discontinuous Constituent Parsing with Pointer Networks [0.34376560669160383]
Discontinuous constituent trees are crucial for representing all grammatical phenomena of languages such as German.
Recent advances in dependency parsing have shown that Pointer Networks excel in efficiently parsing syntactic relations between words in a sentence.
We propose a novel neural network architecture that is able to generate the most accurate discontinuous constituent representations.
arXiv Detail & Related papers (2020-02-05T15:12:03Z)
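As referenced in the MurTree entry above, the following toy recursion shows the core idea behind dynamic-programming approaches to optimal decision trees: solve depth-bounded subproblems over instance subsets and memoize them. The dataset is hypothetical, and MurTree's specialized bounds, caching schemes, and frequency-counting optimizations are deliberately omitted.

```python
# Toy sketch of the DP idea behind optimal classification trees: pick the
# binary feature whose split minimizes total misclassifications under a
# depth budget, memoizing solved subproblems.
from functools import lru_cache

DATA = (  # (binary feature tuple, label) -- hypothetical toy dataset
    ((0, 0, 1), 0), ((0, 1, 1), 0), ((1, 0, 0), 1),
    ((1, 1, 0), 1), ((0, 1, 0), 0), ((1, 1, 1), 1),
)
N_FEATURES = 3

def misclassified(rows):
    """Errors of the best single-label leaf on these rows."""
    ones = sum(label for _, label in rows)
    return min(ones, len(rows) - ones)

@lru_cache(maxsize=None)
def best_tree(rows, depth):
    """Minimum misclassifications achievable on `rows` with <= depth splits."""
    leaf = misclassified(rows)
    if depth == 0 or leaf == 0:
        return leaf
    best = leaf
    for f in range(N_FEATURES):
        left = tuple(r for r in rows if r[0][f] == 0)
        right = tuple(r for r in rows if r[0][f] == 1)
        if not left or not right:
            continue  # split does not separate anything
        best = min(best, best_tree(left, depth - 1) + best_tree(right, depth - 1))
    return best

print("optimal depth-2 error:", best_tree(DATA, 2))
```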
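For the syntactic-distance entry above, this sketch shows the representation itself, which is what makes the multi-task objective possible: a binary tree over n words maps to n-1 values (heights of lowest common ancestors of adjacent words), and splitting recursively at the maximum distance recovers the tree. The paper's language-modeling setup is not reproduced here.

```python
# Convert a binary parse tree to its syntactic distances and back.
# The distance between adjacent words i and i+1 is the height of their
# lowest common ancestor; splitting at the maximum recovers the tree.

def tree_to_distances(tree):
    """tree: nested tuples over word strings -> (words, distances)."""
    if isinstance(tree, str):
        return [tree], []
    lw, ld = tree_to_distances(tree[0])
    rw, rd = tree_to_distances(tree[1])
    height = 1 + max(ld + rd + [0])
    return lw + rw, ld + [height] + rd

def distances_to_tree(words, dists):
    """Invert the mapping: split at the largest distance."""
    if not dists:
        return words[0]
    k = max(range(len(dists)), key=dists.__getitem__)
    return (distances_to_tree(words[:k + 1], dists[:k]),
            distances_to_tree(words[k + 1:], dists[k + 1:]))

tree = (("the", "cat"), ("sat", ("on", "mat")))
words, dists = tree_to_distances(tree)
print(words, dists)                       # the distances encode the tree shape
assert distances_to_tree(words, dists) == tree
```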
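Finally, for the MEGA-DT entry, a toy beam search over adjacent-span merges suggests how distant sentiment supervision can guide tree construction: states keep partially built trees, and merges whose aggregated sentiment drifts from the document label are penalized. The averaging aggregation, scoring, beam width, and toy sentiment values are all assumptions; the paper's method, including nuclearity assignment, is substantially more involved.

```python
# Toy beam search assembling a binary tree over EDU-level sentiment scores,
# preferring trees whose aggregated sentiment matches the document label.
import heapq

def build_tree(edu_sentiments, doc_label, beam_size=4):
    """Each beam state: (score, list of (sentiment, tree) nodes)."""
    start = [(s, i) for i, s in enumerate(edu_sentiments)]
    beam = [(0.0, start)]
    while len(beam[0][1]) > 1:
        candidates = []
        for score, nodes in beam:
            for k in range(len(nodes) - 1):
                (ls, lt), (rs, rt) = nodes[k], nodes[k + 1]
                parent = ((ls + rs) / 2, (lt, rt))   # assumed aggregation rule
                merged = nodes[:k] + [parent] + nodes[k + 2:]
                # partial score: penalize drift from the document label
                candidates.append((score - abs(parent[0] - doc_label), merged))
        beam = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return beam[0][1][0][1]   # tree of the best final state

# five EDUs with toy sentiment in [-1, 1]; document labeled positive
print(build_tree([0.9, -0.2, 0.4, 0.8, 0.1], doc_label=1.0))
```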