Structured Dialogue Discourse Parsing
- URL: http://arxiv.org/abs/2306.15103v1
- Date: Mon, 26 Jun 2023 22:51:01 GMT
- Title: Structured Dialogue Discourse Parsing
- Authors: Ta-Chung Chi and Alexander I. Rudnicky
- Abstract summary: Dialogue discourse parsing aims to uncover the internal structure of a multi-participant conversation.
We propose a principled method that improves upon previous work from two perspectives: encoding and decoding.
Experiments show that our method achieves a new state of the art, surpassing the previous best model by 2.3 F1 on STAC and 1.5 F1 on Molweni.
- Score: 79.37200787463917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue discourse parsing aims to uncover the internal structure of a
multi-participant conversation by finding all the discourse~\emph{links} and
corresponding~\emph{relations}. Previous work either treats this task as a
series of independent multiple-choice problems, in which the link existence and
relations are decoded separately, or the encoding is restricted to only local
interaction, ignoring the holistic structural information. In contrast, we
propose a principled method that improves upon previous work from two
perspectives: encoding and decoding. From the encoding side, we perform
structured encoding on the adjacency matrix followed by the matrix-tree
learning algorithm, where all discourse links and relations in the dialogue are
jointly optimized based on latent tree-level distribution. From the decoding
side, we perform structured inference using the modified Chu-Liu-Edmonds
algorithm, which explicitly generates the labeled multi-root non-projective
spanning tree that best captures the discourse structure. In addition, unlike
in previous work, we do not rely on hand-crafted features; this improves the
model's robustness. Experiments show that our method achieves a new
state of the art, surpassing the previous model by 2.3 on STAC and 1.5 on
Molweni (F1 scores). \footnote{Code released
at~\url{https://github.com/chijames/structured_dialogue_discourse_parsing}.}
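The matrix-tree step in the abstract can be illustrated numerically. Below is a minimal sketch of the Matrix-Tree Theorem for rooted directed spanning trees, the standard construction behind latent-tree training; the function name and toy weights are illustrative and not taken from the paper.

```python
import numpy as np

def spanning_tree_partition(weights, root_scores):
    """Sum of products of edge weights over all rooted directed
    spanning trees, computed via the Matrix-Tree Theorem.

    weights[i, j]  -- weight of the edge head i -> dependent j
    root_scores[j] -- weight of choosing node j as the root
    """
    # Graph Laplacian: L[j, j] = sum_i w(i, j), L[i, j] = -w(i, j)
    L = -weights.copy()
    np.fill_diagonal(L, weights.sum(axis=0) - np.diag(weights))
    # Replace the first row with the root-selection scores
    L[0, :] = root_scores
    return np.linalg.det(L)

# Toy 2-utterance dialogue: the only trees are {root=0, edge 0->1}
# and {root=1, edge 1->0}, so the total is 1*2 + 4*3 = 14.
w = np.array([[0.0, 2.0],
              [3.0, 0.0]])
r = np.array([1.0, 4.0])
print(spanning_tree_partition(w, r))  # = 14 (up to float error)
```

The log of this determinant is the normalizer of the tree-level distribution, which is what lets all links and relations be optimized jointly; in practice its gradient comes from automatic differentiation of the determinant.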
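For the decoding side, a reference implementation of the standard single-root Chu-Liu-Edmonds algorithm is sketched below; the paper uses a modified multi-root variant, so treat this as a minimal baseline, with the node ids and toy scores being illustrative.

```python
def _find_cycle(head):
    """Return one cycle in the head-pointer graph, or None."""
    for start in head:
        seen, v = [], start
        while v in head and v not in seen:
            seen.append(v)
            v = head[v]
        if v in seen:                      # walked back into the path
            return seen[seen.index(v):]
    return None

def chu_liu_edmonds(scores, root):
    """Maximum spanning arborescence of {head: {dep: score}}, rooted
    at `root`; assumes integer node ids.  Returns {dep: head}."""
    nodes = set(scores) | {j for hs in scores.values() for j in hs}
    # 1. Greedy step: best incoming edge for every non-root node.
    best = {}
    for j in nodes - {root}:
        best[j] = max((i for i in scores if j in scores[i] and i != j),
                      key=lambda i: scores[i][j])
    cycle = _find_cycle(best)
    if cycle is None:
        return best
    # 2. Contract the cycle into a fresh super-node `c`.
    C, c = set(cycle), max(nodes) + 1
    new, enter, leave = {}, {}, {}
    for i in scores:
        for j, w in scores[i].items():
            if i in C and j in C:
                continue
            if j in C:                     # edge entering the cycle
                adj = w - scores[best[j]][j]
                if adj > new.setdefault(i, {}).get(c, float("-inf")):
                    new[i][c] = adj
                    enter[i] = j
            elif i in C:                   # edge leaving the cycle
                if w > new.setdefault(c, {}).get(j, float("-inf")):
                    new[c][j] = w
                    leave[j] = i
            else:
                new.setdefault(i, {})[j] = w
    # 3. Solve the contracted problem, then expand the cycle.
    sub = chu_liu_edmonds(new, root)
    head = {}
    for j, i in sub.items():
        if j == c:
            head[enter[i]] = i             # entry edge breaks the cycle
        elif i == c:
            head[j] = leave[j]
        else:
            head[j] = i
    for j in C:
        head.setdefault(j, best[j])        # remaining cycle edges survive
    return head

# Greedy picks the 1<->2 cycle; CLE breaks it with the 0->1 edge.
scores = {0: {1: 5, 2: 1}, 1: {2: 10}, 2: {1: 10}}
print(chu_liu_edmonds(scores, root=0))  # {1: 0, 2: 1}
```

The contraction step is what makes the output a true maximum spanning arborescence rather than a greedy head assignment, which is exactly the difference between structured inference and independent per-link decoding.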
Related papers
- Contextual Document Embeddings [77.22328616983417]
We propose two complementary methods for contextualized document embeddings.
First, an alternative contrastive learning objective that explicitly incorporates the document neighbors into the intra-batch contextual loss.
Second, a new contextual architecture that explicitly encodes neighbor document information into the encoded representation.
arXiv Detail & Related papers (2024-10-03T14:33:34Z)
- High-order Joint Constituency and Dependency Parsing [15.697429723696011]
We revisit the topic of jointly parsing constituency and dependency trees, i.e., to produce compatible constituency and dependency trees simultaneously for input sentences.
We conduct experiments and analysis on seven languages, covering both rich-resource and low-resource scenarios.
arXiv Detail & Related papers (2023-09-21T08:45:41Z)
- DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition [27.977742959064916]
We propose a prompt-based path prediction method to utilize the interactive information and intrinsic senses among the hierarchy in IDRR.
This is the first work that injects such structure information into pre-trained language models via prompt tuning.
arXiv Detail & Related papers (2023-05-06T08:16:07Z)
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation [61.50286000143233]
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- Correspondence Matters for Video Referring Expression Comprehension [64.60046797561455]
Video Referring Expression Comprehension (REC) aims to localize the referent objects described in the sentence to visual regions in the video frames.
Existing methods suffer from two problems: 1) inconsistent localization results across video frames; 2) confusion between the referent and contextual objects.
We propose a novel Dual Correspondence Network (dubbed DCNet) that explicitly enhances the dense associations in both the inter-frame and cross-modal manners.
arXiv Detail & Related papers (2022-07-21T10:31:39Z)
- Incorporating Constituent Syntax for Coreference Resolution [50.71868417008133]
We propose a graph-based method to incorporate constituent syntactic structures.
We also explore utilising higher-order neighbourhood information to encode rich structures in constituent trees.
Experiments on the English and Chinese portions of OntoNotes 5.0 benchmark show that our proposed model either beats a strong baseline or achieves new state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T07:40:42Z)
- Contrastive Learning for Source Code with Structural and Functional Properties [66.10710134948478]
We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings the functionally equivalent code closer and distinct code further through a contrastive learning objective.
arXiv Detail & Related papers (2021-10-08T02:56:43Z)
- R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling [36.61173494449218]
This paper proposes a model based on differentiable CKY style binary trees to emulate the composition process.
We extend the bidirectional language model pre-training objective to this architecture, attempting to predict each word given its left and right abstraction nodes.
To scale up our approach, we also introduce an efficient pruned tree induction algorithm to enable encoding in just a linear number of composition steps.
arXiv Detail & Related papers (2021-07-02T11:00:46Z)
- RST Parsing from Scratch [14.548146390081778]
We introduce a novel end-to-end formulation of document-level discourse parsing in the Rhetorical Structure Theory (RST) framework.
Our framework facilitates discourse parsing from scratch without requiring discourse segmentation as a prerequisite.
Our unified parsing model adopts a beam search to decode the best tree structure by searching through a space of high-scoring trees.
arXiv Detail & Related papers (2021-05-23T06:19:38Z)
- Transformer-Based Neural Text Generation with Syntactic Guidance [0.0]
We study the problem of using (partial) constituency parse trees as syntactic guidance for controlled text generation.
Our method first expands a partial template parse tree to a full-fledged parse tree tailored for the input source text.
Our experiments in the controlled paraphrasing task show that our method outperforms SOTA models both semantically and syntactically.
arXiv Detail & Related papers (2020-10-05T01:33:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.