A Simple Global Neural Discourse Parser
- URL: http://arxiv.org/abs/2009.01312v2
- Date: Tue, 8 Sep 2020 15:33:34 GMT
- Title: A Simple Global Neural Discourse Parser
- Authors: Yichu Zhou, Omri Koshorek, Vivek Srikumar and Jonathan Berant
- Abstract summary: We propose a simple chart-based neural discourse parser that does not require any manually-crafted features and is based on learned span representations only.
We empirically demonstrate that our model achieves the best performance among global parsers, and comparable performance to state-of-the-art greedy parsers.
- Score: 61.728994693410954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discourse parsing is largely dominated by greedy parsers with
manually-designed features, while global parsing is rare due to its
computational expense. In this paper, we propose a simple chart-based neural
discourse parser that does not require any manually-crafted features and is
based on learned span representations only. To overcome the computational
challenge, we propose an independence assumption between the label assigned to
a node in the tree and the splitting point that separates its children, which
results in tractable decoding. We empirically demonstrate that our model
achieves the best performance among global parsers, and comparable performance
to state-of-the-art greedy parsers, using only learned span representations.
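The independence assumption is what makes exact chart decoding affordable: the best label for a span and the best split point can each be argmaxed on their own inside a standard O(n^3) CKY loop. Below is a minimal decoding sketch under that assumption; the array names, shapes, and fencepost indexing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cky_decode(label_score, split_score):
    # label_score: (n+1, n+1, L); label_score[i, j, l] scores label l for span (i, j).
    # split_score: (n+1, n+1, n+1); split_score[i, k, j] scores splitting (i, j) at k.
    # Spans are fencepost-indexed over n elementary discourse units.
    n = label_score.shape[0] - 1
    best = np.zeros((n + 1, n + 1))   # best[i, j]: best subtree score for span (i, j)
    back = {}                         # backpointers: (i, j) -> (split point, label)

    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            # The label is argmaxed on its own: it does not depend on the split.
            lbl = int(np.argmax(label_score[i, j]))
            score = label_score[i, j, lbl]
            split = None
            if length > 1:
                # The split is argmaxed on its own: it does not depend on the label.
                cands = [split_score[i, k, j] + best[i, k] + best[k, j]
                         for k in range(i + 1, j)]
                split = i + 1 + int(np.argmax(cands))
                score += cands[split - i - 1]
            best[i, j] = score
            back[(i, j)] = (split, lbl)

    return best[0, n], back  # total tree score plus backpointers for tree recovery
```

Backtracking through `back` from the full span recovers the tree; the label loop and the split loop never interact, which is exactly the independence the abstract describes.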
Related papers
- Contextual Distortion Reveals Constituency: Masked Language Models are Implicit Parsers [7.558415495951758]
We propose a novel method for extracting parse trees from masked language models (LMs).
Our method computes a score for each span based on the distortion of contextual representations resulting from linguistic perturbations.
Our method consistently outperforms previous state-of-the-art methods on English with masked LMs, and also demonstrates superior performance in a multilingual setting.
arXiv Detail & Related papers (2023-06-01T13:10:48Z)
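A hedged sketch of the distortion idea from the entry above: mask a candidate span, re-encode the sentence, and measure how far the contextual vectors of the tokens outside the span move. Here `encode` is a hypothetical callable standing in for any masked-LM encoder, and the masking perturbation and Euclidean distance are illustrative choices; the paper's actual perturbations and scoring may differ.

```python
import numpy as np

def span_distortion(tokens, i, j, encode, mask_token="[MASK]"):
    original = encode(tokens)                       # (len(tokens), d) contextual vectors
    masked = tokens[:i] + [mask_token] * (j - i) + tokens[j:]
    perturbed = encode(masked)
    outside = [k for k in range(len(tokens)) if k < i or k >= j]
    # Average displacement of the representations *outside* the perturbed span.
    return float(np.mean(np.linalg.norm(original[outside] - perturbed[outside], axis=1)))
```

Spans that behave like constituents distort their context differently from non-constituents, so a chart decoder over these per-span scores can assemble an unsupervised parse tree.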
- Laziness Is a Virtue When It Comes to Compositionality in Neural Semantic Parsing [20.856601758389544]
We introduce a generation method for neural semantic parsing that constructs logical forms from the bottom up, beginning from the logical form's leaves.
We show that our novel bottom-up semantic parsing technique outperforms general-purpose approaches while also being competitive with comparable neural parsers.
arXiv Detail & Related papers (2023-05-07T17:53:08Z)
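A minimal sketch of the bottom-up construction described in the entry above, written as a generic beam search that grows logical forms from leaf subprograms; `leaf_candidates`, `combine`, `is_complete`, and `score` are hypothetical stand-ins for the learned components.

```python
def bottom_up_parse(utterance, leaf_candidates, combine, is_complete, score,
                    steps=10, beam=5):
    # Start from candidate leaf subprograms for the utterance.
    frontier = sorted(leaf_candidates(utterance), key=score, reverse=True)[:beam]
    for _ in range(steps):
        expanded = []
        for a in frontier:
            for b in frontier:
                expanded.extend(combine(a, b))  # zero or more well-typed larger forms
        if not expanded:
            break
        # Keep only the highest-scoring partial logical forms.
        frontier = sorted(set(frontier) | set(expanded), key=score, reverse=True)[:beam]
    complete = [f for f in frontier if is_complete(f)]
    return max(complete, key=score) if complete else None
```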
- Cascading and Direct Approaches to Unsupervised Constituency Parsing on Spoken Sentences [67.37544997614646]
We present the first study on unsupervised spoken constituency parsing.
The goal is to determine the spoken sentences' hierarchical syntactic structure in the form of constituency parse trees.
We show that accurate segmentation alone may be sufficient to parse spoken sentences accurately.
arXiv Detail & Related papers (2023-03-15T17:57:22Z)
- On The Ingredients of an Effective Zero-shot Semantic Parser [95.01623036661468]
We analyze zero-shot learning by paraphrasing training examples of canonical utterances and programs from a grammar.
We propose bridging these gaps using improved grammars, stronger paraphrasers, and efficient learning methods.
Our model achieves strong performance on two semantic parsing benchmarks (Scholar, Geo) with zero labeled data.
arXiv Detail & Related papers (2021-10-15T21:41:16Z)
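The zero-shot recipe in the entry above can be pictured as a small data-synthesis loop: sample (canonical utterance, program) pairs from a grammar, then multiply them with a paraphraser. A hedged sketch, with `grammar_samples` and `paraphraser` as hypothetical stand-ins:

```python
def synthesize_training_data(grammar_samples, paraphraser, k=3):
    # grammar_samples: iterable of (canonical utterance, program) pairs from a grammar.
    data = []
    for canonical, program in grammar_samples:
        data.append((canonical, program))
        for variant in paraphraser(canonical, num_return=k):
            data.append((variant, program))  # a paraphrase preserves the program
    return data  # parser training data built with zero human labels
```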
- Learning compositional structures for semantic graph parsing [81.41592892863979]
We show how AM dependency parsing can be trained directly with a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
arXiv Detail & Related papers (2021-06-08T14:20:07Z)
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation.
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
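The controlled-sublanguage idea from the entry above amounts to constrained decoding: at each step, the LM's next-token scores are intersected with the set of tokens that keep the prefix inside the sublanguage. A greedy sketch, with `lm_next_scores` and `valid_continuations` as hypothetical stand-ins:

```python
def constrained_generate(lm_next_scores, valid_continuations, max_len=50):
    prefix = []
    for _ in range(max_len):
        # Tokens that keep the prefix inside the controlled sublanguage,
        # e.g. enumerated from a grammar over that sublanguage.
        allowed = valid_continuations(prefix)
        if not allowed:
            break
        scores = lm_next_scores(prefix)  # token -> log-probability from the LM
        prefix.append(max(allowed, key=lambda tok: scores.get(tok, float("-inf"))))
    return prefix  # canonical utterance, mapped to the meaning representation afterwards
```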
- Low-Resource Task-Oriented Semantic Parsing via Intrinsic Modeling [65.51280121472146]
We exploit what we intrinsically know about ontology labels to build efficient semantic parsing models.
Our model is highly efficient, as shown on a low-resource benchmark derived from TOPv2.
arXiv Detail & Related papers (2021-04-15T04:01:02Z)
- Iterative Utterance Segmentation for Neural Semantic Parsing [38.344720207846905]
We present a novel framework for boosting neural semantic parsers via iterative utterance segmentation.
One key advantage is that this framework does not require any handcrafted templates or additional labeled data for the segmenter.
On data that require compositional generalization, our framework brings significant accuracy gains: Geo 63.1 to 81.2, Formulas 59.7 to 72.7, ComplexWebQuestions 27.1 to 56.3.
arXiv Detail & Related papers (2020-12-13T09:46:24Z)
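The iterative segmentation loop from the entry above can be sketched as: split off a piece the base parser handles well, parse it, fold the result back in, and repeat until the utterance is covered. `segmenter`, `parser`, and `combine` below are hypothetical stand-ins for the paper's components:

```python
def parse_by_segmentation(utterance, segmenter, parser, combine, max_iters=10):
    remainder, partial = utterance, None
    for _ in range(max_iters):
        segment, remainder = segmenter(remainder)    # pick an easy-to-parse piece
        partial = combine(partial, parser(segment))  # fold its parse into the result
        if not remainder:
            break
    return partial  # logical form covering the whole utterance
```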
- Named Entity Recognition as Dependency Parsing [16.544333689188246]
We use graph-based dependency parsing to provide our model a global view on the input via a biaffine model.
We show that the model works well for both nested and flat NER through evaluation on 8 corpora, achieving SoTA performance on all of them, with accuracy gains of up to 2.2 percentage points.
arXiv Detail & Related papers (2020-05-14T17:11:41Z)
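The biaffine scoring from the entry above is what gives the model its global view: every (start, end, label) triple gets a score, so nested and flat entities fall out of the same span table. A hedged PyTorch sketch with illustrative dimensions (not the paper's configuration):

```python
import torch

class BiaffineSpanScorer(torch.nn.Module):
    def __init__(self, hidden=256, n_labels=5):
        super().__init__()
        self.start = torch.nn.Linear(hidden, hidden)  # start-of-span projection
        self.end = torch.nn.Linear(hidden, hidden)    # end-of-span projection
        # Biaffine tensor: one (hidden+1) x (hidden+1) matrix per entity label.
        self.U = torch.nn.Parameter(torch.zeros(n_labels, hidden + 1, hidden + 1))

    def forward(self, x):                             # x: (seq_len, hidden) encoder output
        ones = x.new_ones(x.size(0), 1)               # bias column for the biaffine product
        s = torch.cat([self.start(x), ones], dim=-1)  # (seq_len, hidden+1)
        e = torch.cat([self.end(x), ones], dim=-1)    # (seq_len, hidden+1)
        # scores[l, i, j]: label-l score for the span starting at i and ending at j.
        return torch.einsum("ih,lhk,jk->lij", s, self.U, e)
```

Decoding over this table (for example, greedily keeping the highest-scoring non-conflicting spans) handles nested and flat entities with the same machinery.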
This list is automatically generated from the titles and abstracts of the papers on this site.