Fast semantic parsing with well-typedness guarantees
- URL: http://arxiv.org/abs/2009.07365v2
- Date: Tue, 6 Oct 2020 14:49:04 GMT
- Title: Fast semantic parsing with well-typedness guarantees
- Authors: Matthias Lindemann, Jonas Groschwitz, Alexander Koller
- Abstract summary: AM dependency parsing is a principled method for neural semantic parsing with high accuracy across multiple graphbanks.
We describe an A* parser and a transition-based parser for AM dependency parsing, both of which guarantee well-typedness and improve parsing speed by up to 3 orders of magnitude.
- Score: 78.76675218975768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AM dependency parsing is a linguistically principled method for neural
semantic parsing with high accuracy across multiple graphbanks. It relies on a
type system that models semantic valency but makes existing parsers slow. We
describe an A* parser and a transition-based parser for AM dependency parsing
which guarantee well-typedness and improve parsing speed by up to 3 orders of
magnitude, while maintaining or improving accuracy.
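The abstract does not spell out the algorithms, but the central idea of guaranteeing well-typedness during search can be illustrated with a toy transition-style sketch: only actions whose result stays well-typed under a simplified valency check are ever enumerated. Everything below (the `Node` class, the single `apply` operation, the slot names) is a hypothetical simplification for illustration, not the paper's type system, its A* parser, or its transition system.

```python
# Illustrative sketch only: a toy parser state that filters candidate actions
# by a simplified valency-style type check before they would be scored.
# The real AM type system is far richer; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    index: int
    # argument slots this node still needs to fill, e.g. {"s", "o"}
    unfilled: set = field(default_factory=set)
    head: int = -1  # -1 means "no head assigned yet"

def apply_allowed(head: Node, dep: Node, slot: str) -> bool:
    """APPLY_slot(head, dep) keeps the analysis well-typed only if the head
    still requests `slot` and the dependent has no unfilled slots of its own."""
    return slot in head.unfilled and not dep.unfilled and dep.head == -1

def well_typed_actions(nodes):
    """Enumerate only the actions that preserve well-typedness; a neural model
    would score exactly this restricted set instead of all possible edges."""
    for head in nodes:
        for dep in nodes:
            if head is dep:
                continue
            for slot in sorted(head.unfilled):
                if apply_allowed(head, dep, slot):
                    yield (head.index, dep.index, slot)

# "The cat sleeps": the verb requests a subject, the noun requests nothing.
nodes = [Node(0, set()), Node(1, {"s"})]
print(list(well_typed_actions(nodes)))   # [(1, 0, 's')]
```

Restricting the action set in this way is what makes every reachable analysis well-typed by construction; how the paper achieves this efficiently for the full AM type system, and within A* search, is the actual contribution.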
Related papers
- Hexatagging: Projective Dependency Parsing as Tagging [63.5392760743851]
We introduce a novel dependency parser, the hexatagger, that constructs dependency trees by tagging the words in a sentence with elements from a finite set of possible tags.
Our approach is fully parallelizable at training time, i.e., the structure-building actions needed to build a dependency parse can be predicted in parallel to each other.
We achieve state-of-the-art performance of 96.4 LAS and 97.4 UAS on the Penn Treebank test set.
arXiv Detail & Related papers (2023-06-08T18:02:07Z)
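The hexatagger's actual tag set is not described here, so the sketch below uses a deliberately simpler, hypothetical encoding (signed offset to the head word) purely to illustrate parsing as tagging: every word gets exactly one tag, no tag depends on another, so a model can predict them all in parallel.

```python
# Not the hexatagger: a deliberately simple parsing-as-tagging illustration in
# which each word is tagged with the signed offset to its head (0 = root).
# Because no tag depends on any other tag, a model can predict all tags in
# parallel; the hexatagging paper achieves this with a compact, finite tag set.
def tree_to_tags(heads):
    """heads[i] is the 0-based index of word i's head, or -1 for the root."""
    return [0 if h == -1 else h - i for i, h in enumerate(heads)]

def tags_to_tree(tags):
    """Invert the encoding: recover the head index for every word."""
    return [-1 if t == 0 else i + t for i, t in enumerate(tags)]

# "the cat sleeps": 'the' -> 'cat', 'cat' -> 'sleeps', 'sleeps' is the root.
heads = [1, 2, -1]
tags = tree_to_tags(heads)          # [1, 1, 0]
assert tags_to_tree(tags) == heads
```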
- PIP: Parse-Instructed Prefix for Syntactically Controlled Paraphrase Generation [61.05254852400895]
Parse-Instructed Prefix (PIP) is a novel adaptation of prefix-tuning to tune large pre-trained language models.
In contrast to traditional fine-tuning methods for this task, PIP is a compute-efficient alternative with 10 times fewer learnable parameters.
arXiv Detail & Related papers (2023-05-26T07:42:38Z)
- TreePiece: Faster Semantic Parsing via Tree Tokenization [2.1685554819849613]
TreePiece tokenizes a parse tree into subtrees and generates one subtree per decoding step.
On the TopV2 benchmark, TreePiece decodes 4.6 times faster than standard autoregressive (AR) decoding.
arXiv Detail & Related papers (2023-03-30T05:44:44Z)
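As a rough illustration of subtree tokenization (this is not the TreePiece algorithm itself), the sketch below greedily segments a linearized TOP-style parse against a small, hand-picked subtree vocabulary; a decoder could then emit one piece per step instead of one symbol per step.

```python
# Toy illustration of subtree tokenization (not the actual TreePiece method):
# greedily segment a linearized parse into the longest fragments found in a
# subtree vocabulary, so an autoregressive decoder emits one fragment per step
# instead of one symbol per step.
def segment(symbols, vocab):
    """Greedy longest-match segmentation of a symbol sequence against `vocab`."""
    pieces, i = [], 0
    while i < len(symbols):
        for j in range(len(symbols), i, -1):          # try longest span first
            piece = tuple(symbols[i:j])
            if piece in vocab or j == i + 1:          # single symbols always allowed
                pieces.append(piece)
                i = j
                break
    return pieces

parse = "[IN:GET_WEATHER [SL:LOCATION boston ] ]".split()
vocab = {("[IN:GET_WEATHER", "[SL:LOCATION"), ("]", "]")}
print(segment(parse, vocab))
# 5 symbols collapse into 3 decoding steps:
# [('[IN:GET_WEATHER', '[SL:LOCATION'), ('boston',), (']', ']')]
```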
- Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models [5.893781742558463]
We examine two promising techniques, prefix tuning and bias-term tuning, specifically on semantic parsing.
We compare them against each other on two different semantic parsing datasets, and we also compare them against full and partial fine-tuning, both in few-shot and conventional data settings.
While prefix tuning is shown to do poorly for semantic parsing tasks off the shelf, we modify it by adding special token embeddings, which results in very strong performance without compromising parameter savings.
arXiv Detail & Related papers (2022-03-05T04:30:03Z)
- Learning compositional structures for semantic graph parsing [81.41592892863979]
We show how AM dependency parsing can be trained directly with a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
arXiv Detail & Related papers (2021-06-08T14:20:07Z)
- A Modest Pareto Optimisation Analysis of Dependency Parsers in 2021 [0.38073142980733]
We evaluate three leading dependency parser systems from different paradigms on a small yet diverse subset of languages.
As we are interested in efficiency, we evaluate the core parsers without pretrained language models.
Biaffine parsing emerges as a well-balanced default choice.
arXiv Detail & Related papers (2021-06-08T09:55:47Z)
- Dependency Parsing with Bottom-up Hierarchical Pointer Networks [0.7412445894287709]
Left-to-right and top-down transition-based algorithms are among the most accurate approaches for performing dependency parsing.
We propose two novel transition-based alternatives: an approach that parses a sentence in right-to-left order and a variant that does it from the outside in.
We empirically test the proposed neural architecture with the different algorithms on a wide variety of languages, outperforming the original approach in practically all of them.
arXiv Detail & Related papers (2021-05-20T09:10:42Z)
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation.
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
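The entry above hinges on restricting generation to a controlled sublanguage that can be mapped automatically onto the target meaning representation. A minimal sketch of that kind of constrained generation, assuming a toy canonical sublanguage and a trivial stand-in for the language model's scores (both hypothetical):

```python
# Minimal sketch of constrained generation (illustrative, not the paper's
# system): at every step the candidate continuations are limited to what a tiny
# canonical sublanguage allows, and a stand-in scoring function plays the role
# of the pretrained language model.
ALLOWED_NEXT = {
    "<bos>": ["list flights", "list hotels"],
    "list flights": ["from boston", "from denver"],
    "list hotels": ["in boston", "in denver"],
    "from boston": ["to denver", "<eos>"],
    "from denver": ["to boston", "<eos>"],
    "in boston": ["<eos>"],
    "in denver": ["<eos>"],
    "to denver": ["<eos>"],
    "to boston": ["<eos>"],
}

def lm_score(prefix, candidate):
    """Stand-in for a pretrained LM's score of `candidate` given `prefix`."""
    return -len(candidate)  # toy preference for shorter continuations

def constrained_decode(start="<bos>"):
    out, current = [], start
    while current != "<eos>":
        options = ALLOWED_NEXT.get(current, ["<eos>"])
        current = max(options, key=lambda c: lm_score(out, c))
        if current != "<eos>":
            out.append(current)
    return " ".join(out)

print(constrained_decode())   # "list hotels in boston"
```

Because every sentence the constrained decoder can produce belongs to the controlled sublanguage, mapping it onto a structured meaning representation can be done deterministically afterwards.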
- Don't Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding [10.002379593718471]
A successful parse transforms an input utterance into an action that is easily understood by the system.
For complex parsing tasks, the state-of-the-art method is based on autoregressive sequence-to-sequence models that generate the parse directly.
arXiv Detail & Related papers (2020-10-08T01:18:42Z)
- A Simple Global Neural Discourse Parser [61.728994693410954]
We propose a simple chart-based neural discourse parser that does not require any manually-crafted features and is based on learned span representations only.
We empirically demonstrate that our model achieves the best performance among global parsers, and performance comparable to state-of-the-art greedy parsers.
arXiv Detail & Related papers (2020-09-02T19:28:40Z)
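The "chart-based" global search mentioned in the last entry can be gestured at with a generic span-based chart parsing sketch; the dynamic program below finds the best-scoring binary bracketing under a hand-written span scorer that stands in for learned span representations. It is not the paper's discourse parser, which would operate over discourse units rather than words.

```python
# Generic span-based chart parsing sketch (not the paper's discourse parser):
# given a score for every span, find the binary tree over positions 0..n-1 that
# maximizes the total score of its spans, via a CKY-style dynamic program.
import functools

def best_tree(n, span_score):
    @functools.lru_cache(maxsize=None)
    def best(i, j):
        """Best total score and bracketing for the span [i, j)."""
        if j - i == 1:
            return span_score(i, j), (i, j)
        score, tree = max(
            (best(i, k)[0] + best(k, j)[0], (best(i, k)[1], best(k, j)[1]))
            for k in range(i + 1, j)
        )
        return score + span_score(i, j), tree
    return best(0, n)

# Hand-written scorer standing in for learned span representations + a scorer.
def toy_score(i, j):
    return 1.0 if (i, j) in {(0, 2), (2, 4), (0, 4)} else 0.0

print(best_tree(4, toy_score))
# (3.0, (((0, 1), (1, 2)), ((2, 3), (3, 4))))
```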