A Modest Pareto Optimisation Analysis of Dependency Parsers in 2021
- URL: http://arxiv.org/abs/2106.04216v2
- Date: Wed, 9 Jun 2021 09:48:21 GMT
- Title: A Modest Pareto Optimisation Analysis of Dependency Parsers in 2021
- Authors: Mark Anderson and Carlos Gómez-Rodríguez
- Abstract summary: We evaluate three leading dependency parsing systems from different paradigms on a small yet diverse subset of languages.
As we are interested in efficiency, we evaluate core parsers without pretrained language models.
Biaffine parsing emerges as a well-balanced default choice.
- Score: 0.38073142980733
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We evaluate three leading dependency parser systems from different paradigms
on a small yet diverse subset of languages in terms of their
accuracy-efficiency Pareto front. As we are interested in efficiency, we
evaluate core parsers without pretrained language models (as these are
typically huge networks and would constitute most of the compute time) or other
augmentations that can be transversally applied to any of them. Biaffine
parsing emerges as a well-balanced default choice, with sequence-labelling
parsing being preferable if inference speed (but not training energy cost) is
the priority.
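The abstract frames the comparison as an accuracy-efficiency Pareto front: a parser is on the front if no other parser is at least as good on both objectives and strictly better on one. A minimal sketch of that selection, with hypothetical parser names and illustrative (not paper-reported) accuracy and speed numbers:

```python
def pareto_front(points):
    """Return the points not dominated by any other point.

    Each point is (name, *objectives), higher is better for every
    objective. A point q dominates p if q is at least as good on all
    objectives and strictly better on at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(q[i] >= p[i] for i in range(1, len(p)))
            and any(q[i] > p[i] for i in range(1, len(p)))
            for q in points
            if q is not p
        )
        if not dominated:
            front.append(p)
    return front

# Illustrative (name, LAS accuracy, sentences/second) tuples;
# the numbers are invented for the sketch, not taken from the paper.
parsers = [
    ("biaffine", 93.0, 400),
    ("transition-based", 92.5, 300),
    ("sequence-labelling", 91.8, 900),
]
```

With these made-up numbers, the transition-based entry is dominated by the biaffine one (worse on both axes), while biaffine and sequence-labelling each win on one objective, mirroring the paper's framing of biaffine as a balanced default and sequence labelling as the fast option.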
Related papers
- ChatGPT is a Potential Zero-Shot Dependency Parser [5.726114645714751]
It remains an understudied question whether pre-trained language models can spontaneously exhibit the ability of dependency parsing without introducing additional structure in the zero-shot scenario.
In this paper, we propose to explore the dependency parsing ability of large language models such as ChatGPT and conduct linguistic analysis.
arXiv Detail & Related papers (2023-10-25T14:08:39Z)
- Optimal Transport Posterior Alignment for Cross-lingual Semantic Parsing [68.47787275021567]
Cross-lingual semantic parsing transfers parsing capability from a high-resource language (e.g., English) to low-resource languages with scarce training data.
We propose a new approach to cross-lingual semantic parsing by explicitly minimizing cross-lingual divergence between latent variables using Optimal Transport.
arXiv Detail & Related papers (2023-07-09T04:52:31Z)
- A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing [44.72809363746258]
This paper explores a strong baseline by integrating existing simple parsing strategies, top-down and bottom-up, with various transformer-based pre-trained language models.
The experimental results obtained from two benchmark datasets demonstrate that the parsing performance relies on the pretrained language models rather than the parsing strategies.
arXiv Detail & Related papers (2022-10-15T18:38:08Z)
- Compositional Generalization in Dependency Parsing [15.953482168182003]
Dependency parsing, however, lacks a compositional generalization benchmark.
We find that increasing compound divergence degrades dependency performance, although not as dramatically as semantic parsing performance.
We identify a number of syntactic structures that drive the dependency parser's lower performance on the most challenging splits.
arXiv Detail & Related papers (2021-10-13T16:32:24Z)
- Distributionally Robust Multilingual Machine Translation [94.51866646879337]
We propose a new learning objective for multilingual neural machine translation (MNMT) based on distributionally robust optimization.
We show how to practically optimize this objective for large translation corpora using an iterated best response scheme.
Our method consistently outperforms strong baseline methods in terms of average and per-language performance under both many-to-one and one-to-many translation settings.
arXiv Detail & Related papers (2021-09-09T03:48:35Z)
- X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing [51.81533991497547]
Task-oriented compositional semantic parsing (TCSP) handles complex nested user queries.
We present X2Parser, a transferable Cross-lingual and Cross-domain Parser for TCSP.
We propose to predict flattened intents and slots representations separately and cast both prediction tasks into sequence labeling problems.
arXiv Detail & Related papers (2021-06-07T16:40:05Z)
- Fast semantic parsing with well-typedness guarantees [78.76675218975768]
AM dependency parsing is a principled method for neural semantic parsing with high accuracy across multiple graphbanks.
We describe an A* parser and a transition-based parser for AM dependency parsing which guarantee well-typedness and improve parsing speed by up to 3 orders of magnitude.
arXiv Detail & Related papers (2020-09-15T21:54:01Z)
- Modeling Voting for System Combination in Machine Translation [92.09572642019145]
We propose an approach to modeling voting for system combination in machine translation.
Our approach combines the advantages of statistical and neural methods since it can not only analyze the relations between hypotheses but also allow for end-to-end training.
arXiv Detail & Related papers (2020-07-14T09:59:38Z)
- Towards Instance-Level Parser Selection for Cross-Lingual Transfer of Dependency Parsers [59.345145623931636]
We argue for a novel cross-lingual transfer paradigm: instance-level parser selection (ILPS).
We present a proof-of-concept study focused on instance-level selection in the framework of delexicalized transfer.
arXiv Detail & Related papers (2020-04-16T13:18:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.