A Truly Joint Neural Architecture for Segmentation and Parsing
- URL: http://arxiv.org/abs/2402.02564v2
- Date: Sat, 2 Mar 2024 16:33:32 GMT
- Title: A Truly Joint Neural Architecture for Segmentation and Parsing
- Authors: Danit Yshaayahu Levi and Reut Tsarfaty
- Abstract summary: Parsing performance on Morphologically Rich Languages (MRLs) is lower than on other languages.
Due to the high morphological complexity and ambiguity of the space-delimited input tokens, the linguistic units that act as nodes in the parse tree are not known in advance.
We introduce a joint neural architecture where a lattice-based representation preserving all morphological ambiguity of the input is provided to an arc-factored model, which then solves the morphological and syntactic parsing tasks at once.
- Score: 15.866519123942457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contemporary multilingual dependency parsers can parse a diverse set of
languages, but for Morphologically Rich Languages (MRLs), performance is
attested to be lower than for other languages. The key challenge is that, due to
high morphological complexity and ambiguity of the space-delimited input
tokens, the linguistic units that act as nodes in the tree are not known in
advance. Pre-neural dependency parsers for MRLs subscribed to the joint
morpho-syntactic hypothesis, stating that morphological segmentation and
syntactic parsing should be solved jointly, rather than as a pipeline where
segmentation precedes parsing. However, neural state-of-the-art parsers to date
use a strict pipeline. In this paper we introduce a joint neural architecture
where a lattice-based representation preserving all morphological ambiguity of
the input is provided to an arc-factored model, which then solves the
morphological segmentation and syntactic parsing tasks at once. Our experiments
on Hebrew, a rich and highly ambiguous MRL, demonstrate state-of-the-art
performance on parsing, tagging and segmentation of the Hebrew section of UD,
using a single model. The proposed architecture is LLM-based and
language-agnostic, providing a solid foundation for MRLs to obtain further
performance improvements and bridge the gap with other languages.
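To make the joint formulation concrete, here is a minimal Python sketch of lattice-based joint segmentation and parsing. Everything in it is an illustrative assumption rather than the paper's implementation: the two-path lattice for a Hebrew token, the hand-set arc scores, and the greedy head selection (the actual model scores arcs with a neural arc-factored scorer and decodes over the full lattice with a spanning-tree algorithm).

```python
# A minimal sketch of lattice-based joint segmentation and parsing.
# The lattice, arc scores, and greedy head choice below are illustrative
# assumptions; the paper's model scores arcs neurally and decodes the
# whole lattice with a proper spanning-tree algorithm.

# Toy lattice for the Hebrew token "hbyt": one path reads it as a single
# unit, the other as determiner + noun ("ha" = the, "bayit" = house).
# All analyses are kept; nothing is pruned before parsing.
LATTICE = [
    ["habayit"],
    ["ha", "bayit"],
]

# Toy arc scores: score(head, dependent), with a virtual ROOT head.
ARC_SCORE = {
    ("ROOT", "habayit"): 1.0,
    ("ROOT", "bayit"): 2.5,
    ("bayit", "ha"): 2.0,
    ("ROOT", "ha"): 0.1,
    ("ha", "bayit"): 0.2,
}

def best_joint_analysis(lattice):
    """Choose a segmentation (lattice path) and a head for every morpheme
    by maximizing summed arc scores. Greedy head choice keeps the sketch
    short; a real decoder also enforces tree constraints (no cycles)."""
    best = None
    for path in lattice:
        total, heads = 0.0, {}
        for dep in path:
            candidates = ["ROOT"] + [m for m in path if m != dep]
            head = max(candidates, key=lambda h: ARC_SCORE.get((h, dep), -1e9))
            heads[dep] = head
            total += ARC_SCORE.get((head, dep), -1e9)
        if best is None or total > best[0]:
            best = (total, path, heads)
    return best

score, segmentation, heads = best_joint_analysis(LATTICE)
print(segmentation, heads, score)
# The two-morpheme path wins because its arcs score higher jointly:
# segmentation is decided *by* parsing, not in a pipeline before it.
```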
Related papers
- Training Neural Networks as Recognizers of Formal Languages [87.06906286950438]
Formal language theory characterizes languages in terms of recognizers.
Yet it is common to instead evaluate neural networks on proxy tasks that resemble recognition only in an informal sense.
We correct this mismatch by training and evaluating neural networks directly as binary classifiers of strings.
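As a toy rendition of that framing (an assumed setup, not the paper's), the snippet below trains a mistake-driven linear classifier as a binary recognizer of the regular language (ab)*; a perceptron over bigram features stands in for the paper's neural networks.

```python
# Train a model directly as a binary recognizer of strings, here for
# the regular language (ab)*. The perceptron and bigram features are
# simplifying assumptions; the task framing is the point.

import random

def bigrams(s):
    s = "^" + s + "$"                       # add boundary markers
    return [s[i:i + 2] for i in range(len(s) - 1)]

def in_language(s):                         # gold recognizer for (ab)*
    return len(s) % 2 == 0 and all(s[i:i + 2] == "ab" for i in range(0, len(s), 2))

random.seed(0)
strings = ["ab" * random.randint(0, 5) for _ in range(50)]            # positives
strings += ["".join(random.choice("ab") for _ in range(random.randint(1, 10)))
            for _ in range(50)]                                       # mixed
data = [(s, in_language(s)) for s in strings]  # label every string exactly

weights = {}
for _ in range(20):                          # mistake-driven training
    for s, label in data:
        pred = sum(weights.get(b, 0.0) for b in bigrams(s)) > 0
        if pred != label:
            for b in bigrams(s):
                weights[b] = weights.get(b, 0.0) + (1.0 if label else -1.0)

for s in ["", "ab", "abab", "ba", "aab"]:    # accept/reject, as a recognizer
    print(repr(s), sum(weights.get(b, 0.0) for b in bigrams(s)) > 0)
```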
arXiv Detail & Related papers (2024-11-11T16:33:25Z)
- MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization [81.83460411131931]
In multilingual settings, non-Latin scripts and low-resource languages are usually disadvantaged in terms of language models' utility, efficiency, and cost.
We propose MAGNET, a multilingual adaptive gradient-based subword tokenization method that reduces over-segmentation for such scripts and languages.
arXiv Detail & Related papers (2024-07-11T18:59:21Z)
- MRL Parsing Without Tears: The Case of Hebrew [14.104766026682384]
In morphologically rich languages (MRLs), where multiple lexical units must be identified in each token, existing systems suffer in latency and setup complexity.
We present a new "flipped pipeline": decisions are made directly on the whole-token units by expert classifiers, each one dedicated to one specific task.
This blazingly fast approach sets a new SOTA in Hebrew POS tagging and dependency parsing, while also reaching near-SOTA performance on other Hebrew tasks.
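A minimal sketch of the flipped-pipeline idea follows; the features and rules are toy stand-ins invented for illustration, not the paper's trained classifiers.

```python
# A toy sketch of the "flipped pipeline": independent experts each decide
# their own task directly on whole-token units, so no task waits on
# another's output and there is no segmentation step up front.

def token_features(tokens, i):
    """Stand-in for a contextualized whole-token representation."""
    return {"form": tokens[i], "prev": tokens[i - 1] if i else "<s>"}

def pos_expert(feats):
    # Toy rule in place of a trained whole-token POS classifier.
    return "NOUN" if feats["form"].startswith("ha") else "ADJ"

def seg_expert(feats):
    # Toy rule in place of a trained whole-token segmenter:
    # split off a leading definite article "ha-".
    form = feats["form"]
    return [form[:2], form[2:]] if form.startswith("ha") else [form]

tokens = ["habayit", "gadol"]   # Hebrew: roughly "the house (is) big"
for i, tok in enumerate(tokens):
    feats = token_features(tokens, i)
    # Experts run independently on the same features; removing the
    # pipeline ordering is what removes the latency bottleneck.
    print(tok, pos_expert(feats), seg_expert(feats))
```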
arXiv Detail & Related papers (2024-03-11T17:54:33Z)
- On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex [48.588772371355816]
This paper presents the first empirical study on the adversarial robustness of a large prompt-based language model of code, Codex.
Our results demonstrate that the state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples.
arXiv Detail & Related papers (2023-01-30T13:21:00Z)
- Demystifying Neural Language Models' Insensitivity to Word-Order [7.72780997900827]
We investigate the insensitivity of neural language models to word order by quantifying the effect of order perturbations.
We find that neural language models rely on the local ordering of tokens more than on their global ordering.
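The sketch below shows the kind of perturbation contrast involved (an assumed setup, not the paper's exact metrics); to measure reliance, one would compare a language model's score drop on each variant, and the scoring function is deliberately left out.

```python
# Two kinds of word-order perturbation: local (within small windows,
# global arrangement preserved) versus global (whole sentence shuffled).

import random

def shuffle_local(tokens, window=2, seed=0):
    """Disturb only local order: shuffle inside fixed-size windows,
    keeping the global arrangement of the sentence intact."""
    rng = random.Random(seed)
    out = []
    for i in range(0, len(tokens), window):
        chunk = tokens[i:i + window]
        rng.shuffle(chunk)
        out.extend(chunk)
    return out

def shuffle_global(tokens, seed=0):
    """Disturb global order: shuffle the entire sentence."""
    rng = random.Random(seed)
    out = list(tokens)
    rng.shuffle(out)
    return out

sent = "the old man sat by the open door".split()
print(shuffle_local(sent))   # e.g. neighbors swapped, shape intact
print(shuffle_global(sent))  # everything moved
# Per the summary above, model quality degrades more when the *local*
# ordering is damaged than when only the global arrangement changes.
```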
arXiv Detail & Related papers (2021-07-29T13:34:20Z)
- Constrained Language Models Yield Few-Shot Semantic Parsers [73.50960967598654]
We explore the use of large pretrained language models as few-shot semantic parsers.
The goal in semantic parsing is to generate a structured meaning representation given a natural language input.
We use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation.
arXiv Detail & Related papers (2021-04-18T08:13:06Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
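As a rough sketch of that mechanism (the layer shape, the toy graph, and the random weights are assumptions, not the paper's model), a graph-convolutional layer mixes each token's vector with its neighbors along semantic arcs before the task head sees them:

```python
# One graph-convolution step over a semantic parse (toy example).
import numpy as np

def gcn_layer(H, A, W):
    """Average each node with its graph neighbors (A includes self-loops),
    then project and apply ReLU."""
    A_hat = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
n_tokens, dim = 4, 8
H = rng.normal(size=(n_tokens, dim))           # e.g. pretrained token vectors
A = np.eye(n_tokens)                           # self-loops for every token
A[0, 2] = A[2, 0] = 1.0                        # one semantic arc, e.g. an ARG0 edge
W = rng.normal(size=(dim, dim))                # learned projection (random here)

H_graph = gcn_layer(H, A, W)                   # semantics-infused features
print(H_graph.shape)                           # (4, 8): ready for the task head
```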
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Do Neural Language Models Show Preferences for Syntactic Formalisms? [14.388237635684737]
We study the extent to which the semblance of syntactic structure captured by language models adheres to a surface-syntactic or deep syntactic style of analysis.
We apply a probe for extracting directed dependency trees to BERT and ELMo models trained on 13 different languages.
We find that both models exhibit a preference for UD (Universal Dependencies) over SUD (Surface-syntactic Universal Dependencies), with interesting variations across languages and layers.
arXiv Detail & Related papers (2020-04-29T11:37:53Z)
- A Hybrid Approach to Dependency Parsing: Combining Rules and Morphology with Deep Learning [0.0]
We propose two approaches to dependency parsing, aimed especially at languages with a restricted amount of training data.
Our first approach combines a state-of-the-art deep learning-based parser with a rule-based approach, and the second incorporates morphological information into the network.
The proposed methods are developed for Turkish, but can be adapted to other languages as well.
arXiv Detail & Related papers (2020-02-24T08:34:33Z)
- A Simple Joint Model for Improved Contextual Neural Lemmatization [60.802451210656805]
We present a simple joint neural model for lemmatization and morphological tagging that achieves state-of-the-art results on 20 languages.
The paper describes the model along with its training and decoding procedures.
arXiv Detail & Related papers (2019-04-04T02:03:19Z)
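To illustrate the joint formulation in the last entry (the candidates and scores below are invented for the sketch; the paper uses a neural model), a single decision selects a (tag, lemma) pair, letting tagging and lemmatization disambiguate each other:

```python
# A toy sketch of joint lemmatization and morphological tagging:
# one decision, two outputs, instead of two pipelined steps.

CANDIDATES = {
    # word form -> possible (POS tag, lemma) analyses
    "walks": [("VERB", "walk"), ("NOUN", "walk")],
    "left":  [("VERB", "leave"), ("ADJ", "left")],
}

# Toy joint scores conditioned on the previous tag; a neural model
# would compute these from a shared contextual encoder.
JOINT_SCORE = {
    ("PRON", "VERB"): 2.0,
    ("PRON", "NOUN"): 0.3,
    ("DET", "NOUN"): 2.0,
    ("DET", "ADJ"): 1.5,
    ("DET", "VERB"): 0.1,
}

def analyze(prev_tag, form):
    """Pick the (tag, lemma) pair jointly: one decision, two outputs."""
    return max(CANDIDATES[form],
               key=lambda tl: JOINT_SCORE.get((prev_tag, tl[0]), 0.0))

print(analyze("PRON", "walks"))  # ('VERB', 'walk'), as in "she walks"
print(analyze("DET", "left"))    # ('ADJ', 'left'), as in "the left door"
```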