Cross-lingual Transfer Learning for Javanese Dependency Parsing
- URL: http://arxiv.org/abs/2401.12072v1
- Date: Mon, 22 Jan 2024 16:13:45 GMT
- Title: Cross-lingual Transfer Learning for Javanese Dependency Parsing
- Authors: Fadli Aulawi Al Ghiffari, Ika Alfina, Kurniawati Azizah
- Abstract summary: This study focuses on assessing the efficacy of transfer learning in enhancing dependency parsing for Javanese.
We utilize the Universal Dependencies dataset consisting of dependency treebanks from more than 100 languages, including Javanese.
- Score: 0.20537467311538835
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: While structure learning achieves remarkable performance in high-resource
languages, the situation differs for under-represented languages due to the
scarcity of annotated data. This study focuses on assessing the efficacy of
transfer learning in enhancing dependency parsing for Javanese, a language
spoken by 80 million individuals but characterized by limited representation in
natural language processing. We utilized the Universal Dependencies dataset
consisting of dependency treebanks from more than 100 languages, including
Javanese. We propose two learning strategies to train the model: transfer
learning (TL) and hierarchical transfer learning (HTL). While TL only uses a
source language to pre-train the model, the HTL method uses a source language
and an intermediate language in the learning process. The results show that our
best model uses the HTL method, which improves performance with an increase of
10% for both UAS and LAS evaluations compared to the baseline model.
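For concreteness, here is a minimal sketch of the two strategies and of the UAS/LAS metrics; the `Parser` class and `load_treebank` helper are hypothetical stand-ins, not the authors' implementation, and a real setup would train a neural parser on UD treebanks.

```python
"""Minimal sketch of the TL and HTL strategies and the UAS/LAS metrics.
Parser and load_treebank are hypothetical stand-ins, not the authors'
code; a real setup would train a neural parser on UD treebanks."""

class Parser:
    """Toy trainable dependency parser (stub)."""
    def fit(self, treebank):
        print(f"training on {treebank['lang']} ({len(treebank['sents'])} sents)")

    def predict(self, sentence):
        return [(0, "root") for _ in sentence]  # stub: attach everything to root

def load_treebank(lang):
    """Stand-in for loading a Universal Dependencies treebank."""
    return {"lang": lang, "sents": [["a", "toy", "sentence"]]}

def train_tl(source, target="jv"):
    """TL: pre-train on one source language, then fine-tune on Javanese."""
    parser = Parser()
    parser.fit(load_treebank(source))  # pre-training stage
    parser.fit(load_treebank(target))  # fine-tuning on Javanese
    return parser

def train_htl(source, intermediate, target="jv"):
    """HTL: add an intermediate language between source and target."""
    parser = Parser()
    for lang in (source, intermediate, target):
        parser.fit(load_treebank(lang))
    return parser

def uas_las(gold, pred):
    """UAS: fraction of tokens with the correct head;
    LAS: correct head AND correct dependency label."""
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n
    las = sum(g == p for g, p in zip(gold, pred)) / n
    return uas, las

htl_parser = train_htl("fr", "id", "jv")  # language choices are illustrative
```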
Related papers
- Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages [60.162717568496355]
Large language models (LLMs) have been pre-trained on multilingual corpora.
Their performance in most languages still lags behind that of a few resource-rich languages.
arXiv Detail & Related papers (2024-02-19T15:07:32Z)
- PLUG: Leveraging Pivot Language in Cross-Lingual Instruction Tuning [46.153828074152436]
We propose a pivot language guided generation approach to enhance instruction tuning in lower-resource languages.
It trains the model to first process instructions in the pivot language, and then produce responses in the target language.
Our approach demonstrates a significant improvement in the instruction-following abilities of LLMs by 29% on average.
arXiv Detail & Related papers (2023-11-15T05:28:07Z)
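To make the PLUG recipe above concrete, here is a hedged sketch of how a pivot-guided training example might be assembled; the prompt template and field names are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical construction of a PLUG-style training example. The model
# is supervised to first restate the instruction in the pivot language
# and then respond in the target language. The template below is an
# illustrative assumption, not the paper's exact format.

def make_plug_example(instruction_tgt, instruction_pivot, response_tgt,
                      pivot="English", target="Javanese"):
    prompt = (
        f"Instruction ({target}): {instruction_tgt}\n"
        f"First restate the instruction in {pivot}, then respond in {target}."
    )
    # Supervision covers both steps: the pivot-language restatement,
    # then the target-language response.
    completion = (
        f"{pivot}: {instruction_pivot}\n"
        f"{target}: {response_tgt}"
    )
    return {"prompt": prompt, "completion": completion}

example = make_plug_example(
    instruction_tgt="<instruction written in the target language>",
    instruction_pivot="<the same instruction in the pivot language>",
    response_tgt="<response written in the target language>",
)
print(example["prompt"], example["completion"], sep="\n---\n")
```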
- Analysing Cross-Lingual Transfer in Low-Resourced African Named Entity Recognition [0.10641561702689348]
We investigate the properties of cross-lingual transfer learning between ten low-resourced languages.
We find that models that perform well on a single language often do so at the expense of generalising to others.
The amount of data overlap between the source and target datasets is a better predictor of transfer performance than either the geographical or genetic distance between the languages.
arXiv Detail & Related papers (2023-09-11T08:56:47Z)
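Since the entry above finds that data overlap predicts transfer better than geographic or genetic distance, a simple overlap diagnostic can be sketched as follows; Jaccard similarity over token types is an assumed proxy, as the paper's exact measure is not given in the summary.

```python
# Sketch of a data-overlap diagnostic between a source and a target
# dataset. Jaccard similarity over token types is an illustrative
# choice; the paper's exact overlap measure may differ.

def vocab_overlap(source_sents, target_sents):
    src_vocab = {tok for sent in source_sents for tok in sent}
    tgt_vocab = {tok for sent in target_sents for tok in sent}
    union = src_vocab | tgt_vocab
    return len(src_vocab & tgt_vocab) / len(union) if union else 0.0

source = [["the", "ANC", "met", "in", "Johannesburg"]]
target = [["ANC", "leaders", "in", "Soweto"]]
print(f"type-level Jaccard overlap: {vocab_overlap(source, target):.3f}")
```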
- Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z)
- An Open Dataset and Model for Language Identification [84.15194457400253]
We present a LID model which achieves a macro-average F1 score of 0.93 and a false positive rate of 0.033 across 201 languages.
We make both the model and the dataset available to the research community.
arXiv Detail & Related papers (2023-05-23T08:43:42Z)
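For reference, the two metrics reported by the LID entry above can be computed as follows; this is standard metric code, not the paper's evaluation script.

```python
# Macro-averaged F1 and false positive rate, the two metrics reported
# by the LID entry above. Standard definitions, not the paper's script.

def macro_f1_and_fpr(gold, pred, labels):
    f1s, fprs = [], []
    for lang in labels:
        tp = sum(g == lang and p == lang for g, p in zip(gold, pred))
        fp = sum(g != lang and p == lang for g, p in zip(gold, pred))
        fn = sum(g == lang and p != lang for g, p in zip(gold, pred))
        tn = len(gold) - tp - fp - fn
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        fprs.append(fp / (fp + tn) if fp + tn else 0.0)
    return sum(f1s) / len(f1s), sum(fprs) / len(fprs)

gold = ["jv", "id", "en", "jv"]
pred = ["jv", "id", "jv", "jv"]
print(macro_f1_and_fpr(gold, pred, labels=["jv", "id", "en"]))
```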
- Transfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese [54.00582760714034]
Cross-lingual NLP transfer can be improved by exploiting data and models of high-resource languages.
We release a new web corpus of Faroese and Faroese datasets for named entity recognition (NER), semantic text similarity (STS) and new language models trained on all Scandinavian languages.
arXiv Detail & Related papers (2023-04-18T08:42:38Z)
- Meta-Learning a Cross-lingual Manifold for Semantic Parsing [75.26271012018861]
Localizing a semantic parser to support new languages requires effective cross-lingual generalization.
We introduce a first-order meta-learning algorithm to train a semantic parser with maximal sample efficiency during cross-lingual transfer.
Results across six languages on ATIS demonstrate that our combination of steps yields accurate semantic parsers sampling $\le$10% of source training data in each new language.
arXiv Detail & Related papers (2022-09-26T10:42:17Z)
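The "first-order meta-learning algorithm" in the semantic parsing entry above can be pictured with a Reptile-style update; choosing Reptile specifically is an assumption, since the summary only says "first-order".

```python
import numpy as np

# Reptile-style first-order meta-update over languages: an illustrative
# stand-in for the paper's algorithm, which the summary above describes
# only as "first-order meta-learning".

def inner_adapt(theta, batch, lr=0.1, steps=3):
    """A few SGD steps on one language's data (toy quadratic loss)."""
    x, y = batch
    for _ in range(steps):
        grad = 2 * x * (x * theta - y)   # d/dtheta of (x*theta - y)^2
        theta = theta - lr * grad
    return theta

def reptile(languages, meta_lr=0.5, meta_steps=100):
    theta = np.zeros(1)
    rng = np.random.default_rng(0)
    for _ in range(meta_steps):
        batch = languages[rng.integers(len(languages))]
        adapted = inner_adapt(theta, batch)
        # First-order meta-update: move toward the adapted parameters,
        # never differentiating through the inner loop.
        theta = theta + meta_lr * (adapted - theta)
    return theta

langs = [(np.array([1.0]), np.array([2.0])), (np.array([1.0]), np.array([3.0]))]
print(reptile(langs))  # settles between the per-language optima (2 and 3)
```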
- Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning [5.865807597752895]
We adopt a method from multi-task learning, which relies on automated curriculum learning, to dynamically optimize for parsing performance on outlier languages.
We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.
arXiv Detail & Related papers (2022-03-16T11:33:20Z)
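A worst-case aware sampler in the spirit of the zero-shot parsing entry above might look like the sketch below, where languages with higher current loss are sampled more often; the multiplicative-weights form is an assumed instantiation of automated curriculum learning, not the paper's exact rule.

```python
import math, random

# Worst-case aware curriculum sampling (illustrative). Languages whose
# current dev loss is highest receive higher sampling probability,
# unlike uniform or size-proportional sampling.

def update_sampling_probs(losses, temperature=1.0):
    weights = [math.exp(l / temperature) for l in losses.values()]
    total = sum(weights)
    return {lang: w / total for lang, w in zip(losses, weights)}

def sample_language(probs, rng=random):
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

dev_losses = {"ar": 0.9, "de": 0.4, "jv": 1.6, "zh": 0.7}  # toy numbers
probs = update_sampling_probs(dev_losses)
print(probs)                  # the outlier language 'jv' dominates
print(sample_language(probs))
```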
- Learning Natural Language Generation from Scratch [25.984828046001013]
This paper introduces TRUncated ReinForcement Learning for Language (TrufLL), an original approach to train conditional language models from scratch using only reinforcement learning (RL).
arXiv Detail & Related papers (2021-09-20T08:46:51Z)
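The TrufLL entry above trains a language model purely with RL, where the main obstacle is the vocabulary-sized action space. Below is a toy REINFORCE step over a truncated action set; reading "truncated" as top-k filtering by an auxiliary model is an assumption.

```python
import numpy as np

# Toy REINFORCE step with a truncated action space, in the spirit of
# TrufLL as summarized above. Truncating to the top-k tokens scored by
# an auxiliary model is an illustrative assumption.

rng = np.random.default_rng(0)
VOCAB, K = 1000, 10

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

logits = rng.normal(size=VOCAB)        # policy scores over the full vocab
aux_scores = rng.normal(size=VOCAB)    # auxiliary model scores (stand-in)
allowed = np.argsort(aux_scores)[-K:]  # truncated action set: top-K tokens

probs = softmax(logits[allowed])       # policy restricted to K actions
action_idx = rng.choice(K, p=probs)
token = allowed[action_idx]
reward = 1.0 if token % 2 == 0 else 0.0  # toy reward signal

# REINFORCE: gradient of log pi(action) w.r.t. logits is onehot - probs.
grad = -probs
grad[action_idx] += 1.0
logits[allowed] += 0.1 * reward * grad   # one policy-gradient step
print(f"sampled token id {token}, reward {reward}")
```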
- Cross-Lingual Adaptation for Type Inference [29.234418962960905]
We propose a cross-lingual adaptation framework, PLATO, to transfer a deep learning-based type inference procedure across weakly typed languages.
By leveraging data from strongly typed languages, PLATO improves the perplexity of the backbone cross-programming-language model.
arXiv Detail & Related papers (2021-07-01T00:20:24Z)
- Cross-lingual Machine Reading Comprehension with Language Branch Knowledge Distillation [105.41167108465085]
Cross-lingual Machine Reading Comprehension (CLMRC) remains a challenging problem due to the lack of large-scale datasets in low-resource languages.
We propose a novel augmentation approach named Language Branch Machine Reading Comprehension (LBMRC).
LBMRC trains multiple machine reading comprehension (MRC) models, each proficient in an individual language.
We devise a multilingual distillation approach to amalgamate knowledge from the multiple language branch models into a single model for all target languages.
arXiv Detail & Related papers (2020-10-27T13:12:17Z)
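The distillation step in the LBMRC entry above can be sketched as combining the language-branch teachers' output distributions into one soft target for a single student; uniform averaging and a KL objective are assumptions about the exact amalgamation.

```python
import numpy as np

# Sketch of amalgamating several language-branch teacher models into
# one student via distillation. Uniform teacher averaging and a KL loss
# are illustrative assumptions about the paper's exact scheme.

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_target(teacher_logits, temperature=2.0):
    """Average the temperature-softened output distributions of all
    language-branch teachers into one soft target."""
    soft = [softmax(l / temperature) for l in teacher_logits]
    return np.mean(soft, axis=0)

def kl_loss(student_logits, target, temperature=2.0):
    """KL(target || student) over the softened distributions."""
    p = softmax(student_logits / temperature)
    return float(np.sum(target * (np.log(target) - np.log(p))))

rng = np.random.default_rng(0)
teachers = [rng.normal(size=5) for _ in range(3)]  # 3 branch teachers
student = rng.normal(size=5)                       # single student model
target = distillation_target(teachers)
print(f"distillation KL loss: {kl_loss(student, target):.4f}")
```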
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.