Adapting Definition Modeling for New Languages: A Case Study on Belarusian
- URL: http://arxiv.org/abs/2507.09536v1
- Date: Sun, 13 Jul 2025 08:35:23 GMT
- Title: Adapting Definition Modeling for New Languages: A Case Study on Belarusian
- Authors: Daniela Kazakouskaya, Timothee Mickus, Janine Siewert
- Abstract summary: We propose a novel dataset of 43,150 definitions in Belarusian. Our experiments demonstrate that adapting a definition modeling system requires minimal amounts of data, but that there are currently gaps in what automatic metrics capture.
- Score: 2.2120851074630177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Definition modeling, the task of generating new definitions for words in context, holds great promise as a means to assist the work of lexicographers in documenting a broader variety of lects and languages, yet much remains to be done to assess how we can leverage pre-existing models for as-yet unsupported languages. In this work, we focus on adapting existing models to Belarusian, for which we propose a novel dataset of 43,150 definitions. Our experiments demonstrate that adapting a definition modeling system requires minimal amounts of data, but that there are currently gaps in what automatic metrics capture.
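As a rough illustration of the task framing only (a sketch under assumptions, not the authors' actual pipeline), definition modeling is commonly cast as sequence-to-sequence generation: the input encodes the target word together with an example sentence, and the output is a dictionary-style gloss. The checkpoint name, prompt template, and example below are hypothetical, and the base model only yields useful glosses after fine-tuning on (word, context, definition) triples such as those in the proposed dataset.

```python
# Sketch: definition modeling as seq2seq generation (assumed setup, not the paper's).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-small"  # assumed multilingual seq2seq backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def define(word: str, context: str) -> str:
    # Encode the target word and its example sentence as a single input string;
    # the exact template is an illustrative assumption.
    prompt = f"What is the definition of {word}? Context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example call with a Belarusian headword; output quality presupposes fine-tuning.
print(define("мова", "Беларуская мова мае доўгую пісьмовую традыцыю."))
```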
Related papers
- Linguistic Interpretability of Transformer-based Language Models: a systematic review [1.3194391758295114]
Language models based on the Transformer architecture achieve excellent results in many language-related tasks. However, little is known about how their internal computations help them achieve these results. A line of research -- 'interpretability' -- aims to learn how information is encoded inside these models.
arXiv Detail & Related papers (2025-04-09T08:00:12Z)
- Small Language Models Also Work With Small Vocabularies: Probing the Linguistic Abilities of Grapheme- and Phoneme-Based Baby Llamas [7.585433383340306]
We show that tokenization-free, phoneme- and grapheme-based language models can achieve strong linguistic performance. Our findings suggest a promising direction for creating more linguistically plausible language models.
arXiv Detail & Related papers (2024-10-02T12:36:08Z)
- Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure [66.33623392497599]
We show that a structure called template-content structure (T-C structure) can reduce the possible space from exponential to linear.
We demonstrate that models can achieve task composition, further reducing the space needed to learn from linear to logarithmic.
arXiv Detail & Related papers (2023-10-09T06:57:45Z)
- DeepStruct: Pretraining of Language Models for Structure Prediction [64.84144849119554]
We pretrain language models on a collection of task-agnostic corpora to generate structures from text.
Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks.
We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets.
arXiv Detail & Related papers (2022-05-21T00:58:22Z)
- Analyzing the Limits of Self-Supervision in Handling Bias in Language [52.26068057260399]
We evaluate how well language models capture the semantics of four tasks for bias: diagnosis, identification, extraction and rephrasing.
Our analyses indicate that language models are capable of performing these tasks to widely varying degrees across different bias dimensions, such as gender and political affiliation.
arXiv Detail & Related papers (2021-12-16T05:36:08Z)
- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing [78.8500633981247]
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".
Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly (see the sketch after this entry).
arXiv Detail & Related papers (2021-07-28T18:09:46Z)
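To make the contrast drawn in the survey above concrete, here is a minimal cloze-style sketch: instead of training a classifier for P(y|x), the input is wrapped in a prompt with a masked slot and the label is read off from which "verbalizer" word the masked language model prefers. The checkpoint, prompt template, and verbalizer words are illustrative assumptions, not the survey's own code.

```python
# Sketch: prompt-based (cloze) classification with a masked language model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def sentiment(text: str) -> str:
    # Wrap the input in a prompt and let the LM fill the masked slot.
    prompt = f"{text} Overall, it was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Map label words ("verbalizers") back to task labels.
    verbalizers = {"good": "positive", "bad": "negative"}
    scores = {w: logits[tokenizer.convert_tokens_to_ids(w)].item() for w in verbalizers}
    return verbalizers[max(scores, key=scores.get)]

print(sentiment("The film was a delight from start to finish."))
```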
- Specializing Multilingual Language Models: An Empirical Study [50.7526245872855]
Contextualized word representations from pretrained multilingual language models have become the de facto standard for addressing natural language tasks.
For languages rarely or never seen by these models, directly using such models often results in suboptimal representation or use of data.
arXiv Detail & Related papers (2021-06-16T18:13:55Z)
- Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model [58.27176041092891]
Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements.
We propose a novel unsupervised feature decomposition method that can automatically extract domain-specific features from the entangled pretrained cross-lingual representations.
Our proposed model leverages mutual information estimation to decompose the representations computed by a cross-lingual model into domain-invariant and domain-specific parts (see the sketch after this entry).
arXiv Detail & Related papers (2020-11-23T16:00:42Z)
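The decomposition described in the entry above can be caricatured with a generic MINE-style (Donsker-Varadhan) mutual-information estimate between two projected parts of a pretrained representation; this is a stand-in sketch under assumptions, not the paper's architecture or its estimator.

```python
# Sketch: split a representation into two parts and estimate their mutual
# information with a MINE-style critic (generic stand-in, not the paper's method).
import math
import torch
import torch.nn as nn

class DecomposeWithMI(nn.Module):
    def __init__(self, dim: int, part_dim: int):
        super().__init__()
        self.to_invariant = nn.Linear(dim, part_dim)   # intended domain-invariant part
        self.to_specific = nn.Linear(dim, part_dim)    # intended domain-specific part
        self.critic = nn.Sequential(                   # scores (invariant, specific) pairs
            nn.Linear(2 * part_dim, part_dim), nn.ReLU(), nn.Linear(part_dim, 1)
        )

    def forward(self, h: torch.Tensor):
        inv, spec = self.to_invariant(h), self.to_specific(h)
        joint = self.critic(torch.cat([inv, spec], dim=-1)).squeeze(-1)
        shuffled = spec[torch.randperm(spec.size(0))]  # break the pairing for "marginal" samples
        marginal = self.critic(torch.cat([inv, shuffled], dim=-1)).squeeze(-1)
        # Donsker-Varadhan lower bound on I(invariant; specific): the critic would be
        # trained to maximize it, while the projections minimize it to push the
        # two parts toward independence.
        mi_estimate = joint.mean() - (torch.logsumexp(marginal, 0) - math.log(marginal.numel()))
        return inv, spec, mi_estimate

# Example with made-up sizes: a batch of 16 pretrained 768-d representations.
inv, spec, mi = DecomposeWithMI(dim=768, part_dim=128)(torch.randn(16, 768))
print(inv.shape, spec.shape, float(mi))
```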
- Toward Cross-Lingual Definition Generation for Language Learners [10.45755551957024]
We propose to generate definitions in English for words in various languages.
Models can be directly applied to other languages after being trained on the English dataset.
Experiments and manual analyses show that our models have a strong cross-lingual transfer ability.
arXiv Detail & Related papers (2020-10-12T08:45:28Z)
- Grounded Compositional Outputs for Adaptive Language Modeling [59.02706635250856]
A language model's vocabulary, typically selected before training and permanently fixed later, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary (see the sketch after this entry).
arXiv Detail & Related papers (2020-09-24T07:21:14Z)
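As a rough sketch of the vocabulary-independent idea in the entry above (an assumed composition function, not the paper's), output word embeddings can be built on the fly from character embeddings, so the output layer has no per-word parameters and new words can be scored without growing the model.

```python
# Sketch: output embeddings composed from characters (illustrative stand-in).
import torch
import torch.nn as nn

class CompositionalOutputEmbedding(nn.Module):
    def __init__(self, n_chars: int, char_dim: int, hidden_dim: int):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.composer = nn.GRU(char_dim, hidden_dim, batch_first=True)

    def word_embeddings(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (n_candidates, max_chars) character indices of candidate words.
        _, h = self.composer(self.char_emb(char_ids))
        return h.squeeze(0)  # (n_candidates, hidden_dim)

    def logits(self, hidden: torch.Tensor, char_ids: torch.Tensor) -> torch.Tensor:
        # Score candidates against the language model's hidden state by dot product.
        return hidden @ self.word_embeddings(char_ids).t()

# Made-up sizes: 5 candidate words, spelled with at most 8 characters each.
layer = CompositionalOutputEmbedding(n_chars=100, char_dim=16, hidden_dim=32)
scores = layer.logits(torch.randn(1, 32), torch.randint(1, 100, (5, 8)))
print(scores.shape)  # torch.Size([1, 5])
```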
- Evaluating a Multi-sense Definition Generation Model for Multiple Languages [1.5229257192293197]
We propose a context-agnostic approach to definition modeling, based on multi-sense word embeddings.
Our results demonstrate that our proposed multi-sense model outperforms a single-sense model on all fifteen datasets.
arXiv Detail & Related papers (2020-06-12T18:15:59Z)