Read it in Two Steps: Translating Extremely Low-Resource Languages with Code-Augmented Grammar Books
- URL: http://arxiv.org/abs/2506.01796v1
- Date: Mon, 02 Jun 2025 15:36:37 GMT
- Title: Read it in Two Steps: Translating Extremely Low-Resource Languages with Code-Augmented Grammar Books
- Authors: Chen Zhang, Jiuheng Lin, Xiao Liu, Zekai Zhang, Yansong Feng
- Abstract summary: This paper investigates the role of grammar books in translating extremely low-resource languages by decomposing it into two key steps: grammar rule retrieval and application. Our experiments show that using code rules significantly boosts both rule retrieval and application, ultimately resulting in a 13.1% BLEU improvement in translation.
- Score: 30.608065466078227
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While large language models (LLMs) have shown promise in translating extremely low-resource languages using resources like dictionaries, the effectiveness of grammar books remains debated. This paper investigates the role of grammar books in translating extremely low-resource languages by decomposing it into two key steps: grammar rule retrieval and application. To facilitate the study, we introduce ZhuangRules, a modularized dataset of grammar rules and their corresponding test sentences. Our analysis reveals that rule retrieval constitutes a primary bottleneck in grammar-based translation. Moreover, although LLMs can apply simple rules for translation when explicitly provided, they encounter difficulties in handling more complex rules. To address these challenges, we propose representing grammar rules as code functions, considering their similarities in structure and the benefit of code in facilitating LLM reasoning. Our experiments show that using code rules significantly boosts both rule retrieval and application, ultimately resulting in a 13.1% BLEU improvement in translation.
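To make the paper's central proposal concrete, below is a minimal sketch of what a grammar rule written as a code function might look like. The rule content (noun-modifier order), the function name, and the word representation are illustrative assumptions, not the actual ZhuangRules format.

```python
# Hypothetical sketch of a grammar rule as a code function, in the
# spirit of the paper's code-augmented rules. The rule (modifiers
# follow the head noun) and all names are illustrative; they are not
# taken from the ZhuangRules dataset.

def apply_noun_modifier_order(words: list[dict]) -> list[dict]:
    """Reorder English-style [ADJ, NOUN] pairs into noun-first order.

    Assumes each word is a dict with 'text' and 'pos' keys produced
    by an upstream tagger.
    """
    out, i = [], 0
    while i < len(words):
        if (i + 1 < len(words)
                and words[i]["pos"] == "ADJ"
                and words[i + 1]["pos"] == "NOUN"):
            out.extend([words[i + 1], words[i]])  # noun before modifier
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out

# "red flower" -> "flower red" word order before lexical transfer
print(apply_noun_modifier_order(
    [{"text": "red", "pos": "ADJ"}, {"text": "flower", "pos": "NOUN"}]))
```

Expressing a rule this way gives the LLM explicit, executable structure (conditions and transformations) rather than prose, which is the intuition the abstract credits for the gains in both retrieval and application.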
Related papers
- Function-to-Style Guidance of LLMs for Code Translation [59.487054943812836]
We propose F2STrans, a function-to-style guiding paradigm designed to improve the performance of large language models in code translation. Our approach comprises two key stages: (1) functional learning, which optimizes translation correctness using high-quality source-target code pairs, and (2) style learning, which refines the stylistic quality of the translated code. We introduce a novel code translation benchmark that includes up-to-date source code, extensive test cases, and manually annotated ground-truth translations.
arXiv Detail & Related papers (2025-07-15T08:25:02Z)
- Can LLMs Help Create Grammar?: Automating Grammar Creation for Endangered Languages with In-Context Learning [0.0]
This paper explores how Large Language Models (LLMs) can assist in generating grammatical information for low-resource languages with limited amounts of data. Our methodology involves organising the existing linguistic data and prompting the model to efficiently generate a formal XLE grammar. This study highlights the potential of LLMs to enhance language documentation efforts, providing a cost-effective solution for generating linguistic data and contributing to the preservation of endangered languages.
arXiv Detail & Related papers (2024-12-14T20:43:12Z)
- GrammaMT: Improving Machine Translation with Grammar-Informed In-Context Learning [8.231635424652126]
GrammaMT is a grammatically-aware prompting approach for machine translation that uses Interlinear Glossed Text (IGT). GrammaMT proposes three prompting strategies: gloss-shot, chain-gloss, and model-gloss.
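As a rough illustration of the gloss-shot idea, the sketch below assembles a prompt from IGT triples. The IGT content and template wording are invented for illustration and do not reproduce GrammaMT's actual templates.

```python
# Sketch of gloss-shot prompting with Interlinear Glossed Text (IGT).
# The IGT triple and template are fabricated for illustration.

igt_examples = [
    {"source": "mi chakra",
     "gloss": "1SG.POSS field",   # morpheme-by-morpheme gloss line
     "target": "my field"},
]

def build_gloss_shot_prompt(examples, source_sentence):
    lines = ["Translate into English using the glossed examples.\n"]
    for ex in examples:
        lines += [f"Source: {ex['source']}",
                  f"Gloss: {ex['gloss']}",
                  f"Translation: {ex['target']}\n"]
    lines += [f"Source: {source_sentence}", "Translation:"]
    return "\n".join(lines)

print(build_gloss_shot_prompt(igt_examples, "mi wasi"))
```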
arXiv Detail & Related papers (2024-10-24T12:56:01Z)
- Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book? [6.905647501099997]
We investigate the source of this translation ability, finding that almost all improvements stem from the book's parallel examples. We find similar results for Nepali and Guarani, low-resource languages that are nonetheless seen during LLM pretraining. We emphasise the importance of task-appropriate data for extremely low-resource (XLR) languages.
arXiv Detail & Related papers (2024-09-27T21:27:32Z)
- Learning-From-Mistakes Prompting for Indigenous Language Translation [3.7790255156708397]
This paper presents techniques to improve translation for extremely low-resource indigenous languages.
Our approaches are grounded in the use of a datastore consisting of a limited number of parallel translation examples.
We harness in-context learning techniques in this setting to use LLMs as universal translators.
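A loose sketch of how such a datastore might feed an in-context prompt follows. The retrieval heuristic (word overlap), the template, and the corrected-mistake slot are assumptions rather than the paper's actual pipeline, and all data is fabricated.

```python
# Toy sketch: retrieve similar parallel examples from a small
# datastore and build a translation prompt. The data and the
# learning-from-mistakes slot are fabricated assumptions.

datastore = [
    {"src": "the river is wide", "tgt": "<target sentence 1>"},
    {"src": "the river runs fast", "tgt": "<target sentence 2>"},
]

def retrieve(store, query, k=2):
    """Rank examples by crude word overlap with the query."""
    def overlap(ex):
        return len(set(ex["src"].split()) & set(query.split()))
    return sorted(store, key=overlap, reverse=True)[:k]

def build_prompt(examples, query, mistake=None):
    parts = ["Translate English into the target language.\n"]
    for ex in examples:
        parts.append(f"English: {ex['src']}\nTranslation: {ex['tgt']}\n")
    if mistake:  # show a prior wrong output with its correction
        parts.append(f"Earlier mistake: {mistake['wrong']}")
        parts.append(f"Correction: {mistake['right']}\n")
    parts.append(f"English: {query}\nTranslation:")
    return "\n".join(parts)

print(build_prompt(retrieve(datastore, "the river is fast"),
                   "the river is fast"))
```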
arXiv Detail & Related papers (2024-07-18T09:41:20Z)
- Understanding and Mitigating Language Confusion in LLMs [76.96033035093204]
We evaluate 15 typologically diverse languages with existing and newly-created English and multilingual prompts. We find that Llama Instruct and Mistral models exhibit high degrees of language confusion. We find that language confusion can be partially mitigated via few-shot prompting, multilingual SFT, and preference tuning.
arXiv Detail & Related papers (2024-06-28T17:03:51Z)
- Sparse Logistic Regression with High-order Features for Automatic Grammar Rule Extraction from Treebanks [6.390468088226495]
We propose a new method to extract and explore significant fine-grained grammar patterns from treebanks.
We extract descriptions and rules across different languages for two linguistic phenomena, agreement and word order.
Our method captures both well-known and less well-known significant grammar rules in Spanish, French, and Wolof.
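To make the technique concrete, here is a toy sketch of sparse (L1-regularised) logistic regression over interaction features, where the surviving non-zero weights play the role of candidate grammar rules. The features and data are fabricated, and the paper's treebank extraction and significance testing are omitted.

```python
# Toy sketch of rule extraction via L1 (sparse) logistic regression
# over high-order feature combinations. Data is fabricated; y = 1
# when subject and verb agree in number.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

# Binary word-pair features: [subj_is_plural, verb_is_plural, adjacent]
X = np.array([[1, 1, 1], [0, 0, 1], [1, 0, 1], [0, 1, 0],
              [1, 1, 0], [0, 0, 0], [1, 0, 0], [0, 1, 1]])
y = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # 1 = pair shows agreement

# Build interaction (high-order) features, then fit a sparse model.
poly = PolynomialFeatures(degree=2, interaction_only=True,
                          include_bias=False)
Xh = poly.fit_transform(X)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(Xh, y)

# Non-zero weights point at candidate rules, e.g. the conjunction
# "subj_pl AND verb_pl" predicting agreement.
names = poly.get_feature_names_out(["subj_pl", "verb_pl", "adjacent"])
for name, w in zip(names, clf.coef_[0]):
    if abs(w) > 1e-6:
        print(f"{name}: {w:+.2f}")
```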
arXiv Detail & Related papers (2024-03-26T09:39:53Z)
- DECIDER: A Dual-System Rule-Controllable Decoding Framework for Language Generation [57.07295906718989]
Constrained decoding approaches aim to control the meaning or style of text generated by pre-trained language models (PLMs) for various tasks at inference time. These methods often guide plausible continuations by greedily and explicitly selecting targets. Inspired by cognitive dual-process theory, we propose DECIDER, a novel decoding framework.
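For readers unfamiliar with the constrained-decoding family this builds on, the toy sketch below shows the basic logit-biasing idea of steering generation toward target words. It is emphatically not an implementation of DECIDER's dual-system design, only the baseline family the summary contrasts against.

```python
# Generic toy of constrained decoding via logit biasing: boost the
# scores of constraint tokens before a greedy pick. Illustrates the
# baseline family only, not DECIDER's dual-system framework.
import numpy as np

def biased_next_token(logits, target_ids, boost=2.0):
    adjusted = logits.copy()
    for t in target_ids:
        adjusted[t] += boost  # nudge decoding toward constraint words
    return int(np.argmax(adjusted))

vocab_logits = np.array([1.2, 0.9, 1.1, 0.3])  # toy 4-token vocabulary
print(biased_next_token(vocab_logits, {2}))     # picks token 2
```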
arXiv Detail & Related papers (2024-03-04T11:49:08Z)
- Self-Augmented In-Context Learning for Unsupervised Word Translation [23.495503962839337]
Large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups.
We propose self-augmented in-context learning (SAIL) for unsupervised BLI.
Our method shows substantial gains over zero-shot prompting of LLMs on two established BLI benchmarks.
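The loop the summary describes can be sketched in a few lines: start zero-shot, keep only high-confidence induced pairs, and feed them back as in-context examples in the next round. The `llm_translate_word` stub and its toy outputs are stand-ins for a real LLM call.

```python
# Condensed sketch of the SAIL loop as the summary describes it.
# llm_translate_word is a toy stand-in; replace with a real LLM call.

def llm_translate_word(word, examples):
    """Toy stub returning (translation, confidence); data fabricated."""
    toy = {"hund": ("dog", 0.95), "katze": ("cat", 0.97),
           "baum": ("tree", 0.60)}
    return toy.get(word, ("?", 0.0))

def sail(source_words, rounds=3, threshold=0.9):
    examples = []  # empty on round one: plain zero-shot prompting
    for _ in range(rounds):
        confident = []
        for w in source_words:
            translation, conf = llm_translate_word(w, examples)
            if conf >= threshold:      # keep high-confidence pairs only
                confident.append((w, translation))
        examples = confident           # self-augmented prompt context
    return dict(examples)

print(sail(["hund", "katze", "baum"]))  # {'hund': 'dog', 'katze': 'cat'}
```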
arXiv Detail & Related papers (2024-02-15T15:43:05Z)
- Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts [75.33019401706188]
Large language models (LLMs) are known to effectively perform tasks by simply observing a few exemplars.
We propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English.
Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages.
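A small sketch of the prompt-assembly step as described: exemplars from several high-resource languages are concatenated so the model infers the task "any language to English". The exemplar sentences and the template are invented for illustration.

```python
# Sketch of a linguistically diverse prompt built from high-resource
# exemplars; sentences and template are fabricated for illustration.

exemplars = [
    ("le chat dort", "the cat sleeps"),      # French
    ("el perro corre", "the dog runs"),      # Spanish
    ("der Baum wächst", "the tree grows"),   # German
]

def diverse_prompt(pairs, source_sentence):
    lines = []
    for src, tgt in pairs:
        lines.append(f"Sentence: {src}\nEnglish: {tgt}\n")
    # The test language is never named; the varied exemplars alone
    # prime the model to map any input language into English.
    lines.append(f"Sentence: {source_sentence}\nEnglish:")
    return "\n".join(lines)

print(diverse_prompt(exemplars, "tumatakbo ang aso"))  # Tagalog input
```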
arXiv Detail & Related papers (2023-06-20T08:27:47Z)
- Grammar Prompting for Domain-Specific Language Generation with Large Language Models [40.831045850285776]
Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples.
We propose grammar prompting, a simple approach to enable LLMs to use external knowledge and domain-specific constraints.
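A minimal sketch of the idea follows: the prompt carries a BNF grammar for the target domain-specific language so generations stay within it. The calendar-query DSL and template are made up, and this simplifies the paper's full method, which also has the LLM predict a specialized grammar before generating.

```python
# Minimal sketch of grammar prompting: embed a BNF grammar for the
# target DSL in the prompt. The DSL and template are fabricated, and
# this omits the paper's step of first predicting a sub-grammar.

CALENDAR_BNF = """
query  ::= "SHOW" events "ON" date
events ::= "meetings" | "holidays"
date   ::= day "/" month
day    ::= [1-31]
month  ::= [1-12]
"""

def grammar_prompt(bnf, request):
    return ("Answer with a program that conforms to this grammar:\n"
            f"{bnf}\n"
            f"Request: {request}\n"
            "Program:")

print(grammar_prompt(CALENDAR_BNF, "show my meetings on March 5"))
```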
arXiv Detail & Related papers (2023-05-30T17:26:01Z)
- Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis [103.89753784762445]
Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT).
This paper systematically investigates the advantages and challenges of LLMs for MMT.
We thoroughly evaluate eight popular LLMs, including ChatGPT and GPT-4.
arXiv Detail & Related papers (2023-04-10T15:51:30Z)
- CLSE: Corpus of Linguistically Significant Entities [58.29901964387952]
We release a Corpus of Linguistically Significant Entities (CLSE) annotated by experts.
CLSE covers 74 different semantic types to support various applications from airline ticketing to video games.
We create a linguistically representative NLG evaluation benchmark in three languages: French, Marathi, and Russian.
arXiv Detail & Related papers (2022-11-04T12:56:12Z)