POMP: Probability-driven Meta-graph Prompter for LLMs in Low-resource
Unsupervised Neural Machine Translation
- URL: http://arxiv.org/abs/2401.05596v2
- Date: Tue, 16 Jan 2024 14:42:45 GMT
- Authors: Shilong Pan, Zhiliang Tian, Liang Ding, Zhen Huang, Zhihua Wen,
Dongsheng Li
- Abstract summary: Low-resource languages (LRLs) face challenges in supervised neural machine translation due to limited parallel data.
We propose Probability-driven Meta-graph Prompter (POMP) to enhance Large Language Models' translation capabilities for LRLs.
Our experiments show significant improvements in the translation quality of three LRLs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-resource languages (LRLs) face challenges in supervised neural machine
translation due to limited parallel data, prompting research into unsupervised
methods. Unsupervised neural machine translation (UNMT) methods, including
back-translation, transfer learning, and pivot-based translation, offer
practical solutions for LRL translation, but they are hindered by issues like
synthetic data noise, language bias, and error propagation, which can
potentially be mitigated by Large Language Models (LLMs). LLMs have advanced
NMT with in-context learning (ICL) and supervised fine-tuning methods, but
insufficient training data results in poor performance in LRLs. We argue that
LLMs can mitigate the linguistic noise with auxiliary languages to improve
translations in LRLs. In this paper, we propose Probability-driven Meta-graph
Prompter (POMP), a novel approach employing a dynamic, sampling-based graph of
multiple auxiliary languages to enhance LLMs' translation capabilities for
LRLs. POMP involves constructing a directed acyclic meta-graph for each source
language, from which we dynamically sample multiple paths to prompt LLMs to
mitigate the linguistic noise and improve translations during training. We use
the BLEURT metric to evaluate the translations and back-propagate rewards,
estimated by scores, to update the probabilities of auxiliary languages in the
paths. Our experiments show significant improvements in the translation quality
of three LRLs, demonstrating the effectiveness of our approach.
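The abstract describes a training loop: sample paths of auxiliary languages from a directed acyclic meta-graph, prompt the LLM along each path, score the translation with BLEURT, and use the score as a reward to update the sampling probabilities. A minimal sketch of that loop is below, based only on the abstract; the language set, path length, learning rate, and the scoring stub (standing in for a real BLEURT call) are all assumptions, not the authors' implementation.

```python
import random

# Hypothetical auxiliary languages; the paper's actual set is not given here.
AUX_LANGUAGES = ["es", "fr", "de", "hi"]

def sample_path(probs, path_len=2):
    """Sample a path of auxiliary languages, weighted by current probabilities."""
    langs = list(probs)
    weights = [probs[lang] for lang in langs]
    return random.choices(langs, weights=weights, k=path_len)

def update_probs(probs, path, reward, lr=0.1):
    """Shift probability mass toward languages on a high-reward path, then renormalize."""
    for lang in set(path):
        probs[lang] += lr * reward
    total = sum(probs.values())
    return {lang: p / total for lang, p in probs.items()}

def mock_reward(path):
    """Stand-in for BLEURT scoring of the LLM translation along this path."""
    return 0.5

# Start from a uniform distribution over auxiliary languages and refine it.
probs = {lang: 1.0 / len(AUX_LANGUAGES) for lang in AUX_LANGUAGES}
for _ in range(10):
    path = sample_path(probs)
    reward = mock_reward(path)
    probs = update_probs(probs, path, reward)
```

In the paper the reward would come from BLEURT on the LLM's output, so frequently useful auxiliary languages accumulate probability mass over training; this sketch only shows the sampling/update mechanics with a constant reward.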
Related papers
- Think Carefully and Check Again! Meta-Generation Unlocking LLMs for Low-Resource Cross-Lingual Summarization [108.6908427615402]
Cross-lingual summarization (CLS) aims to generate a summary of a source text in a different target language.
Currently, instruction-tuned large language models (LLMs) excel at various English tasks.
Recent studies have shown that LLMs' performance on CLS tasks remains unsatisfactory even with few-shot settings.
arXiv Detail & Related papers (2024-10-26T00:39:44Z) - What do Large Language Models Need for Machine Translation Evaluation? [12.42394213466485]
Large language models (LLMs) can achieve results comparable to fine-tuned multilingual pre-trained language models.
This paper explores what translation information, such as the source, reference, translation errors and annotation guidelines, is needed for LLMs to evaluate machine translation quality.
arXiv Detail & Related papers (2024-10-04T09:50:45Z) - LANDeRMT: Detecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation [43.26446958873554]
Recent advancements in large language models (LLMs) have shown promising results in multilingual translation even with limited bilingual supervision.
LANDeRMT is a framework that selectively finetunes LLMs for Machine Translation using diverse translation training data.
arXiv Detail & Related papers (2024-09-29T02:39:42Z) - TasTe: Teaching Large Language Models to Translate through Self-Reflection [82.83958470745381]
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks.
We propose the TasTe framework, which stands for translating through self-reflection.
The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods.
arXiv Detail & Related papers (2024-06-12T17:21:21Z) - Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning [57.323716555996114]
Off-target translation remains an unsolved problem, especially for low-resource languages.
Recent works have either designed advanced prompting strategies to highlight the functionality of translation instructions or exploited the in-context learning ability of LLMs.
In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability (especially the translation direction) of LLMs.
arXiv Detail & Related papers (2024-03-21T13:47:40Z) - TEaR: Improving LLM-based Machine Translation with Systematic Self-Refinement [26.26493253161022]
Large Language Models (LLMs) have achieved impressive results in Machine Translation (MT).
We introduce a systematic LLM-based self-refinement translation framework, named TEaR.
arXiv Detail & Related papers (2024-02-26T07:58:12Z) - How Can LLM Guide RL? A Value-Based Approach [68.55316627400683]
Reinforcement learning (RL) has become the de facto standard practice for sequential decision-making problems by improving future acting policies with feedback.
Recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities.
We develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning.
arXiv Detail & Related papers (2024-02-25T20:07:13Z) - Adapting Large Language Models for Document-Level Machine Translation [46.370862171452444]
Large language models (LLMs) have significantly advanced various natural language processing (NLP) tasks.
Recent research indicates that moderately-sized LLMs often outperform larger ones after task-specific fine-tuning.
This study focuses on adapting LLMs for document-level machine translation (DocMT) for specific language pairs.
arXiv Detail & Related papers (2024-01-12T09:29:13Z) - Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts [75.33019401706188]
Large language models (LLMs) are known to effectively perform tasks by simply observing few exemplars.
We propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English.
Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages.
arXiv Detail & Related papers (2023-06-20T08:27:47Z) - Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis [103.89753784762445]
Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT).
This paper systematically investigates the advantages and challenges of LLMs for MMT.
We thoroughly evaluate eight popular LLMs, including ChatGPT and GPT-4.
arXiv Detail & Related papers (2023-04-10T15:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.