BayLing: Bridging Cross-lingual Alignment and Instruction Following
through Interactive Translation for Large Language Models
- URL: http://arxiv.org/abs/2306.10968v2
- Date: Wed, 21 Jun 2023 11:31:50 GMT
- Title: BayLing: Bridging Cross-lingual Alignment and Instruction Following
through Interactive Translation for Large Language Models
- Authors: Shaolei Zhang, Qingkai Fang, Zhuocheng Zhang, Zhengrui Ma, Yan Zhou,
Langlin Huang, Mengyu Bu, Shangtong Gui, Yunji Chen, Xilin Chen, Yang Feng
- Abstract summary: Large language models (LLMs) have demonstrated remarkable prowess in language understanding and generation.
We have developed BayLing, an instruction-following LLM built on LLaMA as the foundation model.
The demo, homepage, code, and models of BayLing are available.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large language models (LLMs) have demonstrated remarkable prowess
in language understanding and generation. In advancing from foundation LLMs to
instruction-following LLMs, instruction tuning plays a vital role in aligning
LLMs with human preferences. However, existing LLMs are usually focused on
English, leading to inferior performance in non-English languages. Improving
performance in non-English languages requires collecting language-specific
training data for foundation LLMs and constructing language-specific
instructions for instruction tuning, both of which are heavy workloads. To
minimize human workload, we propose to transfer the capabilities of language
generation and instruction following from English to other languages through
an interactive translation task. We have developed BayLing, an
instruction-following LLM, by utilizing LLaMA as the foundation LLM and
automatically constructing interactive translation instructions for
instruction tuning. Extensive assessments demonstrate that BayLing achieves
performance comparable to GPT-3.5-turbo despite having only 13 billion
parameters. Experimental results on translation tasks show that BayLing
reaches 95% of the single-turn translation capability of GPT-4 under automatic
evaluation and 96% of the interactive translation capability of GPT-3.5-turbo
under human evaluation. To estimate performance on general tasks, we created a
multi-turn instruction test set called BayLing-80, on which BayLing achieves
89% of the performance of GPT-3.5-turbo. BayLing also demonstrates outstanding
performance on knowledge assessments, the Chinese GaoKao and the English SAT,
second only to GPT-3.5-turbo among a multitude of instruction-following LLMs.
The demo, homepage, code, and models of BayLing are available.
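
The abstract's core idea is to obtain cross-lingual instruction-tuning data by recasting translation as a multi-turn, interactive exchange. The sketch below is a minimal illustration of how such an interactive translation record could be assembled into a generic chat format; the function name, record fields, prompts, and example sentences are assumptions for illustration only and are not taken from BayLing's actual pipeline.

# Minimal sketch (assumption, not BayLing's released code): assemble one
# "interactive translation" training record from an initial translation
# request, a draft translation, and follow-up revision turns.
def build_interactive_translation_record(source, draft, revisions):
    conversation = [
        {"role": "user", "content": f"Translate into English: {source}"},
        {"role": "assistant", "content": draft},
    ]
    for request, revised in revisions:
        # Each revision pair adds one user request and one revised translation.
        conversation.append({"role": "user", "content": request})
        conversation.append({"role": "assistant", "content": revised})
    return {"conversations": conversation}

# Hypothetical example: one initial translation plus one interactive revision.
record = build_interactive_translation_record(
    source="今天天气很好。",
    draft="The weather is nice today.",
    revisions=[("Make it more formal.", "The weather is very pleasant today.")],
)
print(record)
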
Related papers
- MindMerger: Efficient Boosting LLM Reasoning in non-English Languages (arXiv, 2024-05-27)
Reasoning capabilities are crucial for large language models (LLMs).
MindMerger merges LLMs with the external language understanding capabilities of multilingual models.
It consistently outperforms all baselines, especially in low-resource languages.
- Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning (arXiv, 2024-03-21)
Off-target translation remains an unsolved problem, especially for low-resource languages.
Recent works have either designed advanced prompting strategies to highlight the functionality of translation instructions or exploited the in-context learning ability of LLMs.
This work designs a two-stage fine-tuning algorithm to improve the instruction-following ability of LLMs, especially adherence to the intended translation direction.
- TaCo: Enhancing Cross-Lingual Transfer for Low-Resource Languages in LLMs through Translation-Assisted Chain-of-Thought Processes (arXiv, 2023-11-17)
The authors introduce the Multilingual Instruction-Tuning dataset (MITS), comprising translations of Alpaca-52K, Dolly-15K, and the Vicuna Benchmark into 132 languages.
They also propose TaCo (Translation-Assisted Cross-Linguality), which uses translations in a chain-of-thought process to instruction-tune LLMs on new languages via curriculum learning.
Results indicate that TaCo achieves an 82% GPT-4 score for a low-resource language on the Vicuna Benchmark, doubling the performance compared with instruction tuning.
- Improving Translation Faithfulness of Large Language Models via Augmenting Instructions (arXiv, 2023-08-24)
The authors propose SWIE (Segment-Weighted Instruction Embedding) and an instruction-following dataset, OVERMISS.
SWIE improves the model's instruction understanding by adding a global instruction representation to the subsequent input and response representations.
OVERMISS improves model faithfulness by contrasting over-translation and mis-translation results with the correct translation.
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback (arXiv, 2023-07-29)
Okapi is the first system with instruction-tuned LLMs based on RLHF for multiple languages.
It introduces instruction and response-ranked data in 26 diverse languages to facilitate future multilingual LLM research.
- Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts (arXiv, 2023-06-20)
Large language models (LLMs) are known to perform tasks effectively after observing only a few exemplars.
The authors assemble synthetic exemplars from a diverse set of high-resource languages to prompt LLMs to translate from any language into English.
This unsupervised prompting method performs on par with supervised few-shot learning across LLMs of different sizes for translation between English and 13 Indic and 21 African low-resource languages.
- Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions (arXiv, 2023-05-24)
Large-scale pretrained language models (LLMs) have shown strong abilities in multilingual translation.
The paper presents a detailed analysis by finetuning a multilingual pretrained language model, XGLM-7B, to perform multilingual translation.