Knowledge Transfer from High-Resource to Low-Resource Programming
Languages for Code LLMs
- URL: http://arxiv.org/abs/2308.09895v5
- Date: Sat, 10 Feb 2024 18:40:31 GMT
- Title: Knowledge Transfer from High-Resource to Low-Resource Programming
Languages for Code LLMs
- Authors: Federico Cassano, John Gouwar, Francesca Lucchetti, Claire
Schlesinger, Anders Freeman, Carolyn Jane Anderson, Molly Q Feldman, Michael
Greenberg, Abhinav Jangda, Arjun Guha
- Abstract summary: This paper presents an effective approach for boosting the performance of Code LLMs on low-resource languages using semi-synthetic data.
We apply this approach to generate tens of thousands of validated training items for Julia, Lua, OCaml, R, and Racket.
- Score: 3.0215424372240265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past few years, Large Language Models of Code (Code LLMs) have
started to have a significant impact on programming practice. Code LLMs are
also emerging as building blocks for research in programming languages and
software engineering. However, Code LLMs produce impressive results on
programming languages that are well represented in their training data (e.g.,
Java, Python, or JavaScript), but struggle with low-resource languages that
have limited training data available. Low-resource languages include OCaml,
Racket, and several others.
This paper presents an effective approach for boosting the performance of
Code LLMs on low-resource languages using semi-synthetic data. Our approach,
MultiPL-T, translates training data from high-resource languages into training
data for low-resource languages in the following way. 1) We use a Code LLM to
synthesize tests for commented code from a high-resource language, filtering
out faulty tests and code with low test coverage. 2) We use a Code LLM to
translate Python code to a target low-resource language, and use tests to
validate the translation. We apply this approach to generate tens of thousands
of validated training items for Julia, Lua, OCaml, R, and Racket. Furthermore,
we use an open model (StarCoderBase) with open training data (The Stack), which
allows us to decontaminate benchmarks, train models without violating licenses,
and run experiments that could not otherwise be done.
With MultiPL-T generated data, we present fine-tuned versions of
StarCoderBase and Code Llama for Julia, Lua, OCaml, R, and Racket. On
established benchmarks (MultiPL-E), these models outperform other open Code
LLMs. The MultiPL-T approach is easy to apply to new languages, and is
significantly more efficient and effective than alternatives such as training
longer.
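The two-step pipeline described in the abstract maps onto a short orchestration loop. The Python sketch below is illustrative only: the helper callables (generate_tests, coverage_of, translate, run_tests) and the coverage threshold are assumptions made for exposition, not the authors' released MultiPL-T code.

```python
# Hypothetical sketch of the MultiPL-T pipeline; names are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class TrainingItem:
    python_source: str      # commented Python function from a high-resource corpus
    tests: str              # LLM-synthesized Python unit tests
    translated_source: str  # validated translation into the target low-resource language
    translated_tests: str   # tests carried over to the target language

def build_item(
    python_source: str,
    target_lang: str,
    generate_tests: Callable[[str], str],                   # step 1: Code LLM writes tests
    coverage_of: Callable[[str, str], float],               # run tests, report coverage in [0, 1]
    translate: Callable[[str, str, str], Tuple[str, str]],  # step 2: Code LLM translates code + tests
    run_tests: Callable[[str, str, str], bool],             # execute the translated tests
    min_coverage: float = 0.9,                              # assumed threshold, not taken from the paper
) -> Optional[TrainingItem]:
    """Return one validated training item for target_lang, or None if filtered out."""
    # Step 1: synthesize tests for the commented Python function and discard
    # functions whose tests are faulty or cover too little of the code.
    tests = generate_tests(python_source)
    if coverage_of(python_source, tests) < min_coverage:
        return None

    # Step 2: translate into the low-resource language and validate the
    # translation by executing the translated tests.
    translated_source, translated_tests = translate(python_source, tests, target_lang)
    if not run_tests(translated_source, translated_tests, target_lang):
        return None

    return TrainingItem(python_source, tests, translated_source, translated_tests)
```

Passing the model and execution steps in as callables keeps the filtering logic independent of any particular Code LLM or target-language toolchain.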
Related papers
- Crystal: Illuminating LLM Abilities on Language and Code [58.5467653736537]
We propose a pretraining strategy to enhance the integration of natural language and coding capabilities.
The resulting model, Crystal, demonstrates remarkable capabilities in both domains.
arXiv Detail & Related papers (2024-11-06T10:28:46Z)
- Think Carefully and Check Again! Meta-Generation Unlocking LLMs for Low-Resource Cross-Lingual Summarization [108.6908427615402]
Cross-lingual summarization (CLS) aims to generate a summary of a source text in a different target language.
Currently, instruction-tuned large language models (LLMs) excel at various English tasks.
Recent studies have shown that LLMs' performance on CLS tasks remains unsatisfactory even with few-shot settings.
arXiv Detail & Related papers (2024-10-26T00:39:44Z)
- Unraveling the Potential of Large Language Models in Code Translation: How Far Are We? [4.616570111453259]
Large language models (LLMs) exhibit state-of-the-art performance on various tasks, but struggle with code translation.
We conduct a large-scale empirical study to investigate the capabilities and limitations of LLMs in code translation tasks.
We propose two methods: (1) intermediary translation which selects an intermediary language between the source and target ones; and (2) self-training which fine-tunes LLMs on self-generated parallel data.
arXiv Detail & Related papers (2024-10-13T12:20:12Z)
- Empirical Studies of Parameter Efficient Methods for Large Language Models of Code and Knowledge Transfer to R [1.9799527196428242]
Large Language Models (LLMs) have gained a lot of attention in the Software Engineering (SE) community.
In this work, we empirically study PEFT methods, LoRA and Compacter, on CodeT5 and CodeLlama.
We assess their performance against fully fine-tuned models, whether they can be used for knowledge transfer from natural language models to code, and their ability to adapt the learned knowledge to an unseen language.
arXiv Detail & Related papers (2024-03-16T03:12:45Z)
- Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages [0.0]
Large Language Models (LLMs) have shown incredible proficiency at natural language processing tasks.
LLMs often struggle to perform well on low-resource languages because there is so little training data available.
In this work, we explore training LLaMA-2 to speak Amharic, a language spoken by over 50 million people worldwide.
arXiv Detail & Related papers (2024-03-11T01:04:36Z)
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback [61.83548032416181]
We present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages.
Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research.
arXiv Detail & Related papers (2023-07-29T18:01:46Z)
- The potential of LLMs for coding with low-resource and domain-specific programming languages [0.0]
This study focuses on the econometric scripting language named hansl of the open-source software gretl.
Our findings suggest that LLMs can be a useful tool for writing, understanding, improving, and documenting gretl code.
arXiv Detail & Related papers (2023-07-24T17:17:13Z)
- Democratizing LLMs for Low-Resource Languages by Leveraging their English Dominant Abilities with Linguistically-Diverse Prompts [75.33019401706188]
Large language models (LLMs) are known to effectively perform tasks by simply observing a few exemplars.
We propose to assemble synthetic exemplars from a diverse set of high-resource languages to prompt the LLMs to translate from any language into English.
Our unsupervised prompting method performs on par with supervised few-shot learning in LLMs of different sizes for translations between English and 13 Indic and 21 African low-resource languages.
arXiv Detail & Related papers (2023-06-20T08:27:47Z)
- LEVER: Learning to Verify Language-to-Code Generation with Execution [64.36459105535]
We propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results.
Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself and its execution results.
LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci) and achieves new state-of-the-art results across the evaluated datasets.
arXiv Detail & Related papers (2023-02-16T18:23:22Z)
- X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models [103.75890012041366]
Language models (LMs) have proven surprisingly successful at capturing factual knowledge.
However, studies on LMs' factual representation ability have almost invariably been performed on English.
We create a benchmark of cloze-style probes for 23 typologically diverse languages.
arXiv Detail & Related papers (2020-10-13T05:29:56Z)