Lost in Translation: A Study of Bugs Introduced by Large Language Models
while Translating Code
- URL: http://arxiv.org/abs/2308.03109v3
- Date: Tue, 16 Jan 2024 11:25:44 GMT
- Authors: Rangeet Pan, Ali Reza Ibrahimzada, Rahul Krishna, Divya Sankar,
Lambert Pouguem Wassi, Michele Merler, Boris Sobolev, Raju Pavuluri, Saurabh
Sinha, Reyhaneh Jabbarvand
- Abstract summary: We present a large-scale empirical study to investigate the ability of general LLMs and code LLMs for code translation.
Our study involves the translation of 1,700 code samples from three benchmarks and two real-world projects.
We find that correct translations range from 2.1% to 47.3% for the studied LLMs.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Code translation aims to convert source code from one programming language
(PL) to another. Given the promising abilities of large language models (LLMs)
in code synthesis, researchers are exploring their potential to automate code
translation. The prerequisite for advancing the state of LLM-based code
translation is to understand their promises and limitations over existing
techniques. To that end, we present a large-scale empirical study to
investigate the ability of general LLMs and code LLMs for code translation
across pairs of different languages, including C, C++, Go, Java, and Python.
Our study, which involves the translation of 1,700 code samples from three
benchmarks and two real-world projects, reveals that LLMs are yet to be
reliably used to automate code translation -- with correct translations ranging
from 2.1% to 47.3% for the studied LLMs. Further manual investigation of
unsuccessful translations identifies 15 categories of translation bugs. We also
compare LLM-based code translation with traditional non-LLM-based approaches.
Our analysis shows that these two classes of techniques have their own
strengths and weaknesses. Finally, insights from our study suggest that
providing more context to LLMs during translation can help them produce better
results. To that end, we propose a prompt-crafting approach based on the
symptoms of erroneous translations; this improves the performance of LLM-based
code translation by 5.5% on average. Our study is the first of its kind, in
terms of scale and breadth, that provides insights into the current limitations
of LLMs in code translation and opportunities for improving them. Our dataset
-- consisting of 1,700 code samples in five PLs with 10K+ tests, 43K+
translated code samples, 1,748 manually labeled bugs, and 1,365 bug-fix pairs -- can
help drive research in this area.
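The symptom-based prompt crafting the abstract describes can be sketched as a retry prompt that carries the observed failure back to the model. This is a minimal illustration, not the paper's actual prompts; `SYMPTOM_HINTS`, `craft_retry_prompt`, and the hint wording are all hypothetical:

```python
# Hypothetical sketch: when a translation fails, re-prompt the LLM with the
# symptom category and error detail so it can target the specific bug.

SYMPTOM_HINTS = {
    "compilation_error": "The previous translation failed to compile. Fix syntax and type errors.",
    "runtime_error": "The previous translation crashed at runtime. Check null and bounds handling.",
    "test_failure": "The previous translation produced wrong output. Re-check the logic.",
}

def craft_retry_prompt(source_code, source_lang, target_lang, symptom, detail):
    """Build a retry prompt that includes the failure symptom as extra context."""
    hint = SYMPTOM_HINTS.get(symptom, "The previous translation was incorrect.")
    return (
        f"Translate this {source_lang} code to {target_lang}.\n"
        f"{hint}\nObserved error: {detail}\n\n{source_code}"
    )
```

The key idea is only that the retry prompt is conditioned on the *category* of the earlier failure, which is what the paper's 5.5% average improvement is attributed to.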
Related papers
- Language Models and Cycle Consistency for Self-Reflective Machine Translation [1.79487674052027]
We generate multiple translation candidates from a source language A to a target language B, and subsequently translate these candidates back to the original language A.
By evaluating the cycle consistency between the original and back-translated sentences using metrics such as token-level precision and accuracy, we implicitly estimate the translation quality in language B.
For each source sentence, we identify the translation candidate with optimal cycle consistency with the original sentence as the final answer.
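The cycle-consistency selection described above can be sketched as follows. This is a toy version: `back_translate` stands in for a real MT system, and token-level precision is only one of the metrics the paper mentions:

```python
# Score each candidate by how well its back-translation matches the source,
# then keep the candidate with the best cycle consistency.

def token_precision(original, back_translated):
    """Fraction of back-translated tokens that also appear in the original."""
    orig = original.lower().split()
    back = back_translated.lower().split()
    if not back:
        return 0.0
    return sum(tok in orig for tok in back) / len(back)

def best_by_cycle_consistency(source, candidates, back_translate):
    """Pick the candidate whose back-translation best matches the source."""
    scored = [(token_precision(source, back_translate(c)), c) for c in candidates]
    return max(scored)[1]
```

Note that the quality of language-B output is estimated entirely in language A, so no reference translation in B is needed.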
arXiv Detail & Related papers (2024-11-05T04:01:41Z)
- Unraveling the Potential of Large Language Models in Code Translation: How Far Are We? [4.616570111453259]
Large language models (LLMs) exhibit state-of-the-art performance on various tasks, but struggle with code translation.
We conduct a large-scale empirical study to explore the capabilities and limitations of LLMs in code translation tasks.
We propose two methods: (1) intermediary translation which selects an intermediary language between the source and target ones; and (2) self-training which fine-tunes LLMs on self-generated parallel data.
arXiv Detail & Related papers (2024-10-13T12:20:12Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter but more complex than canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
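A training-free generate-critique-repair loop of the kind described above can be sketched as below. The `generate`, `run_tests`, and `critique` callables are hypothetical stand-ins for an LLM and a compiler/test harness, not that paper's actual interface:

```python
# Minimal sketch of an iterative self-critique loop: generate code, run it,
# and feed the failure plus a critique back into the next generation attempt.

def iterative_repair(task, generate, run_tests, critique, max_rounds=3):
    """Regenerate code until tests pass or the round budget is exhausted."""
    code = generate(task, feedback=None)
    for _ in range(max_rounds):
        ok, error = run_tests(code)
        if ok:
            return code
        # Feed the bug symptom and a self-critique into the next attempt.
        code = generate(task, feedback=critique(code, error))
    return code
```

The loop terminates either on passing tests or on the round budget, so a model that never converges still returns its last attempt.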
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- TasTe: Teaching Large Language Models to Translate through Self-Reflection [82.83958470745381]
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks.
We propose the TasTe framework, which stands for translating through self-reflection.
The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods.
arXiv Detail & Related papers (2024-06-12T17:21:21Z)
- Towards Translating Real-World Code with LLMs: A Study of Translating to Rust [13.743967357458287]
Large language models (LLMs) show promise in code translation due to their ability to write code in most programming languages.
We conduct our study on code extracted from real-world open source projects.
FLOURINE is an end-to-end code translation tool that uses differential fuzzing to check if a Rust translation is I/O equivalent to the original source program.
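Differential fuzzing for I/O equivalence, as used by FLOURINE, can be illustrated with a toy harness. Here both sides are Python callables for simplicity; the real tool runs the source program and the Rust translation as separate binaries:

```python
import random

# Toy differential-fuzzing check: feed both implementations the same random
# inputs and report the first input on which their outputs diverge.

def io_equivalent(f_src, f_tgt, gen_input, trials=1000, seed=0):
    """Return (True, None) if no divergence is seen, else (False, counterexample)."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen_input(rng)
        if f_src(x) != f_tgt(x):
            return False, x          # counterexample found
    return True, None                # no divergence observed (not a proof)
```

As the comment notes, passing the fuzzer is evidence of equivalence, not a proof: the guarantee is only as strong as the input distribution and trial count.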
arXiv Detail & Related papers (2024-05-19T10:54:03Z)
- Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning [57.323716555996114]
Off-target translation remains an unsolved problem, especially for low-resource languages.
Recent works have either designed advanced prompting strategies to highlight the functionality of translation instructions or exploited the in-context learning ability of LLMs.
In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability (especially the translation direction) of LLMs.
arXiv Detail & Related papers (2024-03-21T13:47:40Z)
- Large Language Model-Aware In-Context Learning for Code Generation [75.68709482932903]
Large language models (LLMs) have shown impressive in-context learning (ICL) ability in code generation.
We propose a novel learning-based selection approach named LAIL (LLM-Aware In-context Learning) for code generation.
arXiv Detail & Related papers (2023-10-15T06:12:58Z)
- Multilingual Machine Translation with Large Language Models: Empirical Results and Analysis [103.89753784762445]
Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT).
This paper systematically investigates the advantages and challenges of LLMs for MMT.
We thoroughly evaluate eight popular LLMs, including ChatGPT and GPT-4.
arXiv Detail & Related papers (2023-04-10T15:51:30Z)
- LEVER: Learning to Verify Language-to-Code Generation with Execution [64.36459105535]
We propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results.
Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself and its execution results.
LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci) and achieves new state-of-the-art results on all of them.
arXiv Detail & Related papers (2023-02-16T18:23:22Z)
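Execution-guided reranking in the style of LEVER can be sketched as scoring each sampled program by its model probability combined with a verifier's judgment over execution results. The `execute` and `verify` callables below are hypothetical stand-ins; LEVER trains a learned verifier rather than using a fixed heuristic:

```python
# Rerank sampled programs by model log-probability plus a verifier score
# computed from the program and its execution result.

def rerank(samples, execute, verify):
    """samples: list of (program, model_logprob) pairs. Returns the best program."""
    best, best_score = None, float("-inf")
    for program, logprob in samples:
        result = execute(program)                  # run the candidate program
        score = logprob + verify(program, result)  # combine scores in log space
        if score > best_score:
            best, best_score = program, score
    return best
```

The point is that a low-likelihood sample whose execution result looks correct can outrank a high-likelihood sample that fails, which is where the reported gains over greedy decoding come from.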
This list is automatically generated from the titles and abstracts of the papers in this site.