Exploring the Robustness of Large Language Models for Solving
Programming Problems
- URL: http://arxiv.org/abs/2306.14583v1
- Date: Mon, 26 Jun 2023 10:48:50 GMT
- Title: Exploring the Robustness of Large Language Models for Solving
Programming Problems
- Authors: Atsushi Shirafuji, Yutaka Watanobe, Takumi Ito, Makoto Morishita, Yuki
Nakamura, Yusuke Oda, Jun Suzuki
- Abstract summary: We conduct experiments to understand the robustness of several popular large language models (LLMs) for source code generation.
Our results show that CodeGen and Codex are sensitive to superficial modifications of problem descriptions, which significantly impact code generation performance.
The state-of-the-art (SOTA) models, such as InstructGPT and ChatGPT, show higher robustness to superficial modifications and have an outstanding capability for solving programming problems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using large language models (LLMs) for source code has recently gained
attention. Transformer-based LLMs such as Codex and ChatGPT have been shown to
be highly capable of solving a wide range of programming problems. However, it
remains unclear to what extent LLMs understand problem descriptions and
generate programs accordingly, or simply retrieve source code from the most
relevant problem in the training data based on superficial cues. To explore
this research question, we conduct experiments to understand the robustness of
several popular LLMs, the CodeGen and GPT-3.5 series models, capable of
tackling code generation tasks on introductory programming problems. Our
experimental results show that CodeGen and Codex are sensitive to superficial
modifications of problem descriptions, which significantly impact code
generation performance. Furthermore, we observe that Codex relies on variable
names, as randomizing variables decreases the solved rate significantly.
However, the state-of-the-art (SOTA) models, such as InstructGPT and ChatGPT,
show higher robustness to superficial modifications and an outstanding
capability for solving programming problems. This highlights the fact that
slight modifications to the prompts given to LLMs can greatly affect code
generation performance; careful formatting of prompts is essential for
high-quality code generation, even as the SOTA models become more robust to
perturbations.
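The abstract's robustness probe is straightforward to reproduce in outline. The following is a minimal sketch, not the authors' released code: it applies the superficial perturbation described above (randomized variable names) to a problem description and measures a model's solved rate over stdin/stdout test cases. The generate callable, the per-problem variables list, and the problems dictionary layout are illustrative assumptions, not part of the paper.

```python
import random
import re
import string
import subprocess
import sys

def randomize_variable_names(problem: str, variables: list[str], seed: int = 0) -> str:
    """Replace each named variable in a problem description with a random
    identifier -- a superficial perturbation that leaves semantics intact."""
    rng = random.Random(seed)
    perturbed = problem
    for var in variables:
        new_name = "".join(rng.choices(string.ascii_lowercase, k=5))
        # Word boundaries avoid rewriting substrings of longer identifiers.
        perturbed = re.sub(rf"\b{re.escape(var)}\b", new_name, perturbed)
    return perturbed

def passes_all_tests(code: str, tests: list[tuple[str, str]]) -> bool:
    """Run candidate code as a script and compare stdout to each expected output."""
    for stdin, expected in tests:
        try:
            result = subprocess.run(
                [sys.executable, "-c", code],
                input=stdin, capture_output=True, text=True, timeout=5,
            )
        except subprocess.TimeoutExpired:
            return False
        if result.stdout.strip() != expected.strip():
            return False
    return True

def solved_rate(generate, problems: list[dict], n_samples: int = 10) -> float:
    """Fraction of problems solved, where a problem counts as solved if any of
    n_samples generated programs passes all of its I/O test cases.
    `generate` is a hypothetical callable mapping a problem description to
    Python source code; plug in the model under test."""
    solved = 0
    for p in problems:
        if any(passes_all_tests(generate(p["description"]), p["tests"])
               for _ in range(n_samples)):
            solved += 1
    return solved / len(problems)
```

Comparing solved_rate on the original descriptions against copies rewritten with randomize_variable_names(p["description"], p["variables"]) then quantifies how much a model leans on superficial cues such as variable names.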
Related papers
- Can OpenSource beat ChatGPT? -- A Comparative Study of Large Language Models for Text-to-Code Generation
We evaluate five different large language models (LLMs) on their capabilities for text-to-code generation.
ChatGPT handles these typical programming challenges by far the most effectively, surpassing even code-specialized models like Code Llama.
arXiv Detail & Related papers (2024-09-06T10:03:49Z)
- An Empirical Study on Self-correcting Large Language Models for Data Science Code Generation
Large Language Models (LLMs) have recently advanced many applications in software engineering tasks.
CoT-SelfEvolve iteratively and automatically refines code through a self-correcting process.
arXiv Detail & Related papers (2024-08-28T09:19:09Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study
Large language models (LLMs) produce code that is shorter yet more complicated than canonical solutions.
We develop a taxonomy of bugs in incorrect code, comprising three categories and 12 sub-categories, and analyze the root causes of common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- Decoding at the Speed of Thought: Harnessing Parallel Decoding of Lexical Units for LLMs
Large language models have demonstrated exceptional capability in natural language understanding and generation.
However, their generation speed is limited by the inherently sequential nature of their decoding process.
This paper introduces Lexical Unit Decoding, a novel decoding methodology implemented in a data-driven manner.
arXiv Detail & Related papers (2024-05-24T04:35:13Z)
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework, to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks from their control flow and data flow to bridge the gap between programming languages and natural language.
Experiments and ablations on four datasets covering both C++ and Python validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
- Knowledge-Aware Code Generation with Large Language Models
Large Language Models (LLMs) perform well on basic programming problems.
However, they encounter challenges when dealing with complex tasks involving the use of diverse algorithmic and data structure skills.
We develop a Knowledge Library tailored for Python programming contest problems and introduce the concept of Knowledge-Aware Code Generation.
arXiv Detail & Related papers (2024-01-29T08:01:22Z)
- LLM-Assisted Code Cleaning For Training Accurate Code Generators
We investigate data quality for code and find that making code more structured and readable improves the code generation performance of the system.
We build a novel data-cleaning pipeline that uses these principles to transform existing programs.
We evaluate our approach on two challenging algorithmic code generation benchmarks and find that fine-tuning CodeLLaMa-7B improves the performance by up to 30% compared to fine-tuning on the original dataset.
arXiv Detail & Related papers (2023-11-25T02:45:50Z)
- Benchmarking and Explaining Large Language Model-based Code Generation: A Causality-Centric Approach
Large language model (LLM)-based code generation is a complex and powerful black-box model.
We propose a novel causal graph-based representation of the prompt and the generated code.
We illustrate the insights our framework can provide by studying three popular LLMs with over 12 prompt adjustment strategies.
arXiv Detail & Related papers (2023-10-10T14:56:26Z)
- Test-Case-Driven Programming Understanding in Large Language Models for Better Code Generation
µFiX is a novel prompting technique to improve the code generation performance of large language models (LLMs).
It first exploits test case analysis to obtain an understanding of the specification and enables a self-improvement process.
µFiX then fixes the specification understanding in the direction that reduces the gap between the provided understanding and the actual understanding.
arXiv Detail & Related papers (2023-09-28T02:58:07Z)
- Simultaneous Machine Translation with Large Language Models
We investigate the possibility of applying Large Language Models to SimulMT tasks.
We conducted experiments using the Llama2-7b-chat model on nine different languages from the MuST-C dataset.
The results show that the LLM outperforms dedicated MT models in terms of BLEU and LAAL metrics.
arXiv Detail & Related papers (2023-09-13T04:06:47Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.