Improving ChatGPT Prompt for Code Generation
- URL: http://arxiv.org/abs/2305.08360v1
- Date: Mon, 15 May 2023 05:37:33 GMT
- Title: Improving ChatGPT Prompt for Code Generation
- Authors: Chao Liu, Xuanlin Bao, Hongyu Zhang, Neng Zhang, Haibo Hu, Xiaohong Zhang, Meng Yan
- Abstract summary: OpenAI's language model ChatGPT has emerged as a powerful tool for generating human-like responses to a wide range of textual inputs.
We evaluate ChatGPT's capabilities on two code generation tasks: text-to-code and code-to-code generation.
Our results showed that by carefully designing prompts to guide ChatGPT, the generation performance can be improved substantially.
- Score: 13.303599826870705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated code generation can be a powerful technique for software
development, significantly reducing developers' efforts and time required to
create new code by generating it automatically based on requirements. Recently,
OpenAI's language model ChatGPT has emerged as a powerful tool for generating
human-like responses to a wide range of textual inputs (i.e., prompts),
including those related to code generation. However, the effectiveness of
ChatGPT for code generation is not well understood, and the generation
performance could be heavily influenced by the choice of prompt. To address
these questions, we conducted experiments using the CodeXGLUE dataset to
evaluate ChatGPT's capabilities on two code generation tasks: text-to-code and
code-to-code generation. We designed prompts by leveraging the
chain-of-thought strategy with multi-step optimizations. Our results showed
that by carefully designing prompts to guide ChatGPT, the generation
performance can be improved substantially. We also analyzed the factors that
influenced the prompt design and provided insights that could guide future
research.
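The paper's central claim is that prompt design, in particular a chain-of-thought strategy with multi-step optimizations, substantially changes ChatGPT's generation quality. As a hedged illustration only (not the authors' optimized prompts), a chain-of-thought-style text-to-code prompt might look like the following, assuming the current OpenAI Python SDK and an illustrative model name:

```python
# Illustrative chain-of-thought prompt for text-to-code generation.
# NOTE: this is a sketch, not the paper's optimized prompt; the model
# name and SDK usage are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = "Return the indices of the two numbers in nums that add up to target."

prompt = (
    "You are a careful Python programmer.\n"
    "Think step by step before writing any code:\n"
    "1. Restate the requirement in your own words.\n"
    "2. List the inputs, outputs, and edge cases.\n"
    "3. Outline the algorithm.\n"
    "4. Only then write the final Python function.\n\n"
    f"Requirement: {requirement}"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the paper evaluated ChatGPT
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The numbered reasoning steps force the model to commit to an algorithm before emitting code, which is the intuition behind chain-of-thought prompting.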
Related papers
- You Augment Me: Exploring ChatGPT-based Data Augmentation for Semantic Code Search [47.54163552754051]
Code search plays a crucial role in software development, enabling developers to retrieve and reuse code using natural language queries.
Recently, large language models (LLMs) have made remarkable progress in both natural and programming language understanding and generation.
We propose ChatDANCE, a novel approach that utilizes high-quality and diverse augmented data generated by a large language model; a sketch of the augmentation idea follows this entry.
arXiv Detail & Related papers (2024-08-10T12:51:21Z)
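A hedged sketch of the general LLM-based augmentation idea above (not ChatDANCE's actual pipeline; the prompt wording and the `llm` callable are assumptions):

```python
# Sketch of LLM-based data augmentation for code search: given one
# (query, code) training pair, ask an LLM to paraphrase the query,
# yielding extra positive pairs. Not ChatDANCE's actual pipeline.
AUGMENT_PROMPT = (
    "Rewrite the following code-search query in {n} different ways, one per line, "
    "keeping the meaning identical:\n\n{query}"
)

def augment_pair(query: str, code: str, llm, n: int = 3) -> list[tuple[str, str]]:
    """`llm` is any callable mapping a prompt string to a completion string
    (a placeholder for a real model call)."""
    completion = llm(AUGMENT_PROMPT.format(n=n, query=query))
    rewrites = [line.strip() for line in completion.splitlines() if line.strip()]
    # Each paraphrased query paired with the original snippet is a new positive.
    return [(rw, code) for rw in rewrites[:n]]
```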
- CodeRAG-Bench: Can Retrieval Augment Code Generation? [78.37076502395699]
We conduct a systematic, large-scale analysis of code generation using retrieval-augmented generation.
We first curate a comprehensive evaluation benchmark, CodeRAG-Bench, encompassing three categories of code generation tasks.
We examine top-performing models on CodeRAG-Bench by providing contexts retrieved from one or multiple sources; a minimal retrieval-augmented prompting sketch follows this entry.
arXiv Detail & Related papers (2024-06-20T16:59:52Z)
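A minimal sketch of retrieval-augmented code generation in the spirit of the benchmark above; the keyword-overlap retriever and prompt format are assumptions, far simpler than the retrievers CodeRAG-Bench evaluates:

```python
# Toy retrieval-augmented generation for code: rank documents by token
# overlap with the task, then prepend the best match to the prompt.
def retrieve(task: str, docs: list[str], k: int = 1) -> list[str]:
    task_tokens = set(task.lower().split())
    ranked = sorted(docs, key=lambda d: len(task_tokens & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(task: str, docs: list[str]) -> str:
    context = "\n\n".join(retrieve(task, docs))
    return f"Relevant documentation:\n{context}\n\nTask: {task}\nWrite the code:"

docs = [
    "bisect.insort(a, x) inserts x into sorted list a, keeping it sorted.",
    "heapq.heappush(heap, item) pushes item onto the heap.",
]
print(build_prompt("insert a value into a sorted list", docs))
```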
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks from their control flow and data flow to bridge the gap between programming languages and natural language.
Experiments and ablations on four datasets covering both C++ and Python validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert; a simplified data-flow extraction is sketched below.
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
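CodeGRAG itself extracts control- and data-flow graphs and feeds them to a GNN expert; as a hedged, Python-only approximation of just the data-flow part, the standard `ast` module can extract read-write edges:

```python
import ast

def dataflow_edges(source: str) -> list[tuple[str, str]]:
    """Crude data-flow extraction: an edge (u, v) means variable u is read
    when computing an assignment to v. A toy stand-in for CodeGRAG's
    control/data-flow graph view."""
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
            reads = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            edges += [(u, v) for v in targets for u in reads]
    return edges

print(dataflow_edges("c = a + b\nd = c * c"))
# [('a', 'c'), ('b', 'c'), ('c', 'd'), ('c', 'd')]
```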
- Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate over strong baselines.
The logical comment decoding strategy is notably more robust than Chain-of-Thought prompting; a prompt sketch follows this entry.
arXiv Detail & Related papers (2024-04-11T08:30:46Z)
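A hedged approximation of logical comment decoding via prompting alone (MANGO additionally uses comment contrastive training, which this sketch omits):

```python
# Comment-as-pivot decoding sketch: instruct the model to emit a short
# natural-language comment before each logical block of code, so comments
# act as intermediate reasoning steps. Not MANGO's exact strategy.
COMMENT_PIVOT_PROMPT = (
    "Solve the task in Python. Before each logical block of code, write a "
    "one-line '#' comment stating what the block does and why, then the code. "
    "Do not write any prose outside comments.\n\nTask: {task}"
)

print(COMMENT_PIVOT_PROMPT.format(task="Merge two sorted lists into one sorted list."))
```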
- Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study [0.0]
ChatGPT, a cutting-edge language model, has demonstrated impressive performance in various natural language processing tasks.
We conduct the first empirical study to understand the capabilities of ChatGPT in code review tasks.
Our results show that ChatGPT achieves higher EM and BLEU scores of 22.78 and 76.44, respectively, while the state-of-the-art method achieves only 15.50 and 62.88 on a high-quality code review dataset; metric computation is sketched below.
arXiv Detail & Related papers (2023-09-15T07:41:33Z)
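The EM and BLEU figures above are standard overlap metrics; a minimal way to compute them for code pairs using NLTK (the paper's exact tokenization and BLEU variant may differ):

```python
# Minimal EM / BLEU computation for code pairs, using NLTK sentence BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def exact_match(reference: str, hypothesis: str) -> bool:
    return reference.split() == hypothesis.split()  # whitespace-normalized EM

def bleu(reference: str, hypothesis: str) -> float:
    smooth = SmoothingFunction().method1  # avoids zero scores on short code
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)

ref = "return a + b"
hyp = "return a + b"
print(exact_match(ref, hyp), round(bleu(ref, hyp) * 100, 2))  # True 100.0
```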
- Reverse-Engineering Decoding Strategies Given Blackbox Access to a Language Generation System [73.52878118434147]
We present methods to reverse-engineer the decoding method used to generate text.
Our ability to discover which decoding strategy was used has implications for detecting generated text; a toy determinism probe is sketched below.
arXiv Detail & Related papers (2023-09-09T18:19:47Z)
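One simple intuition behind reverse-engineering decoding: greedy (temperature-0) decoding is deterministic across repeated queries, while sampling is not. A toy probe of that single property (`query_model` is a placeholder; the paper's methods go well beyond this):

```python
# Toy probe for one decoding property: determinism. Greedy decoding yields
# identical completions for the same prompt; sampling usually does not.
# `query_model` is a placeholder for a blackbox generation API.
def looks_deterministic(query_model, prompt: str, trials: int = 5) -> bool:
    outputs = {query_model(prompt) for _ in range(trials)}
    return len(outputs) == 1

# Example with fake blackboxes:
import random
fake_sampler = lambda p: p + random.choice([" a", " b"])
fake_greedy = lambda p: p + " a"
print(looks_deterministic(fake_sampler, "x"), looks_deterministic(fake_greedy, "x"))
```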
- No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT [28.68768157452352]
This study examines the quality of code generation using ChatGPT.
We leverage 728 algorithm problems in five languages (i.e., C, C++, Java, Python, and JavaScript) and 18 CWEs with 54 code scenarios for the code generation task.
Our findings uncover potential issues and limitations in ChatGPT-based code generation; a minimal test harness is sketched below.
arXiv Detail & Related papers (2023-08-09T10:01:09Z)
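Studies like this one need a harness that runs generated code against test cases; a minimal, unsandboxed sketch (the function-name and test-format conventions are assumptions):

```python
# Minimal pass/fail harness for generated code. Real studies sandbox this;
# exec'ing untrusted model output directly is unsafe outside a toy setting.
def passes_tests(code: str, func_name: str, tests: list[tuple[tuple, object]]) -> bool:
    namespace: dict = {}
    try:
        exec(code, namespace)  # define the candidate function
        fn = namespace[func_name]
        return all(fn(*args) == expected for args, expected in tests)
    except Exception:
        return False

generated = "def add(a, b):\n    return a + b"
print(passes_tests(generated, "add", [((1, 2), 3), ((0, 0), 0)]))  # True
```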
- Unmasking the giant: A comprehensive evaluation of ChatGPT's proficiency in coding algorithms and data structures [0.6990493129893112]
We evaluate ChatGPT's ability to generate correct solutions to the problems fed to it, its code quality, and the nature of run-time errors thrown by its code.
We examine patterns in the test cases passed to gain insight into where ChatGPT's code goes wrong in these situations.
arXiv Detail & Related papers (2023-07-10T08:20:34Z)
- Think Outside the Code: Brainstorming Boosts Large Language Models in Code Generation [9.904734169174356]
In this paper, we introduce the Brainstorm framework for code generation.
It leverages a brainstorming step that generates and selects diverse thoughts on the problem.
Brainstorm significantly enhances the ability of LLMs to solve competition-level programming problems; a two-stage sketch follows this entry.
arXiv Detail & Related papers (2023-05-18T03:32:54Z)
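The brainstorming step can be approximated as a two-stage prompting loop: sample several high-level thoughts, select one, and condition code generation on it. A hedged sketch with a placeholder `llm` callable, where a toy heuristic stands in for the paper's selection step:

```python
# Two-stage brainstorm-then-code sketch. `llm` is a placeholder callable
# (prompt -> completion); longest-thought selection is a toy stand-in for
# the paper's selection of diverse thoughts.
def brainstorm_and_code(task: str, llm, n_thoughts: int = 3) -> str:
    thoughts = [llm(f"Give one high-level solution idea (no code) for: {task}")
                for _ in range(n_thoughts)]
    chosen = max(thoughts, key=len)  # toy selection heuristic
    return llm(f"Task: {task}\nApproach: {chosen}\nWrite the Python code:")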
- ReCode: Robustness Evaluation of Code Generation Models [90.10436771217243]
We propose ReCode, a comprehensive robustness evaluation benchmark for code generation models.
We customize over 30 transformations specifically for code on docstrings, function and variable names, code syntax, and code format.
With human annotators, we verified that over 90% of the perturbed prompts do not alter the semantic meaning of the original prompt; one such transformation is sketched below.
arXiv Detail & Related papers (2022-12-20T14:11:31Z)
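One of ReCode's transformation families perturbs variable names while preserving semantics; a simplified Python-only example of such a transformation (not ReCode's implementation):

```python
import ast

def rename_variables(source: str) -> str:
    """Simplified ReCode-style perturbation: consistently rename assigned
    local variables (skipping function parameters), preserving semantics.
    Ignores positional-only/keyword-only args and global/nonlocal edge cases."""
    tree = ast.parse(source)
    params = {a.arg for n in ast.walk(tree)
              if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))
              for a in n.args.args}
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    mapping = {name: f"var_{i}" for i, name in enumerate(sorted(assigned - params))}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node: ast.Name) -> ast.Name:
            node.id = mapping.get(node.id, node.id)
            return node

    return ast.unparse(Renamer().visit(tree))

print(rename_variables("def add(a, b):\n    total = a + b\n    return total"))
# def add(a, b):
#     var_0 = a + b
#     return var_0
```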
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.