Can ChatGPT replace StackOverflow? A Study on Robustness and Reliability
of Large Language Model Code Generation
- URL: http://arxiv.org/abs/2308.10335v5
- Date: Sat, 27 Jan 2024 05:49:55 GMT
- Title: Can ChatGPT replace StackOverflow? A Study on Robustness and Reliability
of Large Language Model Code Generation
- Authors: Li Zhong, Zilong Wang
- Abstract summary: Large language models (LLMs) have shown extraordinary ability in understanding natural language and generating programming code.
The misuse of APIs in the generated code can lead to severe problems, such as resource leaks and program crashes.
- Score: 8.575560293086289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, large language models (LLMs) have shown extraordinary ability
in understanding natural language and generating programming code. It has become
common practice for software engineers to consult LLMs when they encounter coding
questions. Although efforts have been made to avoid syntax errors and to align the
generated code with the intended semantics, the reliability and robustness of LLM
code generation have not yet been thoroughly studied. Executable code is not
equivalent to reliable and robust code, especially in the context of real-world
software development. Misuse of APIs in the generated code can lead to severe
problems, such as resource leaks and program crashes. Worse, the users of LLM code
generation services are precisely the developers most vulnerable to code that merely
looks right: they are often novices unfamiliar with the APIs the LLM is generating
code for, so they can hardly spot the misuse in the generated code, which further
eases the spread of incorrect code into real-world software. Existing code
evaluation benchmarks and datasets focus on small tasks, such as programming
questions from coding interviews, which deviate from the problems developers
actually bring to LLMs for real-world coding help. To fill this missing piece, we
propose RobustAPI, a dataset for evaluating the reliability and robustness of code
generated by LLMs. We collect 1208 coding questions from StackOverflow covering 24
representative Java APIs, summarize the common misuse patterns of these APIs, and
evaluate current popular LLMs against them. The evaluation results show that even
for GPT-4, 62% of the generated code contains API misuses, which would cause
unexpected consequences if the code were introduced into real-world software.
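The failure mode RobustAPI targets is easy to miss precisely because misused code often runs. As a minimal sketch of one such Java misuse pattern (the class, method, and path names here are invented, not drawn from the RobustAPI dataset), consider a stream that is closed only on the happy path:

```java
import java.io.FileInputStream;
import java.io.IOException;

public class ReadExample {

    // Misuse: if read() throws, close() is never reached and the file
    // handle leaks. This is plausible-looking code an LLM might emit.
    static int firstByteLeaky(String path) throws IOException {
        FileInputStream in = new FileInputStream(path);
        int b = in.read();
        in.close(); // skipped entirely when read() throws
        return b;
    }

    // Correct usage: try-with-resources guarantees close() runs on
    // both the normal and the exceptional path.
    static int firstByteSafe(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            return in.read();
        }
    }
}
```

Both variants compile, and on most inputs they behave identically, which is exactly why such misuses slip past novice developers and past benchmarks that only check functional correctness.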
Related papers
- A Deep Dive Into Large Language Model Code Generation Mistakes: What and Why? [9.246899995643918]
Large Language Models can still generate defective code that deviates from the specification.
Seven categories of non-syntactic mistakes were identified through extensive manual analyses.
Our evaluation demonstrated that GPT-4 with the ReAct prompting technique can achieve an F1 score of up to 0.65 when identifying the reasons for LLMs' mistakes.
arXiv Detail & Related papers (2024-11-03T02:47:03Z)
- Artificial-Intelligence Generated Code Considered Harmful: A Road Map for Secure and High-Quality Code Generation [2.793781561647737]
We compared the security and quality of human-written code with that of LLM-generated code.
We found that LLMs can generate incorrect code that fails to implement the required functionality.
Fuzzing revealed that LLM-generated code is more prone to hangs and crashes than human-written code.
arXiv Detail & Related papers (2024-09-27T23:41:51Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter yet more complex than canonical solutions.
We develop a taxonomy of bugs in incorrect code, comprising three categories and 12 sub-categories, and analyze the root causes of common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback (a minimal sketch of such a loop appears after this list).
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- VersiCode: Towards Version-controllable Code Generation [58.82709231906735]
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
arXiv Detail & Related papers (2024-06-11T16:15:06Z)
- Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate over strong baselines.
The robustness of the logical comment decoding strategy is notably higher than that of chain-of-thought prompting.
arXiv Detail & Related papers (2024-04-11T08:30:46Z)
- Bugs in Large Language Models Generated Code: An Empirical Study [12.625305075672456]
Large Language Models (LLMs) for code have gained significant attention recently.
Similar to human-written code, LLM-generated code is prone to bugs.
This paper examines a sample of 333 bugs collected from code generated using three leading LLMs.
arXiv Detail & Related papers (2024-03-13T20:12:01Z)
- InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models [56.723509505549536]
To our knowledge, InfiBench is the first large-scale free-form question-answering (QA) benchmark for code.
It comprises 234 carefully selected, high-quality Stack Overflow questions spanning 15 programming languages.
We conduct a systematic evaluation of more than 100 recent code LLMs on InfiBench, leading to a series of novel and insightful findings.
arXiv Detail & Related papers (2024-03-11T02:06:30Z)
- Assured LLM-Based Software Engineering [51.003878077888686]
This paper outlines the keynote by Mark Harman at the International Workshop on Interpretability, Robustness, and Benchmarking in Neural Software Engineering (Monday, 15 April 2024, Lisbon, Portugal).
arXiv Detail & Related papers (2024-02-06T20:38:46Z)
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs [65.2379940117181]
We introduce code prompting, a chain of prompts that transforms a natural language problem into code (a toy illustration appears after this list).
We find that code prompting yields a substantial performance boost for multiple LLMs.
Our analysis of GPT-3.5 reveals that the code formatting of the input problem is essential to the performance improvement.
arXiv Detail & Related papers (2024-01-18T15:32:24Z)
- Large Language Models Should Ask Clarifying Questions to Increase Confidence in Generated Code [0.7252027234425334]
Large language models (LLMs) have significantly improved the ability to perform code generation tasks.
However, a gap remains between LLMs being capable coders and being top-tier software engineers.
I propose a communication-centered process that uses an LLM-generated communicator to identify issues with high ambiguity or low confidence in problem descriptions and generated code.
arXiv Detail & Related papers (2023-08-25T17:33:05Z)
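For the training-free self-critique method summarized above (What's Wrong with Your Code Generated by Large Language Models?), the rough shape of such a generate-compile-critique loop might look as follows. This is a hedged sketch, not the paper's implementation: the `LlmClient` interface, the `Candidate` class-name convention, and the prompt wording are all assumptions introduced here; only `javax.tools.JavaCompiler` is a real API.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.ByteArrayOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SelfCritiqueLoop {

    // Hypothetical stand-in for whatever model API is actually used.
    interface LlmClient {
        String complete(String prompt);
    }

    // Ask the model for code, compile it, and feed the compiler
    // diagnostics back until it compiles or we run out of rounds.
    static String repair(LlmClient llm, String task, int maxRounds) throws Exception {
        // Returns null on a JRE that ships without the compiler.
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        String code = llm.complete(task);
        for (int round = 0; round < maxRounds; round++) {
            // Assumed convention: the generated class is named Candidate,
            // so the file name matches the public class name javac expects.
            Path dir = Files.createTempDirectory("critique");
            Path src = dir.resolve("Candidate.java");
            Files.writeString(src, code);
            ByteArrayOutputStream err = new ByteArrayOutputStream();
            if (javac.run(null, null, err, src.toString()) == 0) {
                return code; // compiles cleanly; deeper checks would go here
            }
            // Loop the diagnostics back as critique for the next attempt.
            code = llm.complete(task
                    + "\nYour previous attempt:\n" + code
                    + "\nCompiler feedback:\n" + err
                    + "\nReturn a corrected version.");
        }
        return code; // best effort after maxRounds attempts
    }
}
```

The paper's actual method also conditions the critique on its bug taxonomy; this sketch loops back only the compiler's diagnostic text.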
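Likewise, the code prompting idea from Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs can be pictured with a toy transformation. The eligibility rule and names below are invented for illustration, and the prompts in that paper are richer than this:

```java
public class CodePromptToy {

    // Natural-language problem (invented): "A visitor enters free of
    // charge if they are under 6 or over 65; otherwise they pay."
    // Code prompting rewrites such conditions as explicit branches
    // before handing the problem to a text+code LLM.
    static boolean entersFree(int age) {
        if (age < 6) return true;
        if (age > 65) return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(entersFree(70)); // true
        System.out.println(entersFree(30)); // false
    }
}
```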