CompCodeVet: A Compiler-guided Validation and Enhancement Approach for
Code Dataset
- URL: http://arxiv.org/abs/2311.06505v1
- Date: Sat, 11 Nov 2023 08:21:52 GMT
- Title: CompCodeVet: A Compiler-guided Validation and Enhancement Approach for
Code Dataset
- Authors: Le Chen, Arijit Bhattacharjee, Nesreen K. Ahmed, Niranjan Hasabnis,
Gal Oren, Bin Lei, Ali Jannesari
- Abstract summary: Even models with billions of parameters face challenges in tasks demanding multi-step reasoning.
CompCodeVet is a compiler-guided CoT approach that produces compilable code from non-compilable code.
- Score: 12.58750209611099
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have become increasingly prominent in academia
and industry due to their remarkable performance in diverse applications. As
these models evolve with increasing parameters, they excel in tasks like
sentiment analysis and machine translation. However, even models with billions
of parameters face challenges in tasks demanding multi-step reasoning. Code
generation and comprehension, especially in C and C++, emerge as significant
challenges. While LLMs trained on code datasets demonstrate competence in many
tasks, they struggle with rectifying non-compilable C and C++ code. Our
investigation attributes this subpar performance to two primary factors: the
quality of the training dataset and the inherent complexity of the problem
which demands intricate reasoning. Existing "Chain of Thought" (CoT) prompting
techniques aim to enhance multi-step reasoning but remain bound by the inherent
limitations of the underlying LLMs. In this work, we propose CompCodeVet, a
compiler-guided CoT approach that produces compilable code from non-compilable
code. Diverging from the conventional approach of relying on ever-larger LLMs,
we employ the compiler as a teacher to establish a more robust zero-shot
thought process. Evaluation of CompCodeVet on two open-source code datasets
shows that it improves training dataset quality for LLMs.
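
To make the compiler-in-the-loop idea concrete, here is a minimal sketch of the kind of validation-and-repair cycle the abstract describes, assuming gcc as the compiler and a hypothetical `query_llm` helper; the prompt wording and retry budget are illustrative assumptions, not the paper's actual implementation.

```python
import os
import subprocess
import tempfile

def compile_c(source: str) -> str:
    """Try to compile a C translation unit; return "" on success,
    otherwise the compiler's diagnostics."""
    with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["gcc", "-fsyntax-only", path],
            capture_output=True, text=True,
        )
        return result.stderr.strip()
    finally:
        os.unlink(path)

def vet_sample(source: str, query_llm, max_rounds: int = 3):
    """Compiler-guided repair loop: the compiler acts as the teacher,
    and its diagnostics drive each zero-shot repair prompt.
    Returns compilable code, or None if the sample cannot be fixed."""
    for _ in range(max_rounds):
        diagnostics = compile_c(source)
        if not diagnostics:
            return source  # compilable: keep the sample in the dataset
        prompt = (
            "The following C code does not compile.\n"
            f"Compiler errors:\n{diagnostics}\n\n"
            f"Code:\n{source}\n\n"
            "Think through each error step by step, "
            "then output only the corrected code."
        )
        source = query_llm(prompt)  # hypothetical LLM call
    return None  # discard samples the loop cannot repair
```

Samples that become compilable are kept, so the loop doubles as a dataset filter, matching the paper's stated goal of improving training-data quality.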
Related papers
- SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors [0.0]
Large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks, such as code understanding and code generation.
However, an equally important yet underexplored question is whether LLMs can serve as general-purpose surrogate code executors.
This study provides empirical insights into the feasibility of using LLMs as surrogate code executors.
arXiv Detail & Related papers (2025-02-16T15:38:19Z)
- Pseudocode-Injection Magic: Enabling LLMs to Tackle Graph Computational Tasks [15.69049038121735]
Graph computational tasks are inherently challenging and often demand advanced algorithms for effective solutions.
Existing approaches are constrained by large language models' limited capability to comprehend complex graph structures.
We introduce a novel framework, PIE, which consists of three key steps: problem understanding, prompt design, and code generation.
arXiv Detail & Related papers (2025-01-23T15:04:22Z)
- OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models [70.72097493954067]
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks and agent systems.
While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality code LLMs remain limited.
We introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community.
arXiv Detail & Related papers (2024-11-07T17:47:25Z)
- Case2Code: Scalable Synthetic Data for Code Generation [105.89741089673575]
Large Language Models (LLMs) have shown outstanding breakthroughs in code generation.
Recent work improves code LLMs by training on synthetic data generated by some powerful LLMs.
We propose a Case2Code task by exploiting the expressiveness and correctness of programs.
arXiv Detail & Related papers (2024-07-17T11:35:00Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter yet more complicated than canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model only on executed code, masking the unexecuted code segments to provide Fine-Grained Optimization (a minimal sketch of this masking appears after this list).
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- CodeT5+: Open Code Large Language Models for Code Understanding and Generation [72.1638273937025]
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence.
CodeT5+ is a family of encoder-decoder LLMs for code in which component modules can be flexibly combined to suit a wide range of downstream code tasks.
We extensively evaluate CodeT5+ on over 20 code-related benchmarks in different settings, including zero-shot, finetuning, and instruction-tuning.
arXiv Detail & Related papers (2023-05-13T14:23:07Z)
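
Several of the entries above (StepCoder, and the self-critique method in "What's Wrong with Your Code Generated by Large Language Models?") tie the training or repair signal to compiler and execution feedback. The snippet below is a minimal sketch of one such mechanism, FGO-style loss masking, under the assumption that a per-token `executed_mask` is available from unit-test coverage; the function name and tensor shapes are ours, and StepCoder's actual objective is a reinforcement-learning loss, so this illustrates only the masking step.

```python
import torch
import torch.nn.functional as F

def masked_code_loss(logits: torch.Tensor,
                     targets: torch.Tensor,
                     executed_mask: torch.Tensor) -> torch.Tensor:
    """Fine-grained optimization in the spirit of StepCoder's FGO:
    compute the token-level loss only over code segments that were
    actually executed by the tests, masking everything else.

    logits:        (seq_len, vocab_size) model outputs
    targets:       (seq_len,) generated token ids
    executed_mask: (seq_len,) 1.0 for tokens in executed segments, else 0.0
    """
    per_token = F.cross_entropy(logits, targets, reduction="none")
    masked = per_token * executed_mask
    # Normalize by the number of executed tokens so samples with little
    # coverage do not dominate the gradient.
    return masked.sum() / executed_mask.sum().clamp(min=1.0)
```

The design point shared across these works is that unexecuted code carries no reward signal, so excluding it from the objective keeps the gradient focused on behavior the feedback actually observed.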