CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code
- URL: http://arxiv.org/abs/2302.05527v2
- Date: Tue, 31 Oct 2023 13:44:36 GMT
- Title: CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code
- Authors: Shuyan Zhou, Uri Alon, Sumit Agarwal, Graham Neubig
- Abstract summary: We propose CodeBERTScore: an evaluation metric for code generation.
In addition to the generated tokens, CodeBERTScore encodes the natural language input preceding the generated code.
We find that CodeBERTScore achieves a higher correlation with human preference and with functional correctness than all existing metrics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the rise of neural natural-language-to-code models (NL->Code) that can
generate long expressions and statements rather than a single next-token, one
of the major problems has been reliably evaluating their generated output. In
this paper, we propose CodeBERTScore: an evaluation metric for code generation,
which builds on BERTScore (Zhang et al., 2020). Instead of encoding only the
generated tokens as in BERTScore, CodeBERTScore also encodes the natural
language input preceding the generated code, thus modeling the consistency
between the generated code and its given natural language context as well. We
perform an extensive evaluation of CodeBERTScore across four programming
languages. We find that CodeBERTScore achieves a higher correlation with human
preference and with functional correctness than all existing metrics. That is,
generated code that receives a higher score by CodeBERTScore is more likely to
be preferred by humans, as well as to function correctly when executed. We
release five language-specific pretrained models to use with our publicly
available code. Our language-specific models have been downloaded more than
1,000,000 times from the Hugging Face Hub. Our code and data are available at
https://github.com/neulab/code-bert-score
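To make the abstract's mechanism concrete: CodeBERTScore builds on BERTScore's soft token matching over contextual embeddings, while additionally encoding the natural language context so that it conditions the code embeddings without itself being matched. The following is a minimal sketch of that computation, assuming a generic pretrained code encoder from the Hugging Face Hub; the model name microsoft/codebert-base, the function names, and the context-masking details are illustrative assumptions, not the paper's exact released models or configuration.

```python
# Minimal sketch of BERTScore-style matching with an NL-context-aware
# code encoder. Model choice and context handling are illustrative
# assumptions; see the paper and repository for the actual method.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "microsoft/codebert-base"  # assumed stand-in encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

def encode_code(nl_context: str, code: str) -> torch.Tensor:
    """Embed `code` conditioned on `nl_context`; return L2-normalized
    embeddings of the code tokens only (context tokens are masked out,
    mirroring the abstract: encode the NL input, but score the code)."""
    n_ctx = len(tokenizer(nl_context, add_special_tokens=False)["input_ids"])
    inputs = tokenizer(nl_context + code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    # Skip BOS and the context tokens; drop the trailing EOS token.
    # Assumes the context tokenizes to a prefix of the concatenation,
    # which holds for BPE tokenizers when the context ends with a newline.
    return F.normalize(hidden[1 + n_ctx : -1], dim=-1)

def codebertscore(candidate: str, reference: str, nl_context: str) -> dict:
    cand = encode_code(nl_context, candidate)   # (n_cand, dim)
    ref = encode_code(nl_context, reference)    # (n_ref, dim)
    sim = cand @ ref.T                          # pairwise cosine similarity
    precision = sim.max(dim=1).values.mean().item()  # best match per candidate token
    recall = sim.max(dim=0).values.mean().item()     # best match per reference token
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

print(codebertscore(
    "def add(a, b):\n    return a + b",
    "def add(x, y):\n    return x + y",
    nl_context="# Write a function that adds two numbers\n",
))
```

For actual evaluation, the released language-specific models and the code in the repository above should be used rather than this sketch.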
Related papers
- CodeFusion: A Pre-trained Diffusion Model for Code Generation
Auto-regressive models for code generation from natural language make it difficult to reconsider tokens generated earlier.
We introduce CodeFusion, a pre-trained diffusion code generation model that addresses this limitation by iteratively denoising a complete program conditioned on the encoded natural language.
Experiments show that CodeFusion performs on par with state-of-the-art auto-regressive systems.
arXiv Detail & Related papers (2023-10-26T11:06:15Z)
- Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation
ChainCoder is a program synthesis language model that generates Python code progressively.
A tailored transformer architecture is leveraged to jointly encode the natural language descriptions and syntactically aligned I/O data samples.
arXiv Detail & Related papers (2023-04-28T01:47:09Z)
- CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Benchmarking on HumanEval-X
We introduce CodeGeeX, a multilingual model with 13 billion parameters for code generation.
CodeGeeX is pre-trained on 850 billion tokens of 23 programming languages.
arXiv Detail & Related papers (2023-03-30T17:34:01Z)
- Tackling Long Code Search with Splitting, Encoding, and Aggregating
We propose a new baseline SEA (Split, Encode and Aggregate) for long code search.
It splits long code into code blocks, encodes these blocks into embeddings, and aggregates them to obtain a comprehensive long code representation.
With GraphCodeBERT as the encoder, SEA achieves an overall mean reciprocal rank (MRR) of 0.785 on the CodeSearchNet benchmark, 10.1% higher than GraphCodeBERT alone.
arXiv Detail & Related papers (2022-08-24T02:27:30Z)
- CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation
We show how to leverage an unlabelled code corpus to train a model for library-oriented code generation.
We craft two benchmarks named PandasEval and NumpyEval to evaluate library-oriented code generation.
arXiv Detail & Related papers (2022-06-14T14:44:34Z)
- InCoder: A Generative Model for Code Infilling and Synthesis
We introduce InCoder, a unified generative model that can perform program synthesis (via left-to-right generation) and editing (via infilling).
InCoder is trained to generate code files from a large corpus of permissively licensed code.
Our model is the first generative model that is able to directly perform zero-shot code infilling.
arXiv Detail & Related papers (2022-04-12T16:25:26Z)
- LAMNER: Code Comment Generation Using Character Language Model and Named Entity Recognition
We present LAMNER (LAnguage Model and Named Entity Recognition), a code comment generator capable of encoding code constructs effectively and capturing the structural properties of code tokens.
We evaluate the generated comments from LAMNER and other baselines on a popular Java dataset with four commonly used metrics.
arXiv Detail & Related papers (2022-04-05T20:53:06Z)
- A Systematic Evaluation of Large Language Models of Code
Large language models (LMs) of code have recently shown tremendous promise in completing code and synthesizing code from natural language descriptions.
The current state-of-the-art code LMs are not publicly available, leaving many questions about their model and data design decisions.
Although Codex is not open-source, we find that existing open-source models do achieve close results in some programming languages.
We release a new model, PolyCoder, with 2.7B parameters based on the GPT-2 architecture, which was trained on 249GB of code across 12 programming languages on a single machine.
arXiv Detail & Related papers (2022-02-26T15:53:55Z)