WizardCoder: Empowering Code Large Language Models with Evol-Instruct
- URL: http://arxiv.org/abs/2306.08568v1
- Date: Wed, 14 Jun 2023 15:18:48 GMT
- Title: WizardCoder: Empowering Code Large Language Models with Evol-Instruct
- Authors: Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu,
Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang
- Abstract summary: We introduce WizardCoder, which empowers Code LLMs with complex instruction fine-tuning.
Our model surpasses all other open-source Code LLMs by a substantial margin.
- Score: 67.24653703564492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated
exceptional performance in code-related tasks. However, most existing models
are solely pre-trained on extensive raw code data without instruction
fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs
with complex instruction fine-tuning, by adapting the Evol-Instruct method to
the domain of code. Through comprehensive experiments on four prominent code
generation benchmarks, namely HumanEval, HumanEval+, MBPP, and DS-1000, we
unveil the exceptional capabilities of our model. It surpasses all other
open-source Code LLMs by a substantial margin. Moreover, our model even
outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on
HumanEval and HumanEval+. Our code, model weights, and data are public at
https://github.com/nlpxucan/WizardLM
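
The abstract describes the recipe only at a high level: seed coding instructions are iteratively rewritten into more complex variants by an LLM (Evol-Instruct adapted to code), and the evolved data is then used for instruction fine-tuning. The sketch below illustrates that evolution loop under stated assumptions; the directive list, prompt template, and the `complete` stub are illustrative placeholders, not the paper's released prompts or pipeline.

```python
# Minimal sketch of the Evol-Instruct idea applied to coding instructions.
# The directives, prompt template, and `complete` stub are assumptions made
# for illustration; the actual prompts are released in the linked repository.

import random

# Assumed, simplified evolution directives for code instructions.
EVOLUTION_DIRECTIVES = [
    "Add one new, non-trivial requirement to the task.",
    "Require a specific time or space complexity.",
    "Replace a common requirement with a less common one.",
    "Add a short piece of erroneous reference code as misdirection.",
]

EVOLVE_PROMPT = (
    "Please increase the difficulty of the given programming question.\n"
    "Apply this change: {directive}\n\n"
    "Original question:\n{instruction}\n\n"
    "Rewritten question:"
)


def complete(prompt: str) -> str:
    """Stand-in for a call to an instruction-following LLM.

    Replace this with a real model call; the stub just echoes the original
    question plus the requested change so the sketch runs without external
    dependencies.
    """
    body = prompt.split("Original question:\n", 1)[1]
    question = body.rsplit("\n\nRewritten question:", 1)[0]
    change = prompt.split("Apply this change: ", 1)[1].split("\n", 1)[0]
    return f"{question} ({change})"


def evolve(instruction: str, rounds: int = 3) -> list[str]:
    """Iteratively rewrite a seed instruction into progressively harder variants."""
    history = [instruction]
    for _ in range(rounds):
        directive = random.choice(EVOLUTION_DIRECTIVES)
        prompt = EVOLVE_PROMPT.format(directive=directive, instruction=history[-1])
        history.append(complete(prompt).strip())
    return history


if __name__ == "__main__":
    seed = "Write a Python function that returns the n-th Fibonacci number."
    for step, text in enumerate(evolve(seed)):
        print(f"round {step}: {text}")
```

In the full pipeline the evolved instructions are paired with responses and used to fine-tune the base Code LLM; the released code, weights, and data are at the repository linked above.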
Related papers
- OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models [70.72097493954067]
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks, and agent systems.
We introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community.
arXiv Detail & Related papers (2024-11-07T17:47:25Z)
- AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data [64.69872638349922]
We present AlchemistCoder, a series of Code LLMs with enhanced code generation and generalization capabilities fine-tuned on multi-source data.
We propose incorporating the data construction process into the fine-tuning data as code comprehension tasks, including instruction evolution, data filtering, and code review.
arXiv Detail & Related papers (2024-05-29T16:57:33Z)
- StarCoder 2 and The Stack v2: The Next Generation [105.93298676368798]
We train StarCoder2 models with 3B, 7B, and 15B parameters on 3.3 to 4.3 trillion tokens.
We thoroughly evaluate them on a comprehensive set of Code LLM benchmarks.
Our large model, StarCoder2-15B, significantly outperforms other models of comparable size.
arXiv Detail & Related papers (2024-02-29T13:53:35Z)
- DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning [36.78560777629329]
We introduce DolphCoder, a diverse instruction model with self-evaluation for code generation.
It learns diverse instruction targets and combines a code evaluation objective to enhance its code generation ability.
Our model achieves superior performance on the HumanEval and MBPP benchmarks.
arXiv Detail & Related papers (2024-02-14T12:34:58Z)
- Magicoder: Empowering Code Generation with OSS-Instruct [14.414411313794911]
We introduce Magicoder, a series of fully open-source (code, weights, and data) Large Language Models (LLMs) for code.
Magicoder models are trained on 75K synthetic instruction data using OSS-Instruct.
Both Magicoder and MagicoderS substantially outperform state-of-the-art code models with similar or even larger sizes on a wide range of coding benchmarks.
arXiv Detail & Related papers (2023-12-04T18:50:35Z)
- Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation [4.310519298899164]
In this work, we evaluate 10 open-source instructed LLMs on four representative code comprehension and generation tasks.
In the zero-shot setting, instructed LLMs are very competitive on code comprehension and generation tasks.
In the few-shot setting, adding demonstration examples substantially improves their performance.
arXiv Detail & Related papers (2023-08-02T15:54:22Z)
- CodeT5+: Open Code Large Language Models for Code Understanding and Generation [72.1638273937025]
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence.
CodeT5+ is a family of encoder-decoder LLMs for code in which component modules can be flexibly combined to suit a wide range of downstream code tasks.
We extensively evaluate CodeT5+ on over 20 code-related benchmarks in different settings, including zero-shot, finetuning, and instruction-tuning.
arXiv Detail & Related papers (2023-05-13T14:23:07Z)
- StarCoder: may the source be with you! [79.93915935620798]
The BigCode community introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length.
StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories.
arXiv Detail & Related papers (2023-05-09T08:16:42Z)