Curriculum Learning for Small Code Language Models
- URL: http://arxiv.org/abs/2407.10194v1
- Date: Sun, 14 Jul 2024 13:32:24 GMT
- Title: Curriculum Learning for Small Code Language Models
- Authors: Marwa Naïr, Kamel Yamani, Lynda Said Lhadj, Riyadh Baghdadi
- Abstract summary: This paper explores the potential of curriculum learning in enhancing the performance of code language models.
We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models.
- Score: 0.09999629695552192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Code language models have emerged as useful tools for various programming tasks, yet they often struggle when it comes to complex ones. In this paper, we explore the potential of curriculum learning in enhancing the performance of these models. While prior research has suggested that curriculum learning does not necessarily help in improving the performance of language models, our results surprisingly show that this may not be the case for code language models. We demonstrate that a well-designed curriculum learning approach significantly improves the accuracy of small decoder-only code language models on the task of code execution, while its effect on code completion is less significant. To explore the potential of curriculum learning, we train multiple GPT models with 1 million parameters each to predict the next token and evaluate them on code completion and execution tasks. Our contributions include proposing a novel code difficulty assessment metric that combines software code measures, investigating the effectiveness of curriculum learning for code language models, and introducing a novel curriculum learning schedule that enhances the performance of small decoder-only language models on code execution tasks. The results of this paper open the door for more research on the use of curriculum learning for code language models.
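The abstract does not spell out how the difficulty metric combines software code measures, so the following is only a minimal sketch of the idea, assuming difficulty is a weighted mix of simple measures (line count, AST size, nesting depth); the weights and the choice of measures are illustrative, not the authors' values:

```python
import ast

def _depth(node: ast.AST) -> int:
    """Maximum nesting depth of an AST, a rough complexity measure."""
    children = list(ast.iter_child_nodes(node))
    return 1 + max((_depth(c) for c in children), default=0)

def difficulty_score(source: str) -> float:
    """Toy difficulty metric mixing simple software code measures.
    The weights below are illustrative assumptions, not the paper's."""
    tree = ast.parse(source)
    n_lines = len(source.splitlines())
    n_nodes = sum(1 for _ in ast.walk(tree))   # program size
    return 0.3 * n_lines + 0.5 * n_nodes + 0.2 * _depth(tree)

# Curriculum: present training programs from easy to hard.
programs = ["x = 1", "for i in range(3):\n    print(i * i)"]
curriculum = sorted(programs, key=difficulty_score)
```

Sorting training programs by such a score before batching gives the easy-to-hard ordering that a curriculum schedule builds on.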
Related papers
- Large Language Models in Computer Science Education: A Systematic Literature Review [7.240148550817106]
Large language models (LLMs) are becoming increasingly capable across a wide range of Natural Language Processing (NLP) tasks.
Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL).
arXiv Detail & Related papers (2024-10-21T17:49:50Z)
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks based on their control flow and data flow to bridge the gap between programming languages and natural language.
Various experiments and ablations on four datasets, covering both C++ and Python, validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
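As a rough illustration of what a "graphical view" of a code block can look like, here is a minimal sketch that extracts def-use edges from a Python AST; CodeGRAG's actual graphs are built from full control and data flow, so this is an assumption-laden simplification, not the authors' code:

```python
import ast

def flow_edges(source: str):
    """Crude data-flow view: edges from variables used in an assignment's
    right-hand side to the variables it defines."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
            used = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            edges += [(u, t) for t in targets for u in used]
    return edges

print(flow_edges("a = 1\nb = a + 2\nc = a * b"))
# [('a', 'b'), ('a', 'c'), ('b', 'c')]
```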
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
- Do Machines and Humans Focus on Similar Code? Exploring Explainability of Large Language Models in Code Summarization [10.201463330812167]
We report negative results from our investigation of explainability of language models in code summarization through the lens of human comprehension.
We employ a state-of-the-art model-agnostic, black-box, perturbation-based approach, SHAP, to identify which code tokens influence the generation of summaries.
Our study highlights an inability to align human focus with SHAP-based model focus measures.
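SHAP estimates Shapley values over token subsets; as a far simpler stand-in for the same perturbation idea, the sketch below measures how much masking each code token changes a summary score, where `score_fn` is a hypothetical callable returning the model's likelihood of the generated summary:

```python
def token_influence(tokens, score_fn, mask="<mask>"):
    """Occlusion-style proxy for perturbation-based attribution: the drop
    in the summary score when each code token is masked out. Not SHAP
    itself, just the simplest perturbation baseline."""
    base = score_fn(tokens)
    return {
        (i, tok): base - score_fn(tokens[:i] + [mask] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

# Toy usage with a fake scorer that only cares about 'return'.
toy_score = lambda ts: float("return" in ts)
print(token_influence(["def", "f", ":", "return", "x"], toy_score))
```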
arXiv Detail & Related papers (2024-02-22T00:01:02Z)
- Code Representation Learning At Scale [75.04686476303436]
We fuel code representation learning with a vast amount of code data via a two-stage pretraining scheme.
We first train the encoders via a mix that leverages both randomness in masked language modeling and the structural aspects of programming languages.
We then enhance the representations via contrastive learning with hard negative and hard positive constructed in an unsupervised manner.
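The paper's exact objective is not given in this summary; a generic sketch of contrastive learning with a hard positive and hard negatives, written as an InfoNCE-style loss in PyTorch, looks roughly like this:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, hard_negatives, tau=0.05):
    """InfoNCE-style loss over one anchor, one hard positive and k hard
    negatives. Shapes: anchor/positive (d,), hard_negatives (k, d).
    A generic sketch, not the paper's exact objective."""
    a = F.normalize(anchor, dim=-1)
    cands = F.normalize(torch.vstack([positive.unsqueeze(0), hard_negatives]), dim=-1)
    logits = (cands @ a / tau).unsqueeze(0)    # (1, k + 1) scaled similarities
    target = torch.zeros(1, dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, target)

loss = contrastive_loss(torch.randn(128), torch.randn(128), torch.randn(7, 128))
```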
arXiv Detail & Related papers (2024-02-02T22:19:15Z)
- L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models [102.00201523306986]
We present L2CEval, a systematic evaluation of the language-to-code generation capabilities of large language models (LLMs).
We analyze the factors that potentially affect their performance, such as model size, pretraining data, instruction tuning, and different prompting methods.
In addition to assessing model performance, we measure confidence calibration for the models and conduct human evaluations of the output programs.
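The summary does not say how calibration is measured; a common choice, and only a guess here, is expected calibration error (ECE), sketched below:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: average |accuracy - confidence| over equal-width
    confidence bins, weighted by bin size. Not necessarily the paper's
    exact recipe."""
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```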
arXiv Detail & Related papers (2023-09-29T17:57:00Z)
- Active Code Learning: Benchmarking Sample-Efficient Training of Code Models [35.54965391159943]
In machine learning for code (ML4Code), efficiently training models of code with less human effort has become a pressing problem.
Active learning is one such technique: it lets developers train a model on less labeled data while still reaching the desired performance.
This paper builds the first benchmark to study this critical problem - active code learning.
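The benchmark compares acquisition strategies; as one concrete example, here is a minimal least-confidence sampling step, assuming a hypothetical scikit-learn-style model interface:

```python
import numpy as np

def uncertainty_sampling(model, pool, budget):
    """Classic least-confidence acquisition: pick the `budget` unlabeled
    code samples the model is least sure about. `model.predict_proba`
    is an assumed scikit-learn-style interface, not a fixed API."""
    probs = model.predict_proba(pool)       # (n_samples, n_classes)
    confidence = probs.max(axis=1)          # probability of the top class
    return np.argsort(confidence)[:budget]  # indices to send for labeling
```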
arXiv Detail & Related papers (2023-06-02T03:26:11Z)
- Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
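As a toy illustration of mutation-based data augmentation, the sketch below applies a single AST-level mutation (swapping '+' for '-') to generate program variants; the paper's augmentation uses a much richer set of mutations over real Python code:

```python
import ast

class SwapArithmetic(ast.NodeTransformer):
    """Tiny mutation operator: rewrite '+' as '-' to produce a variant
    program. A toy stand-in, not the authors' mutation set."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def mutate(source: str) -> str:
    tree = SwapArithmetic().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)  # requires Python 3.9+

print(mutate("x = 1 + 2\nprint(x + 3)"))  # x = 1 - 2 / print(x - 3)
```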
arXiv Detail & Related papers (2023-05-08T10:00:05Z)
- Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study [4.438873396405334]
We aim to answer whether making code easier to understand by adding contextual data improves the performance of pre-trained code language models on the task of code completion.
For comments, we find that the models perform better in the presence of multi-line comments.
arXiv Detail & Related papers (2023-04-24T17:09:14Z)
- What Makes Good In-context Demonstrations for Code Intelligence Tasks with LLMs? [60.668318972782295]
Large language models have shown the ability to perform in-context learning (ICL).
ICL employs task instructions and a few examples as demonstrations, which are then fed to the language model to make predictions.
It is important to systematically investigate how to construct a good demonstration for code-related tasks.
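For concreteness, a demonstration-based prompt for a code task is typically assembled as below; which demonstrations to pick and how to order them is exactly what the paper studies, and the field names here are hypothetical:

```python
def build_icl_prompt(instruction, demonstrations, query):
    """Assemble an ICL prompt: task instruction, a few (input, output)
    demonstrations, then the query to be completed by the model."""
    parts = [instruction]
    for code, label in demonstrations:
        parts.append(f"Code:\n{code}\nAnswer: {label}")
    parts.append(f"Code:\n{query}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_icl_prompt(
    "Decide whether the snippet contains a bug.",
    [("if x = 1: pass", "buggy"), ("if x == 1: pass", "clean")],
    "while True print('hi')",
)
```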
arXiv Detail & Related papers (2023-04-15T15:13:58Z)
- Automatic Generation of Programming Exercises and Code Explanations with Large Language Models [4.947560475228859]
OpenAI Codex is a recent large language model from the GPT-3 family for translating natural language into code.
We explore the natural language generation capabilities of Codex in two different phases of the life of a programming exercise.
We find the majority of this automatically generated content both novel and sensible, and in many cases ready to use as is.
arXiv Detail & Related papers (2022-06-03T11:00:43Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
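Pass rates like the ~15% figure come from running generated programs against each problem's input/output test cases; a simplified harness (not the official APPS evaluation code) might look like this:

```python
import subprocess, sys

def test_case_pass_rate(solution_path, test_cases, timeout=4):
    """Fraction of (stdin, expected-stdout) pairs a generated program
    passes. A minimal sketch of the kind of measurement behind the
    reported pass rates."""
    passed = 0
    for stdin, expected in test_cases:
        try:
            result = subprocess.run(
                [sys.executable, solution_path], input=stdin,
                capture_output=True, text=True, timeout=timeout,
            )
            passed += result.stdout.strip() == expected.strip()
        except subprocess.TimeoutExpired:
            pass  # a hanging solution simply fails this test case
    return passed / len(test_cases)
```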
arXiv Detail & Related papers (2021-05-20T17:58:42Z)