Fixing Your Own Smells: Adding a Mistake-Based Familiarisation Step When
Teaching Code Refactoring
- URL: http://arxiv.org/abs/2401.01011v1
- Date: Tue, 2 Jan 2024 03:39:19 GMT
- Title: Fixing Your Own Smells: Adding a Mistake-Based Familiarisation Step When
Teaching Code Refactoring
- Authors: Ivan Tan, Christopher M. Poskitt
- Abstract summary: Students must first complete a programming exercise to ensure they will produce a code smell.
This simple intervention is based on the idea that learning is easier if students are familiar with the code.
We conducted a study with 35 novice undergraduates in which they completed various exercises alternately taught using a traditional and our 'mistake-based' approach.
- Score: 2.021502591596062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Programming problems can be solved in a multitude of functionally correct
ways, but the quality of these solutions (e.g. readability, maintainability)
can vary immensely. When code quality is poor, symptoms emerge in the form of
'code smells', which are specific negative characteristics (e.g. duplicate
code) that can be resolved by applying refactoring patterns. Many undergraduate
computing curricula train students on this software engineering practice, often
doing so via exercises on unfamiliar instructor-provided code. Our observation,
however, is that this makes it harder for novices to internalise refactoring as
part of their own development practices. In this paper, we propose a new
approach to teaching refactoring, in which students must first complete a
programming exercise constrained to ensure they will produce a code smell. This
simple intervention is based on the idea that learning refactoring is easier if
students are familiar with the code (having built it), that it brings
refactoring closer to their regular development practice, and that it presents
a powerful opportunity to learn from a 'mistake'. We designed and conducted a
study with 35 novice undergraduates in which they completed various refactoring
exercises alternately taught using a traditional and our 'mistake-based'
approach, finding that students were significantly more effective and confident
at completing exercises using the latter.
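To make the abstract's central concepts concrete, here is a minimal hypothetical illustration (not taken from the paper) of the 'duplicate code' smell mentioned above and its resolution via the Extract Method refactoring pattern; the function names and grading scenario are invented for this sketch.

```python
# Hypothetical example of the 'duplicate code' smell: the averaging
# logic appears verbatim in two reporting functions.
def report_quiz_average(quiz_scores):
    total = sum(quiz_scores)
    avg = total / len(quiz_scores) if quiz_scores else 0.0
    return f"Quiz average: {avg:.1f}"

def report_exam_average(exam_scores):
    total = sum(exam_scores)
    avg = total / len(exam_scores) if exam_scores else 0.0
    return f"Exam average: {avg:.1f}"

# Refactored with Extract Method: the shared logic lives in one helper,
# so any future change (e.g. weighting) is made in exactly one place.
def average(scores):
    return sum(scores) / len(scores) if scores else 0.0

def report_average(label, scores):
    return f"{label} average: {average(scores):.1f}"

print(report_average("Quiz", [80, 90, 100]))  # Quiz average: 90.0
```

In the paper's approach, students would first be constrained into writing something like the duplicated version themselves before being taught the extraction step, rather than refactoring unfamiliar instructor-provided code.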
Related papers
- Insights into Deep Learning Refactoring: Bridging the Gap Between Practices and Expectations [13.084553746852382]
Deep learning software has become progressively complex as the software evolves.
Insight into code refactoring in the context of deep learning is still unclear.
Research and the development of related tools are crucial for improving project maintainability and code quality.
arXiv Detail & Related papers (2024-05-08T07:35:14Z) - A Survey of Deep Learning Based Software Refactoring [5.716522445049744]
Dozens of deep learning-based approaches have been proposed for refactoring software.
There is a lack of comprehensive reviews on such works as well as a taxonomy for deep learning-based approaches.
Most of the deep learning techniques have been used for the detection of code smells and the recommendation of solutions.
arXiv Detail & Related papers (2024-04-30T03:07:11Z) - ReGAL: Refactoring Programs to Discover Generalizable Abstractions [59.05769810380928]
Generalizable Abstraction Learning (ReGAL) is a method for learning a library of reusable functions via code refactorization.
We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains.
For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains.
arXiv Detail & Related papers (2024-01-29T18:45:30Z) - Automating Source Code Refactoring in the Classroom [15.194527511076725]
This paper discusses the results of an experiment in the classroom that involved carrying out various activities for the purpose of removing antipatterns using JDeodorant, an Eclipse plugin that supports antipattern detection and correction.
The results of the quantitative and qualitative analysis with 171 students show that students tend to appreciate the idea of learning refactoring, and are satisfied with various aspects of the JDeodorant plugin's operation.
arXiv Detail & Related papers (2023-11-05T18:46:00Z) - Empirical Evaluation of a Live Environment for Extract Method
Refactoring [0.0]
We developed a Live Refactoring Environment that visually identifies, recommends, and applies Extract Methods.
Our results were significantly different from, and better than, the ones obtained by refactoring the code manually without further help.
arXiv Detail & Related papers (2023-07-20T16:36:02Z) - CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart.
arXiv Detail & Related papers (2023-06-05T20:39:08Z) - Coder Reviewer Reranking for Code Generation [56.80381384717]
We propose Coder-Reviewer reranking as a method for sampling diverse programs from a code language model and reranking with model likelihood.
Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement over reranking with the Coder model only.
Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
arXiv Detail & Related papers (2022-11-29T18:56:33Z) - Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved, on average, a ranking in the top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z) - How We Refactor and How We Document it? On the Use of Supervised Machine
Learning Algorithms to Classify Refactoring Documentation [25.626914797750487]
Refactoring is the art of improving the design of a system without altering its external behavior.
This study categorizes commits into 3 categories, namely, Internal QA, External QA, and Code Smell Resolution, along with the traditional BugFix and Functional categories.
To better understand our classification results, we analyzed commit messages to extract patterns that developers regularly use to describe their smells.
arXiv Detail & Related papers (2020-10-26T20:33:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.