Fixing Your Own Smells: Adding a Mistake-Based Familiarisation Step When
Teaching Code Refactoring
- URL: http://arxiv.org/abs/2401.01011v1
- Date: Tue, 2 Jan 2024 03:39:19 GMT
- Title: Fixing Your Own Smells: Adding a Mistake-Based Familiarisation Step When
Teaching Code Refactoring
- Authors: Ivan Tan, Christopher M. Poskitt
- Abstract summary: Students must first complete a programming exercise constrained to ensure they will produce a code smell.
This simple intervention is based on the idea that learning is easier if students are familiar with the code.
We conducted a study with 35 novice undergraduates in which they completed various exercises alternately taught using a traditional and our 'mistake-based' approach.
- Score: 2.021502591596062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Programming problems can be solved in a multitude of functionally correct
ways, but the quality of these solutions (e.g. readability, maintainability)
can vary immensely. When code quality is poor, symptoms emerge in the form of
'code smells', which are specific negative characteristics (e.g. duplicate
code) that can be resolved by applying refactoring patterns. Many undergraduate
computing curricula train students on this software engineering practice, often
doing so via exercises on unfamiliar instructor-provided code. Our observation,
however, is that this makes it harder for novices to internalise refactoring as
part of their own development practices. In this paper, we propose a new
approach to teaching refactoring, in which students must first complete a
programming exercise constrained to ensure they will produce a code smell. This
simple intervention is based on the idea that learning refactoring is easier if
students are familiar with the code (having built it), that it brings
refactoring closer to their regular development practice, and that it presents
a powerful opportunity to learn from a 'mistake'. We designed and conducted a
study with 35 novice undergraduates in which they completed various refactoring
exercises alternately taught using a traditional and our 'mistake-based'
approach, finding that students were significantly more effective and confident
at completing exercises using the latter.
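To make the terminology concrete, below is a minimal Python sketch of the 'duplicate code' smell and a refactoring that resolves it. The example is illustrative only and is not taken from the paper's exercises.

```python
# A deliberately smelly solution: the per-subject average is computed
# three times with copy-pasted logic (the 'duplicate code' smell).
def report_smelly(maths, physics, chemistry):
    maths_avg = sum(maths) / len(maths) if maths else 0.0
    physics_avg = sum(physics) / len(physics) if physics else 0.0
    chemistry_avg = sum(chemistry) / len(chemistry) if chemistry else 0.0
    return maths_avg, physics_avg, chemistry_avg

# The same behaviour after refactoring: the repeated computation now
# lives in a single well-named helper function.
def average(scores):
    return sum(scores) / len(scores) if scores else 0.0

def report_refactored(maths, physics, chemistry):
    return average(maths), average(physics), average(chemistry)

# The refactoring preserves functional behaviour.
assert report_smelly([1, 2], [3], []) == report_refactored([1, 2], [3], [])
```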
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Investigating Student Reasoning in Method-Level Code Refactoring: A Think-Aloud Study [0.7120027021375674]
Code and code quality are core topics in software engineering education.
Students often produce code with persistent quality issues.
Students were able to remove code quality issues in most cases.
arXiv Detail & Related papers (2024-10-28T09:50:16Z)
- Sifting through the Chaff: On Utilizing Execution Feedback for Ranking the Generated Code Candidates [46.74037090843497]
Large Language Models (LLMs) are transforming the way developers approach programming by automatically generating code based on natural language descriptions.
This paper puts forward RankEF, an innovative approach for code ranking that leverages execution feedback.
Experiments on three code generation benchmarks demonstrate that RankEF significantly outperforms the state-of-the-art CodeRanker.
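As a rough illustration of the execution-feedback signal that RankEF builds on, the Python sketch below ranks candidate programs by their test pass rate. The `solve` naming convention and the direct pass-rate heuristic are assumptions made for illustration; RankEF itself trains a ranker on such feedback rather than executing tests at ranking time.

```python
# Toy ranking of generated code candidates by execution feedback.
def passes(candidate_src, test_input, expected):
    namespace = {}
    try:
        exec(candidate_src, namespace)      # define the candidate's solve()
        return namespace["solve"](test_input) == expected
    except Exception:
        return False                        # crashes count as failures

def rank_by_execution_feedback(candidates, tests):
    def pass_rate(src):
        return sum(passes(src, x, y) for x, y in tests) / len(tests)
    return sorted(candidates, key=pass_rate, reverse=True)

candidates = [
    "def solve(x):\n    return x + 2",      # wrong on most inputs
    "def solve(x):\n    return x * 2",      # correct
]
tests = [(1, 2), (3, 6), (0, 0)]
assert rank_by_execution_feedback(candidates, tests)[0] == candidates[1]
```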
arXiv Detail & Related papers (2024-08-26T01:48:57Z)
- Insights into Deep Learning Refactoring: Bridging the Gap Between Practices and Expectations [13.084553746852382]
Deep learning software has grown progressively more complex as it evolves.
Insight into code refactoring in the context of deep learning is still lacking.
Research and the development of related tools are crucial for improving project maintainability and code quality.
arXiv Detail & Related papers (2024-05-08T07:35:14Z)
- ReGAL: Refactoring Programs to Discover Generalizable Abstractions [59.05769810380928]
Refactoring for Generalizable Abstraction Learning (ReGAL) is a method for learning a library of reusable functions via code refactorization.
We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains.
For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains.
arXiv Detail & Related papers (2024-01-29T18:45:30Z)
- Automating Source Code Refactoring in the Classroom [15.194527511076725]
This paper discusses the results of a classroom experiment that involved carrying out various activities for the purpose of removing antipatterns using JDeodorant, an Eclipse plugin that supports the detection and correction of antipatterns.
The results of the quantitative and qualitative analysis with 171 students show that students tend to appreciate the idea of learning refactoring and are satisfied with various aspects of the JDeodorant plugin's operation.
arXiv Detail & Related papers (2023-11-05T18:46:00Z)
- Empirical Evaluation of a Live Environment for Extract Method Refactoring [0.0]
We developed a Live Refactoring Environment that visually identifies, recommends, and applies Extract Method refactorings.
Our results were significantly better than those obtained by refactoring code manually without further help.
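For context, here is a minimal Python sketch of the Extract Method refactoring that the environment targets; the code is illustrative and not taken from the study.

```python
# Before: one long function mixing validation, computation, and formatting.
def invoice_total_long(items, tax_rate):
    if tax_rate < 0:
        raise ValueError("tax rate must be non-negative")
    subtotal = 0.0
    for price, quantity in items:
        subtotal += price * quantity
    return f"Total: {subtotal * (1 + tax_rate):.2f}"

# After Extract Method: each cohesive block becomes its own named function.
def subtotal_of(items):
    return sum(price * quantity for price, quantity in items)

def with_tax(amount, tax_rate):
    if tax_rate < 0:
        raise ValueError("tax rate must be non-negative")
    return amount * (1 + tax_rate)

def invoice_total(items, tax_rate):
    return f"Total: {with_tax(subtotal_of(items), tax_rate):.2f}"

# Behaviour is unchanged by the refactoring.
assert invoice_total_long([(2.0, 3)], 0.1) == invoice_total([(2.0, 3)], 0.1)
```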
arXiv Detail & Related papers (2023-07-20T16:36:02Z)
- CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart.
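The sketch below illustrates this contrastive objective in plain Python, assuming precomputed vector embeddings; it is an InfoNCE-style illustration of the idea, not CONCORD's actual implementation.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style: pull the benign clone (positive) towards the anchor
    # while pushing buggy 'deviants' (negatives) away.
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

anchor  = [0.9, 0.1]    # embedding of the original snippet (hypothetical)
clone   = [0.85, 0.15]  # semantically equivalent clone: close to the anchor
deviant = [-0.2, 0.95]  # buggy variant: far from the anchor
print(contrastive_loss(anchor, clone, [deviant]))  # low loss, as desired
```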
arXiv Detail & Related papers (2023-06-05T20:39:08Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)
- How We Refactor and How We Document it? On the Use of Supervised Machine Learning Algorithms to Classify Refactoring Documentation [25.626914797750487]
Refactoring is the art of improving the design of a system without altering its external behavior.
This study categorizes commits into 3 categories, namely, Internal QA, External QA, and Code Smell Resolution, along with the traditional BugFix and Functional categories.
To better understand our classification results, we analyzed commit messages to extract patterns that developers regularly use to describe their smells.
arXiv Detail & Related papers (2020-10-26T20:33:17Z)