Investigating Student Reasoning in Method-Level Code Refactoring: A Think-Aloud Study
- URL: http://arxiv.org/abs/2410.20875v2
- Date: Tue, 05 Nov 2024 16:29:23 GMT
- Title: Investigating Student Reasoning in Method-Level Code Refactoring: A Think-Aloud Study
- Authors: Eduardo Carneiro Oliveira, Hieke Keuning, Johan Jeuring
- Abstract summary: Code refactoring and code quality are core topics in software engineering education.
Students often produce code with persistent quality issues.
Students were able to remove code quality issues in most cases.
- Score: 0.7120027021375674
- License:
- Abstract: Producing code of good quality is an essential skill in software development. Code quality is an aspect of software quality that concerns the directly observable properties of code, such as decomposition, modularization, and code flow. Code quality can often be improved by means of code refactoring -- an internal change made to code that does not alter its observable behavior. According to the ACM/IEEE-CS/AAAI Computer Science Curricula 2023, code refactoring and code quality are core topics in software engineering education. However, studies show that students often produce code with persistent quality issues. Therefore, it is important to understand what problems students experience when trying to identify and fix code quality issues. In a prior study, we identified a number of student misconceptions in method-level code refactoring. In this paper, we present the findings from a think-aloud study conducted to investigate what students think when working on method-level refactoring exercises. We use grounded theory to identify and classify student reasoning. As a result of the analysis, we identify a set of eight reasons given by students to refactor code, which concern the presence of code quality issues, the improvement of software quality attributes, or code semantics. We also analyze which quality issues are identified by students, and to which reasons these quality issues are related. We found that experienced students more often reason about code quality attributes than point at a problem they see in the code. Students were able to remove code quality issues in most cases. However, they often overlooked particular issues, such as the presence of a method with multiple responsibilities or the use of a less suitable loop structure.
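To make the two commonly overlooked issues from the abstract concrete, here is a minimal sketch. The language (Java) and all identifiers are assumptions for illustration; the paper does not include this code. The first method mixes two responsibilities (computing and printing) and uses a while loop where an enhanced for loop is the more suitable structure; the second version shows one possible refactoring.

```java
// Illustrative only: names and language are assumptions, not code from the study.

// Before: one method with two responsibilities (computing and printing),
// traversing the array with an index-based while loop.
class ReportBefore {
    void printReport(int[] scores) {
        int total = 0;
        int i = 0;
        while (i < scores.length) {
            total += scores[i];
            i++;
        }
        System.out.println("Total: " + total);
    }
}

// After: responsibilities separated, and the traversal expressed with an
// enhanced for loop, which fits a plain read-only pass more naturally.
class ReportAfter {
    int computeTotal(int[] scores) {
        int total = 0;
        for (int score : scores) {
            total += score;
        }
        return total;
    }

    void printReport(int[] scores) {
        System.out.println("Total: " + computeTotal(scores));
    }
}
```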
Related papers
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Broken Windows: Exploring the Applicability of a Controversial Theory on Code Quality [13.36825494924134]
We examine whether code history does indeed affect the evolution of code quality.
We check whether developers tailor the quality of their commits based on the quality of the file they commit to.
Our results have implications for both software practice and research.
arXiv Detail & Related papers (2024-10-17T12:16:35Z)
- Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
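As a rough sketch of the idea described above (pairing a failing test's error message with candidate code changes and asking an LLM which change likely caused the failure), the snippet below builds such a prompt. The `LlmClient` interface and its `complete` method are hypothetical placeholders, not part of the paper or of any specific library.

```java
import java.util.List;

// Hypothetical stand-in for whatever LLM API is actually used.
interface LlmClient {
    String complete(String prompt);
}

class FailureAnalyzer {
    private final LlmClient llm;

    FailureAnalyzer(LlmClient llm) {
        this.llm = llm;
    }

    // Pairs the error message with each candidate code change and asks the model
    // which change most likely caused the test failure.
    String findLikelyCulprit(String errorMessage, List<String> candidateDiffs) {
        StringBuilder prompt = new StringBuilder();
        prompt.append("A test failed with this error message:\n")
              .append(errorMessage)
              .append("\n\nCandidate code changes:\n");
        for (int i = 0; i < candidateDiffs.size(); i++) {
            prompt.append("Change ").append(i + 1).append(":\n")
                  .append(candidateDiffs.get(i)).append("\n\n");
        }
        prompt.append("Which change most likely caused the failure, and why?");
        return llm.complete(prompt.toString());
    }
}
```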
arXiv Detail & Related papers (2024-06-11T09:21:50Z)
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs [65.2379940117181]
We introduce code prompting, a chain of prompts that transforms a natural language problem into code.
We find that code prompting provides a high performance boost for multiple LLMs.
Our analysis of GPT 3.5 reveals that the code formatting of the input problem is essential for performance improvement.
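To illustrate what "transforming a natural language problem into code" could look like, here is a generic sketch (in Java, not an example from the paper; the paper prompts LLMs with such code rather than executing it). A conditional reasoning question such as "A visitor may enter only if they are over 18 or accompanied by an adult; Sam is 16 and accompanied by an adult -- may Sam enter?" might be rendered as:

```java
// Conditional reasoning problem rewritten as code; the model is then prompted
// with this program plus the question instead of the plain natural language text.
class EntryPolicy {
    static boolean mayEnter(int age, boolean accompaniedByAdult) {
        // Rule: visitors may enter if they are over 18 OR accompanied by an adult.
        return age > 18 || accompaniedByAdult;
    }

    public static void main(String[] args) {
        // Sam is 16 and accompanied by an adult.
        System.out.println(mayEnter(16, true)); // prints: true
    }
}
```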
arXiv Detail & Related papers (2024-01-18T15:32:24Z)
- Fixing Your Own Smells: Adding a Mistake-Based Familiarisation Step When Teaching Code Refactoring [2.021502591596062]
Students must first complete a programming exercise to ensure they will produce a code smell.
This simple intervention is based on the idea that learning is easier if students are familiar with the code.
We conducted a study with 35 novice undergraduates in which they completed various exercises, taught alternately using a traditional approach and our 'mistake-based' approach.
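As a hypothetical illustration of the kind of smell such an exercise might deliberately provoke (the concrete exercises are not given in the summary above), a student asked to print two receipts without being told about helper methods will often duplicate the formatting logic, which then becomes the target of the follow-up refactoring step:

```java
// Smell the exercise is designed to elicit: duplicated code.
class ReceiptsBefore {
    void printReceipts(double price1, double price2) {
        System.out.println("Item price: " + price1);
        System.out.println("Tax: " + price1 * 0.21);
        System.out.println("Total: " + price1 * 1.21);
        System.out.println("Item price: " + price2);
        System.out.println("Tax: " + price2 * 0.21);
        System.out.println("Total: " + price2 * 1.21);
    }
}

// Follow-up refactoring: extract the duplicated block into a single method.
class ReceiptsAfter {
    void printReceipts(double price1, double price2) {
        printReceipt(price1);
        printReceipt(price2);
    }

    private void printReceipt(double price) {
        System.out.println("Item price: " + price);
        System.out.println("Tax: " + price * 0.21);
        System.out.println("Total: " + price * 1.21);
    }
}
```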
arXiv Detail & Related papers (2024-01-02T03:39:19Z)
- Automating Source Code Refactoring in the Classroom [15.194527511076725]
This paper discusses the results of a classroom experiment that involved carrying out various activities aimed at removing antipatterns using JDeodorant, an Eclipse plugin that supports antipattern detection and correction.
The results of the quantitative and qualitative analysis with 171 students show that students tend to appreciate the idea of learning refactoring and are satisfied with various aspects of the JDeodorant plugin's operation.
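JDeodorant detects antipatterns such as Feature Envy and suggests Move Method refactorings. The following minimal example is a hypothetical illustration of that smell and its fix, not code from the classroom activities:

```java
// Feature Envy: InvoiceBefore.calculateTotal() is mostly interested in Order's data.
class Order {
    double price;
    int quantity;
    double discount;
}

class InvoiceBefore {
    double calculateTotal(Order order) {
        return order.price * order.quantity * (1 - order.discount);
    }
}

// Move Method: after the refactoring, the computation lives with the data it uses.
class OrderAfter {
    double price;
    int quantity;
    double discount;

    double total() {
        return price * quantity * (1 - discount);
    }
}
```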
arXiv Detail & Related papers (2023-11-05T18:46:00Z)
- Test Code Refactoring Unveiled: Where and How Does It Affect Test Code Quality and Effectiveness? [13.911747089737762]
Refactoring has been widely investigated in the past in relation to production code quality.
There is still a lack of investigation into how developers typically refactor test code.
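A typical example of the kind of test code refactoring studied in this line of work (this specific snippet is illustrative, not taken from the paper) is removing duplicated fixture setup by extracting it into a JUnit 5 @BeforeEach method:

```java
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal class under test, included only to keep the example self-contained.
class ShoppingCart {
    private int totalItems = 0;

    void add(String name, int quantity) {
        totalItems += quantity;
    }

    int totalItems() {
        return totalItems;
    }
}

class ShoppingCartTest {
    private ShoppingCart cart;

    // Refactoring: fixture setup that used to be copied into each test method
    // is extracted into a single @BeforeEach method.
    @BeforeEach
    void setUp() {
        cart = new ShoppingCart();
        cart.add("apple", 2);
    }

    @Test
    void totalItemsCountsQuantities() {
        assertEquals(2, cart.totalItems());
    }

    @Test
    void addingAnotherItemIncreasesTotal() {
        cart.add("pear", 1);
        assertEquals(3, cart.totalItems());
    }
}
```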
arXiv Detail & Related papers (2023-08-18T13:25:53Z)
- CodeReviewer: Pre-Training for Automating Code Review Activities [36.40557768557425]
This research focuses on utilizing pre-training techniques for the tasks in the code review scenario.
We collect a large-scale dataset of real-world code changes and code reviews from open-source projects in nine of the most popular programming languages.
To better understand code diffs and reviews, we propose CodeReviewer, a pre-trained model that utilizes four pre-training tasks tailored specifically for the code review scenario.
arXiv Detail & Related papers (2022-03-17T05:40:13Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)
- Code Review in the Classroom [57.300604527924015]
Young developers in a classroom setting provide a clear picture of the potentially favourable and problematic areas of the code review process.
Their feedback suggests that the process has been well received, with some suggestions for improving it.
This paper can be used as a guideline for performing code reviews in the classroom.
arXiv Detail & Related papers (2020-04-19T06:07:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.