Who Introduces and Who Fixes? Analyzing Code Quality in Collaborative Student's Projects
- URL: http://arxiv.org/abs/2505.14315v1
- Date: Tue, 20 May 2025 13:02:45 GMT
- Title: Who Introduces and Who Fixes? Analyzing Code Quality in Collaborative Student's Projects
- Authors: Rafael Corsi Ferrao, Igor dos Santos Montagner, Rodolfo Azevedo
- Abstract summary: This paper investigates how errors are introduced and corrected in group projects within an embedded systems course. Students learn code quality rules for C and embedded systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates code quality education by analyzing how errors are introduced and corrected in group projects within an embedded systems course. We identify who introduces errors, who fixes them, and when these actions occur. Students learn code quality rules for C and embedded systems. We address three questions: RQ1: What is the impact of group formation on code quality? RQ2: How do students interact to fix code issues? RQ3: When are issues introduced and resolved? We analyzed data from eight individual labs and two group projects involving 34 students. The course provides continuous, automated feedback on code quality. Findings show that the most active contributors often introduce the most issues. Many issues are fixed late in the project. Individual labs tend to have fewer issues due to their structured nature. Most problems are fixed by the original author, while cross-student fixes take longer, especially in shared code. Critical issues are fixed quickly, but non-critical ones may be ignored, showing a focus on functionality over quality.
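To make the kind of analysis the abstract describes concrete, here is a minimal sketch, in Python, of how issue introduction and fixing could be attributed from a commit history annotated with linter results. The `Commit` structure, its field names, and the sample history are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: attribute linter issues to the commit authors who
# introduced and fixed them, and measure time-to-fix. Not the paper's code.
from dataclasses import dataclass

@dataclass
class Commit:
    author: str
    timestamp: int          # seconds since epoch (assumed unit)
    open_issues: set[str]   # issue IDs the linter still reports after this commit

def attribute_issues(commits: list[Commit]) -> list[dict]:
    """One record per resolved issue: who introduced it, who fixed it, how long it lived."""
    introduced: dict[str, Commit] = {}
    records: list[dict] = []
    previous: set[str] = set()
    for commit in commits:                            # commits in chronological order
        for issue in commit.open_issues - previous:   # issues that just appeared
            introduced[issue] = commit
        for issue in previous - commit.open_issues:   # issues that just disappeared
            intro = introduced.pop(issue)
            records.append({
                "issue": issue,
                "introduced_by": intro.author,
                "fixed_by": commit.author,
                "self_fix": intro.author == commit.author,
                "time_to_fix": commit.timestamp - intro.timestamp,
            })
        previous = commit.open_issues
    return records

history = [
    Commit("alice", 0, {"magic-number"}),
    Commit("bob", 3600, {"magic-number", "unused-var"}),
    Commit("alice", 7200, {"unused-var"}),  # alice fixes the issue she introduced
]
print(attribute_issues(history))
```

A record with `self_fix` set to False corresponds to the cross-student fixes the paper finds to be slower, especially in shared code.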
Related papers
- CPRet: A Dataset, Benchmark, and Model for Retrieval in Competitive Programming [56.17331530444765]
CPRet is a retrieval-oriented benchmark suite for competitive programming. It covers four retrieval tasks: two code-centric (i.e., Text-to-Code and Code-to-Code) and two newly proposed problem-centric tasks (i.e., Problem-to-Duplicate and Simplified-to-Full). Our contribution includes both high-quality training data and temporally separated test sets for reliable evaluation.
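As a rough illustration of the Text-to-Code task, the toy sketch below ranks code snippets against a problem description using bag-of-words cosine similarity; a real CPRet-style system would use a learned encoder, and all names here are invented.

```python
# Toy Text-to-Code retrieval: rank code snippets by cosine similarity to a
# problem description. Bag-of-words stands in for learned embeddings.
from collections import Counter
import math
import re

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_snippets(query: str, snippets: list[str]) -> list[tuple[float, str]]:
    q = tokens(query)
    return sorted(((cosine(q, tokens(s)), s) for s in snippets), reverse=True)

print(rank_snippets("return the sum of a list",
                    ["def total(xs): return sum(xs)",
                     "def first(xs): return xs[0]"]))
```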
arXiv Detail & Related papers (2025-05-19T10:07:51Z)
- BugSpotter: Automated Generation of Code Debugging Exercises [22.204802715829615]
This paper introduces BugSpotter, a tool to generate buggy code from a problem description and verify the synthesized bugs via a test suite.
Students interact with BugSpotter by designing failing test cases, where the buggy code's output differs from the expected result as defined by the problem specification.
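The shape of such an exercise can be illustrated with a hedged sketch: a deliberately buggy function plus a student-designed test case whose expected value comes from the specification. The function and test below are invented for illustration, not taken from BugSpotter.

```python
# Hypothetical BugSpotter-style exercise: a seeded bug and a student-designed
# failing test whose expected output comes from the problem specification.
def buggy_max(xs: list[int]) -> int:
    """Spec: return the largest element. Seeded bug: ignores the first item."""
    best = xs[1]            # should start from xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def failing_test() -> bool:
    """A good failing test: the spec expects 9, but the buggy code returns 2."""
    case = [9, 2, 1]
    expected = 9            # taken from the problem specification
    return buggy_max(case) != expected   # True means the test exposes the bug

print(failing_test())   # True: the student-designed test case catches the bug
```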
arXiv Detail & Related papers (2024-11-21T16:56:33Z)
- Understanding Code Understandability Improvements in Code Reviews [79.16476505761582]
We analyzed 2,401 code review comments from Java open-source projects on GitHub.
83.9% of suggestions for improvement were accepted and integrated, with fewer than 1% later reverted.
arXiv Detail & Related papers (2024-10-29T12:21:23Z)
- Investigating Student Reasoning in Method-Level Code Refactoring: A Think-Aloud Study [0.7120027021375674]
Code and code quality are core topics in software engineering education.
Students often produce code with persistent quality issues.
Students were able to remove code quality issues in most cases.
arXiv Detail & Related papers (2024-10-28T09:50:16Z)
- Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors [78.53699244846285]
Large language models (LLMs) present an opportunity to scale high-quality personalized education to all.
However, LLMs struggle to precisely detect students' errors and tailor their feedback to these errors.
Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions.
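A minimal sketch of this stepwise verification idea follows, assuming for illustration that solutions reduce to a list of intermediate numeric values; this is not the paper's actual representation or verifier.

```python
# Hedged sketch of stepwise verification: recompute a reference solution and
# flag the first student step that disagrees, so feedback targets that error.
from typing import Optional

def verify_steps(student: list[float], reference: list[float]) -> Optional[int]:
    """Index of the first incorrect step, or None if every step matches."""
    for i, (got, want) in enumerate(zip(student, reference)):
        if abs(got - want) > 1e-9:
            return i
    return None

# Student solving 3 * (4 + 5): the correct intermediate values are 9 and 27.
student = [9.0, 24.0]          # slipped on the final multiplication
reference = [9.0, 27.0]
bad = verify_steps(student, reference)
if bad is not None:
    print(f"First error at step {bad}: expected {reference[bad]}, got {student[bad]}")
```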
arXiv Detail & Related papers (2024-07-12T10:11:40Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter yet more complicated than canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
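The iterative method can be sketched as the loop below, with the LLM abstracted as a plain callable; the prompt format, round limit, and stub demo are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of a self-critique loop: run the candidate code, feed the
# compiler/test failure text back into the prompt, and retry. `model` is a
# stand-in callable, not a real LLM API.
from typing import Callable, Optional

def self_critique_loop(
    model: Callable[[str], str],                # prompt -> candidate source code
    task: str,
    run_tests: Callable[[str], Optional[str]],  # error text, or None on success
    max_rounds: int = 3,
) -> Optional[str]:
    prompt = task
    for _ in range(max_rounds):
        code = model(prompt)
        error = run_tests(code)                 # concrete compiler/test feedback
        if error is None:
            return code                         # all tests pass
        prompt = f"{task}\nYour last attempt failed with:\n{error}\nFix it."
    return None                                 # gave up after max_rounds

# Tiny stub demo: the "model" corrects itself after seeing the error once.
attempts = iter(["print(xs[5])", "print(xs[2])"])
fixed = self_critique_loop(
    model=lambda prompt: next(attempts),
    task="Print the last of three items.",
    run_tests=lambda src: None if "[2]" in src else "IndexError: list index out of range",
)
print(fixed)   # print(xs[2])
```

Appending the failure text verbatim gives each round the same concrete feedback a human debugger would read from the compiler.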
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- A Knowledge-Component-Based Methodology for Evaluating AI Assistants [9.412070852474313]
We evaluate an automatic hint generator for CS1 programming assignments powered by GPT-4.
This system provides natural language guidance about how students can improve their incorrect solutions to short programming exercises.
arXiv Detail & Related papers (2024-06-09T00:58:39Z) - Software issues report for bug fixing process: An empirical study of
machine-learning libraries [0.0]
We investigated the effectiveness of issue resolution for bug-fixing processes in six machine-learning libraries.
The most common categories of issues that arise in machine-learning libraries are bugs, documentation, optimization, crashes, enhancement, new feature requests, build/CI, support, and performance.
This study concludes that efficient issue-tracking processes, effective communication, and collaboration are vital for effective resolution of issues and bug fixing processes in machine-learning libraries.
arXiv Detail & Related papers (2023-12-10T21:33:19Z) - Automated Distractor and Feedback Generation for Math Multiple-choice
Questions via In-context Learning [43.83422798569986]
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer and grade, and are a reliable form of assessment.
To date, the task of crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers.
We propose a simple in-context learning-based solution for automatically generating distractors and corresponding feedback messages.
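A minimal sketch of the in-context setup follows, with hypothetical worked examples of (question, distractor, feedback) triples; the paper's actual prompt format may differ.

```python
# Hedged sketch of in-context distractor generation: pack a few worked
# examples into one prompt so the model completes the pattern for a new
# question. The example triples are invented, not from the paper.
EXAMPLES = [
    ("What is 1/2 + 1/4?", "2/6", "Added numerators and denominators separately."),
    ("What is 3 * (4 + 5)?", "17", "Multiplied before adding, ignoring parentheses."),
]

def build_prompt(new_question: str) -> str:
    shots = "\n\n".join(
        f"Question: {q}\nDistractor: {d}\nFeedback: {f}" for q, d, f in EXAMPLES
    )
    return f"{shots}\n\nQuestion: {new_question}\nDistractor:"

print(build_prompt("What is 2/3 - 1/6?"))
```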
arXiv Detail & Related papers (2023-08-07T01:03:04Z)
- Identifying Different Student Clusters in Functional Programming Assignments: From Quick Learners to Struggling Students [2.0386745041807033]
We analyze student assignment submission data collected from a functional programming course taught at McGill University.
This allows us to identify four clusters of students: "Quick-learning", "Hardworking", "Satisficing", and "Struggling".
We then analyze how work habits, working duration, the range of errors, and the ability to fix errors impact different clusters of students.
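As a hedged illustration of this kind of analysis, the sketch below clusters invented per-student features with k-means via scikit-learn; the feature choices and values are assumptions, not the study's data.

```python
# Hypothetical clustering of student submission features into four groups.
import numpy as np
from sklearn.cluster import KMeans

# One row per student: [hours worked, error count, fraction of errors fixed]
features = np.array([
    [2.0,  3, 0.9],   # quick learner: little time, few errors, fixes them
    [9.0,  8, 0.8],   # hardworking: lots of time, fixes most errors
    [3.0,  7, 0.4],   # satisficing: stops once the grade is "good enough"
    [8.0, 15, 0.2],   # struggling: long hours, many unfixed errors
])

labels = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(features)
print(labels)   # cluster assignment per student
```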
arXiv Detail & Related papers (2023-01-06T17:15:58Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
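The few-shot framing can be sketched with a prototypical-network-style nearest-prototype rule: each feedback class is the mean embedding of a few labeled examples, and a new submission gets the label of the closest prototype. The toy embeddings and class labels below are invented; the real system meta-learns the encoder.

```python
# Hedged sketch of few-shot feedback classification by nearest prototype.
import numpy as np

def nearest_prototype(query: np.ndarray, support: dict[str, np.ndarray]) -> str:
    """Classify `query` by the class whose mean support embedding is closest."""
    prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}
    return min(prototypes, key=lambda lbl: np.linalg.norm(query - prototypes[lbl]))

# Two feedback classes, each with two instructor-labeled example embeddings.
support = {
    "off-by-one": np.array([[1.0, 0.1], [0.9, 0.2]]),
    "wrong-operator": np.array([[0.1, 1.0], [0.2, 0.8]]),
}
print(nearest_prototype(np.array([0.95, 0.1]), support))   # "off-by-one"
```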
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- The Influence of Domain-Based Preprocessing on Subject-Specific Clustering [55.41644538483948]
The sudden shift to predominantly online teaching at universities has increased the workload for academics, notably in answering student questions.
One way to deal with this problem is to cluster these questions by topic.
In this paper, we explore approaches to tagging data sets, focusing on identifying code excerpts, and provide empirical results.
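One plausible form of such preprocessing, sketched with invented surface cues rather than the authors' actual rules: tag lines that look like code so their tokens can be handled separately before clustering.

```python
# Hedged sketch: label each line of a student question as 'code' or 'prose'
# using simple surface cues. The patterns are illustrative assumptions.
import re

CODE_CUES = re.compile(r"def |return |==|\{|\}|;\s*$|print\(")

def tag_lines(text: str) -> list[tuple[str, str]]:
    """Tag code-like lines so code tokens can be treated separately."""
    return [("code" if CODE_CUES.search(line) else "prose", line)
            for line in text.splitlines()]

question = "Why does this crash?\nprint(xs[5])\nxs only has three items."
for label, line in tag_lines(question):
    print(label, "|", line)
```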
arXiv Detail & Related papers (2020-11-16T17:47:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.