On the Use of Static Analysis to Engage Students with Software Quality
Improvement: An Experience with PMD
- URL: http://arxiv.org/abs/2302.05554v2
- Date: Thu, 13 Jul 2023 12:08:45 GMT
- Title: On the Use of Static Analysis to Engage Students with Software Quality
Improvement: An Experience with PMD
- Authors: Eman Abdullah AlOmar, Salma Abdullah AlOmar, Mohamed Wiem Mkaouer
- Abstract summary: We reflect on our experience with teaching the use of static analysis and evaluate its effectiveness in helping students improve software quality.
This paper discusses the results of a classroom experiment over a period of 3 academic semesters, involving 65 submissions that carried out a code review activity covering 690 rules using PMD.
- Score: 12.961585735468313
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Static analysis tools are frequently used to scan the source code and detect
deviations from the project coding guidelines. Given their importance, linters
are often introduced to classrooms to educate students on how to detect and
potentially avoid these code anti-patterns. However, little is known about
their effectiveness in raising students' awareness, given that these linters
tend to generate a large number of false positives. To increase the awareness
of potential coding issues that violate coding standards, in this paper, we aim
to reflect on our experience with teaching the use of static analysis, and to
evaluate its effectiveness in helping students improve software quality. This
paper discusses the results of an experiment
in the classroom over a period of 3 academic semesters, involving 65
submissions that carried out a code review activity covering 690 rules using PMD. The
results of the quantitative and qualitative analysis show that the presence of
a set of PMD quality issues influences the acceptance or rejection of the
issues, that design- and best-practices-related categories take longer to
resolve, and that students acknowledge the potential of using static analysis
tools during code review. This experiment suggests that code review can become
a vital part of the computing education curriculum. We envision our findings
enabling educators to support students with code review strategies, raising
students' awareness of static analysis tools and scaffolding their coding skills.
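The abstract does not include the concrete ruleset used in the classroom experiment; as an illustrative sketch only, a minimal PMD ruleset that selects a handful of rules from the standard Java rule categories (including the design and best-practices categories discussed above) could look like this. The ruleset name and the specific rule choices here are assumptions, not the authors' configuration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical classroom ruleset; rule refs are from PMD's built-in Java categories. -->
<ruleset name="classroom-review"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>Example rule subset for a classroom code review exercise.</description>

    <!-- Best practices: detect dead code -->
    <rule ref="category/java/bestpractices.xml/UnusedPrivateMethod"/>
    <!-- Design: flag overly complex methods -->
    <rule ref="category/java/design.xml/CyclomaticComplexity"/>
    <!-- Code style: flag imports that are never used -->
    <rule ref="category/java/codestyle.xml/UnnecessaryImport"/>
</ruleset>
```

With a recent PMD release, such a ruleset would typically be applied to student submissions with something like `pmd check -d src/ -R classroom-review.xml -f text` (the exact command-line syntax varies between PMD 6 and PMD 7).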
Related papers
- Teaching Well-Structured Code: A Literature Review of Instructional Approaches [2.389598109913754]
This systematic literature review identifies existing instructional approaches, their objectives, and the strategies used for measuring their effectiveness.
We classified these studies into three categories: (1) studies focused on developing or evaluating automated tools and their usage, (2) studies discussing other instructional materials, and (3) studies discussing how to integrate code structure into the curriculum through a holistic approach to course design to support code quality.
arXiv Detail & Related papers (2025-02-16T18:51:22Z) - Chatbots in the Classroom: Testing the Fobizz Tool for Automatic Homework Grading [0.0]
This study examines the AI-powered grading tool "AI Grading Assistant" by the German company Fobizz.
The tool's numerical grades and qualitative feedback are often random and do not improve even when its suggestions are incorporated.
The study critiques the broader trend of adopting AI as a quick fix for systemic problems in education.
arXiv Detail & Related papers (2024-12-09T16:50:02Z) - Evaluation of Systems Programming Exercises through Tailored Static Analysis [4.335676282295717]
In large programming classes, it takes a significant effort from teachers to evaluate exercises and provide detailed feedback.
In systems programming, test cases alone are not sufficient to assess exercises, since low-level programming and resource-management bugs are difficult to reproduce.
This paper presents an experience report on static analysis for the automatic evaluation of systems programming exercises.
arXiv Detail & Related papers (2024-10-06T10:56:29Z) - Predicting Expert Evaluations in Software Code Reviews [8.012861163935904]
This paper presents an algorithmic model that automates aspects of code review typically avoided due to their complexity or subjectivity.
Instead of replacing manual reviews, our model adds insights that help reviewers focus on more impactful tasks.
arXiv Detail & Related papers (2024-09-23T16:01:52Z) - Leveraging Large Language Models for Efficient Failure Analysis in Game Development [47.618236610219554]
This paper proposes a new approach to automatically identify which change in the code caused a test to fail.
The method leverages Large Language Models (LLMs) to associate error messages with the corresponding code changes causing the failure.
Our approach reaches an accuracy of 71% in our newly created dataset, which comprises issues reported by developers at EA over a period of one year.
arXiv Detail & Related papers (2024-06-11T09:21:50Z) - Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z) - From Static Benchmarks to Adaptive Testing: Psychometrics in AI Evaluation [60.14902811624433]
We discuss a paradigm shift from static evaluation methods to adaptive testing.
This involves estimating the characteristics and value of each test item in the benchmark and dynamically adjusting items in real-time.
We analyze the current approaches, advantages, and underlying reasons for adopting psychometrics in AI evaluation.
arXiv Detail & Related papers (2023-06-18T09:54:33Z) - Active Teacher for Semi-Supervised Object Detection [80.10937030195228]
We propose a novel algorithm called Active Teacher for semi-supervised object detection (SSOD).
Active Teacher extends the teacher-student framework to an iterative version, where the label set is partially and gradually augmented by evaluating three key factors of unlabeled examples.
With this design, Active Teacher can maximize the effect of limited label information while improving the quality of pseudo-labels.
arXiv Detail & Related papers (2023-03-15T03:59:27Z) - An Analysis of Programming Course Evaluations Before and After the
Introduction of an Autograder [1.329950749508442]
This paper studies the answers to the standardized university evaluation questionnaires of foundational computer science courses which recently introduced autograding.
We hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty.
The autograder technology can be validated as a teaching method to improve student satisfaction with programming courses.
arXiv Detail & Related papers (2021-10-28T14:09:44Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Code Review in the Classroom [57.300604527924015]
Young developers in a classroom setting provide a clear picture of the potential favourable and problematic areas of the code review process.
Their feedback suggests that the process has been well received, with some suggestions for improving it.
This paper can be used as guidelines to perform code reviews in the classroom.
arXiv Detail & Related papers (2020-04-19T06:07:45Z)