Teaching Well-Structured Code: A Literature Review of Instructional Approaches
- URL: http://arxiv.org/abs/2502.11230v1
- Date: Sun, 16 Feb 2025 18:51:22 GMT
- Title: Teaching Well-Structured Code: A Literature Review of Instructional Approaches
- Authors: Sara Nurollahian, Hieke Keuning, Eliane Wiese
- Abstract summary: This systematic literature review identifies existing instructional approaches, their objectives, and the strategies used for measuring their effectiveness.
We classified these studies into three categories: (1) studies focused on developing or evaluating automated tools and their usage, (2) studies discussing other instructional materials, and (3) studies discussing how to integrate code structure into the curriculum through a holistic approach to course design to support code quality.
- Score: 2.389598109913754
- Abstract: Teaching the software engineers of the future to write high-quality code with good style and structure is important. This systematic literature review identifies existing instructional approaches, their objectives, and the strategies used for measuring their effectiveness. Building on an existing mapping study of code quality in education, we identified 53 papers on code structure instruction. We classified these studies into three categories: (1) studies focused on developing or evaluating automated tools and their usage (e.g., code analyzers, tutors, and refactoring tools), (2) studies discussing other instructional materials, such as learning resources (e.g., refactoring lessons and activities), rubrics, and catalogs of violations, and (3) studies discussing how to integrate code structure into the curriculum through a holistic approach to course design that supports code quality. While most approaches use analyzers that point students to problems in their code, incorporating these tools into classrooms is not straightforward. Alongside further research on code structure instruction in the classroom, we call for more studies of effectiveness: over 40% of the instructional studies had no evaluation. Many studies show promise for their interventions by demonstrating improved student performance (e.g., fewer violations in code written with the intervention than in code written without access to it). These interventions warrant further investigation of learning, to see how students apply their knowledge after the instructional supports are removed.
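To make the first category concrete, here is a minimal sketch of the kind of automated analyzer these studies build: an AST walk that flags one illustrative structure violation (explicit comparison against a boolean). The rule and all names are our own illustration, not taken from any reviewed tool.

```python
# Minimal sketch of a code-structure analyzer of the kind the review
# surveys: it walks a student program's AST and reports one illustrative
# style violation (explicit comparison against True/False). Reviewed
# tools implement many more checks than this single rule.
import ast

def find_violations(source: str) -> list[str]:
    """Return human-readable messages for redundant boolean comparisons."""
    messages = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare):
            for comparator in node.comparators:
                if isinstance(comparator, ast.Constant) and isinstance(comparator.value, bool):
                    messages.append(
                        f"line {node.lineno}: compare directly, "
                        f"e.g. 'if flag:' instead of 'if flag == True:'"
                    )
    return messages

student_code = "def done(flag):\n    if flag == True:\n        return 1\n    return 0\n"
for msg in find_violations(student_code):
    print(msg)
```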
Related papers
- One Step at a Time: Combining LLMs and Static Analysis to Generate Next-Step Hints for Programming Tasks [5.069252018619403]
Students often struggle with solving programming problems when learning to code, especially when they have to do it online.
This help can be provided through next-step hint generation, showing a student the specific small step they need to take next to reach the correct solution.
We propose a novel system to provide both textual and code hints for programming tasks.
arXiv Detail & Related papers (2024-10-11T21:41:57Z)
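A hedged sketch of the pipeline shape the hint-generation paper above describes: static analysis locates a structural gap between student and reference code, and an LLM, stubbed out here, would phrase it as a next-step hint. All function names are our own illustration, not the paper's system.

```python
# Hedged sketch of a next-step hint pipeline: static analysis finds a
# structural difference between the student's code and a reference
# solution, and an LLM (stubbed here; any chat-completion API could be
# substituted) would phrase it as a hint.
import ast

def missing_top_level_defs(student_src: str, reference_src: str) -> set[str]:
    """Names of functions the reference defines but the student does not."""
    def defs(src):
        return {n.name for n in ast.walk(ast.parse(src)) if isinstance(n, ast.FunctionDef)}
    return defs(reference_src) - defs(student_src)

def next_step_hint(student_src: str, reference_src: str) -> str:
    missing = missing_top_level_defs(student_src, reference_src)
    if not missing:
        return "Structure looks complete; compare behavior against the tests."
    # In a real system this finding would be turned into a prompt for an LLM.
    return f"Hint: your solution still needs a function named '{sorted(missing)[0]}'."

print(next_step_hint("def read_input():\n    pass\n",
                     "def read_input():\n    pass\n\ndef solve(data):\n    pass\n"))
```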
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter yet more complicated than canonical solutions.
We develop a taxonomy of bugs in incorrect code, comprising three categories and 12 sub-categories, and analyze the root causes of common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
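The self-critique loop described above can be sketched as follows, with compiler feedback driving revision; the 'llm' callable is a toy stub, and this is our illustration of the loop's shape rather than the paper's exact method.

```python
# Sketch of a training-free self-critique loop: generated code is
# checked (here, just compiled), the error feeds a critique prompt, and
# the model revises. The 'llm' callable stands in for any code model.
def self_critique_loop(task: str, llm, max_rounds: int = 3) -> str:
    code = llm(f"Write Python code for: {task}")
    for _ in range(max_rounds):
        try:
            compile(code, "<candidate>", "exec")  # compiler feedback
            return code                           # no syntax errors: accept
        except SyntaxError as err:
            critique = f"The code fails to compile: {err.msg} (line {err.lineno})."
            code = llm(f"Task: {task}\nPrevious code:\n{code}\n{critique}\nFix it.")
    return code

# Toy stub: 'fixes' the code on the second call.
responses = iter(["def add(a, b) return a + b", "def add(a, b):\n    return a + b"])
print(self_critique_loop("add two numbers", lambda prompt: next(responses)))
```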
- Tool Learning with Large Language Models: A Survey [60.733557487886635]
Tool learning with large language models (LLMs) has emerged as a promising paradigm for augmenting the capabilities of LLMs to tackle highly complex problems.
Despite growing attention and rapid advancements in this field, the existing literature remains fragmented and lacks systematic organization.
arXiv Detail & Related papers (2024-05-28T08:01:26Z)
- Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models [95.96734086126469]
Large language models (LLMs) can serve as assistants that help users accomplish their tasks, and can also support the development of advanced applications.
For the wide application of LLMs, inference efficiency is an essential concern, which has been widely studied in existing work.
We perform a detailed coarse-to-fine analysis of the inference performance of various code libraries.
arXiv Detail & Related papers (2024-04-17T15:57:50Z)
- Creating a Trajectory for Code Writing: Algorithmic Reasoning Tasks [0.923607423080658]
This paper describes algorithmic reasoning task (ART) instruments and the machine learning models used to validate them.
We used data collected in the penultimate week of the semester in an introductory programming course.
Preliminary research suggests that ART-type instruments can be combined with specific machine learning models to act as an effective learning trajectory.
arXiv Detail & Related papers (2024-04-03T05:07:01Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
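The generator/judge arrangement above reduces to a simple selection loop; below is a hedged sketch with both models stubbed by toy functions, so the names and scoring are our own illustration.

```python
# Hedged sketch of instruction optimization with an LM judge: a
# generator proposes instructional texts, a judge scores them, and the
# highest-scoring draft is kept. Both models are toy stubs here.
def optimize_instruction(topic: str, generate, judge, n_candidates: int = 4) -> str:
    candidates = [generate(topic, i) for i in range(n_candidates)]
    # The judge's score acts as the reward function for selection.
    return max(candidates, key=judge)

toy_generate = lambda topic, i: f"Worksheet v{i}: practice {topic} with {i + 1} examples."
toy_judge = lambda text: int(text.split()[-2])  # stand-in for an LM judgment
print(optimize_instruction("loops", toy_generate, toy_judge))
```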
- Deep Learning Based Code Generation Methods: Literature Review [30.17038624027751]
This paper focuses on the code generation task, which aims to generate relevant code fragments from given natural language descriptions.
In this paper, we systematically review the current work on deep learning-based code generation methods.
arXiv Detail & Related papers (2023-03-02T08:25:42Z)
- On the Use of Static Analysis to Engage Students with Software Quality Improvement: An Experience with PMD [12.961585735468313]
We reflect on our experience with teaching the use of static analysis, evaluating its effectiveness in helping students improve software quality.
This paper discusses the results of a classroom experiment spanning three academic semesters, involving 65 submissions that carried out code review activities against 690 PMD rules.
arXiv Detail & Related papers (2023-02-11T00:21:04Z)
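A classroom study like the one above needs to tally findings per submission; the sketch below assumes PMD 7's 'pmd check' CLI and the plain-text renderer's one-finding-per-line output, both of which should be verified against the installed PMD version.

```python
# Hedged sketch of tallying PMD findings per student submission. It
# assumes PMD 7's CLI ('pmd check') and that the plain-text renderer
# emits one finding per line; verify both against your PMD install.
import subprocess

def count_pmd_findings(src_dir: str, ruleset: str = "rulesets/java/quickstart.xml") -> int:
    report = subprocess.run(
        ["pmd", "check", "-d", src_dir, "-R", ruleset, "-f", "text"],
        capture_output=True, text=True,
    )
    return sum(1 for line in report.stdout.splitlines() if line.strip())

# Usage (hypothetical path): count_pmd_findings("submissions/alice/src")
```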
- An Analysis of Programming Course Evaluations Before and After the Introduction of an Autograder [1.329950749508442]
This paper studies the answers to the standardized university evaluation questionnaires of foundational computer science courses that recently introduced autograding.
We hypothesize how the autograder might have contributed to the significant changes in the data, such as improved interactions between tutors and students, improved overall course quality, improved learning success, increased time spent, and reduced difficulty.
The autograder technology can thus be validated as a teaching method for improving student satisfaction with programming courses.
arXiv Detail & Related papers (2021-10-28T14:09:44Z)
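The core of an autograder like the one discussed above is a small test-running loop; the sketch below (function names and test cases our own) omits the sandboxing, timeouts, and feedback that production course autograders add.

```python
# Minimal sketch of the autograding idea: run a student function
# against instructor test cases and report a score. A crashing
# submission simply fails the case.
def autograde(student_fn, cases):
    passed = 0
    for args, expected in cases:
        try:
            if student_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # treat any exception as a failed case
    return f"{passed}/{len(cases)} tests passed"

def student_max(a, b):  # a toy submission
    return a if a > b else b

print(autograde(student_max, [((1, 2), 2), ((5, 3), 5), ((2, 2), 2)]))
```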
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier-1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
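ProtoTransformer builds on prototypical networks; the sketch below shows the underlying idea with toy vectors in place of the learned code embeddings the paper uses: each feedback label's prototype is the mean of its few labeled examples, and new code takes the label of the nearest prototype.

```python
# Sketch of prototype-based few-shot feedback classification: each
# feedback label's prototype is the mean embedding of its labeled
# examples; a query is assigned the nearest prototype's label.
import numpy as np

def nearest_prototype(query: np.ndarray, support: dict[str, np.ndarray]) -> str:
    prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}
    return min(prototypes, key=lambda lbl: np.linalg.norm(query - prototypes[lbl]))

support = {  # a few labeled examples per feedback class (toy embeddings)
    "off-by-one": np.array([[0.9, 0.1], [1.0, 0.2]]),
    "wrong-base-case": np.array([[0.1, 0.9], [0.0, 1.1]]),
}
print(nearest_prototype(np.array([0.85, 0.15]), support))  # -> off-by-one
```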
- Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey [53.73359052511171]
Reinforcement learning (RL) is a popular paradigm for addressing sequential decision tasks in which the agent has only limited environmental feedback.
We present a framework for curriculum learning (CL) in RL, and use it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
arXiv Detail & Related papers (2020-03-10T20:41:24Z)
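The basic curriculum loop such a framework formalizes can be sketched as follows: tasks ordered easy-to-hard, with the agent advancing once performance on the current task crosses a threshold; 'train_episode' is stubbed with a random increment and stands in for any RL algorithm.

```python
# Sketch of a basic curriculum-learning loop for RL: train on each
# source task until a success threshold is met, then advance to the
# next, harder task. The training update is a toy stub.
import random

def run_curriculum(tasks, threshold=0.8, max_episodes=1000):
    skill = 0.0
    for task in tasks:                         # easy -> hard ordering
        for _ in range(max_episodes):
            skill += random.uniform(0, 0.01)   # stub for train_episode(task)
            success_rate = min(skill / task["difficulty"], 1.0)
            if success_rate >= threshold:
                break                          # mastered: move to next task
        print(f"{task['name']}: success rate {success_rate:.2f}")

run_curriculum([{"name": "gridworld-small", "difficulty": 1.0},
                {"name": "gridworld-large", "difficulty": 2.0}])
```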