Improving students' code correctness and test completeness by informal
specifications
- URL: http://arxiv.org/abs/2309.02221v1
- Date: Tue, 5 Sep 2023 13:24:43 GMT
- Title: Improving students' code correctness and test completeness by informal
specifications
- Authors: Arno Broeders and Ruud Hermans and Sylvia Stuurman and Lex Bijlsma and
Harrie Passier
- Abstract summary: How to teach students to develop good quality software has long been a topic in computer science education and research.
Several attempts have been made to teach students to write specifications before writing code.
In this paper we focus on the use of informal specifications.
- Score: 1.2599533416395765
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The quality of software produced by students is often poor. How to teach
students to develop good quality software has long been a topic in computer
science education and research. We must conclude that we still do not have a
good answer to this question. Specifications are necessary to determine the
correctness of software, to develop error-free software and to write complete
tests. Several attempts have been made to teach students to write
specifications before writing code. So far, that has not proven to be very
successful: Students do not like to write a specification and do not see the
benefits of writing specifications. In this paper we focus on the use of
informal specifications. Instead of teaching students how to write
specifications, we teach them how to use informal specifications to develop
correct software. The results were surprising: the number of errors in software
and the completeness of tests both improved considerably and, most importantly,
students really appreciate the specifications. We think that if students
appreciate specification, we have a key to teach them how to specify and to
appreciate its value.
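The paper's approach can be illustrated with a small, hypothetical sketch (not taken from the paper itself): an informal specification written in plain English, from which both the implementation and a set of tests are derived, one test per sentence of the specification. The function name `remove_duplicates` and the specification text are illustrative assumptions.

```python
def remove_duplicates(items):
    """Informal specification:
    Returns a new list containing the elements of `items` in their
    original order, keeping only the first occurrence of each element.
    The input list is not modified. An empty input yields an empty list.
    """
    seen = set()
    result = []
    for item in items:
        if item not in seen:  # keep only the first occurrence
            seen.add(item)
            result.append(item)
    return result

# Each sentence of the informal specification suggests a test case:
assert remove_duplicates([]) == []                       # empty input
assert remove_duplicates([3, 1, 3, 2, 1]) == [3, 1, 2]   # first occurrences, order kept
original = [1, 1, 2]
remove_duplicates(original)
assert original == [1, 1, 2]                             # input not modified
```

Reading the specification sentence by sentence gives a simple completeness check for the test set, which is the kind of benefit the abstract attributes to working with informal specifications.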
Related papers
- Handwritten Code Recognition for Pen-and-Paper CS Education [33.53124589437863]
Teaching Computer Science (CS) by having students write programs by hand on paper has key pedagogical advantages.
However, a key obstacle is the current lack of teaching methods and support software for working with and running handwritten programs.
Our approach integrates two innovative methods. The first combines OCR with an indentation recognition module and a language model designed for post-OCR error correction without introducing hallucinations.
arXiv Detail & Related papers (2024-08-07T21:02:17Z)
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z)
- Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education [2.5382095320488665]
Test cases are an integral part of programming assignments in computer science education.
Test cases can be used as assessment items to test students' programming knowledge and provide personalized feedback on student-written code.
We propose a large language model-based approach to automatically generate test cases.
arXiv Detail & Related papers (2024-02-11T01:37:48Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Automatic Assessment of the Design Quality of Student Python and Java Programs [0.0]
We propose a rule-based system that assesses student programs for quality of design and provides personalized, precise feedback on how to improve their work.
The students benefited from the system and the rate of design quality flaws dropped 47.84% on average over 4 different assignments, 2 in Python and 2 in Java, in comparison to the previous 2 to 3 years of student submissions.
arXiv Detail & Related papers (2022-08-22T06:04:10Z)
- Data-Driven Approach for Log Instruction Quality Assessment [59.04636530383049]
There are no widely adopted guidelines on how to write log instructions with good quality properties.
We identify two quality properties: 1) correct log level assignment assessing the correctness of the log level, and 2) sufficient linguistic structure assessing the minimal richness of the static text necessary for verbose event description.
Our approach correctly assesses log level assignments with an accuracy of 0.88, and the sufficient linguistic structure with an F1 score of 0.99, outperforming the baselines.
arXiv Detail & Related papers (2022-04-06T07:02:23Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Deep Learning Models in Software Requirements Engineering [0.0]
We have applied the vanilla sentence autoencoder to the sentence generation task and evaluated its performance.
The generated sentences are not plausible English and contain only a few meaningful words.
We believe that applying the model to a larger dataset may produce significantly better results.
arXiv Detail & Related papers (2021-05-17T12:27:30Z)
- On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z)
- SongNet: Rigid Formats Controlled Text Generation [51.428634666559724]
We propose a simple and elegant framework named SongNet to tackle this problem.
The backbone of the framework is a Transformer-based auto-regressive language model.
A pre-training and fine-tuning framework is designed to further improve the generation quality.
arXiv Detail & Related papers (2020-04-17T01:40:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.