How do students test software units?
- URL: http://arxiv.org/abs/2102.09368v1
- Date: Tue, 16 Feb 2021 07:02:59 GMT
- Title: How do students test software units?
- Authors: Lex Bijlsma, Niels Doorn, Harrie Passier, Harold Pootjes, Sylvia
Stuurman
- Abstract summary: We asked students to fill in a small survey, to do four exercises, and to fill in a second survey.
We interviewed eleven students in semi-structured interviews, to obtain more in-depth insight.
One of the misconceptions we found is that most students can only think of test cases based on programming code.
Even if no code was provided (black-box testing), students try to come up with code to base their test cases on.
- Score: 3.6748639131154315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We gained insight into ideas and beliefs on testing of students who finished
an introductory course on programming without any formal education on testing.
We asked students to fill in a small survey, to do four exercises and to fill
in a second survey. We interviewed eleven of these students in semi-structured
interviews, to obtain more in-depth insight. The main outcome is that students
do not test systematically, while most of them think they do test
systematically. One of the misconceptions we found is that most students can
only think of test cases based on programming code. Even if no code was
provided (black-box testing), students try to come up with code to base their
test cases on.
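To make the black-box idea concrete, here is a minimal sketch of specification-based test cases, written without consulting any implementation. It is illustrative only and does not come from the paper; the function name `max_of_three`, its module `solution`, and its spec ("return the largest of three integer arguments") are assumptions made for this example.
```python
# Illustrative black-box test cases, derived only from the specification
# "max_of_three(a, b, c) returns the largest of the three integer arguments".
# The implementation is never consulted; every case comes from the spec alone.
import unittest

from solution import max_of_three  # hypothetical module under test


class TestMaxOfThree(unittest.TestCase):
    def test_distinct_values(self):
        self.assertEqual(max_of_three(1, 2, 3), 3)

    def test_largest_value_first(self):
        self.assertEqual(max_of_three(9, 2, 3), 9)

    def test_all_values_equal(self):
        self.assertEqual(max_of_three(4, 4, 4), 4)

    def test_negative_values(self):
        self.assertEqual(max_of_three(-5, -2, -9), -2)


if __name__ == "__main__":
    unittest.main()
```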
Related papers
- A Block-Based Testing Framework for Scratch [9.390562437823078]
We introduce a new category of blocks in Scratch that enables the creation of automated tests.
With these blocks, students and teachers alike can create tests and receive feedback directly within the Scratch environment.
arXiv Detail & Related papers (2024-10-11T14:11:26Z)
- Test Case-Informed Knowledge Tracing for Open-ended Coding Tasks [42.22663501257155]
Open-ended coding tasks are common in computer science education.
Traditional knowledge tracing (KT) models that only analyze response correctness may not fully capture nuances in student knowledge from student code.
We introduce Test case-Informed Knowledge Tracing for Open-ended Coding (TIKTOC), a framework to simultaneously analyze and predict both open-ended student code and whether the code passes each test case.
arXiv Detail & Related papers (2024-09-28T03:13:40Z)
- Insights from the Field: Exploring Students' Perspectives on Bad Unit Testing Practices [16.674156958233855]
Students might inadvertently deviate from established unit testing best practices, and introduce problematic code into their test suites.
Students report on the plugin's usefulness in learning about and detecting test smells; they also identify specific test smells that they consider harmless.
We anticipate that our findings will support academia in refining course curricula on unit testing and enabling educators to support students with code review strategies of test code.
arXiv Detail & Related papers (2024-04-15T23:54:45Z)
- Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education [2.5382095320488665]
Test cases are an integral part of programming assignments in computer science education.
Test cases can be used as assessment items to test students' programming knowledge and provide personalized feedback on student-written code.
We propose a large language model-based approach to automatically generate test cases.
arXiv Detail & Related papers (2024-02-11T01:37:48Z)
- Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency [45.6224547703717]
This study focuses on tests of silent sentence reading efficiency, used to assess students' reading ability over time.
We propose to fine-tune large language models (LLMs) to simulate how previous students would have responded to unseen items.
We show the generated tests closely correspond to the original test's difficulty and reliability based on crowdworker responses.
arXiv Detail & Related papers (2023-10-10T17:59:51Z)
- Test case quality: an empirical study on belief and evidence [8.475270520855332]
We investigate eight hypotheses regarding what constitutes a good test case.
Despite our best efforts, we were unable to find evidence that supports these beliefs.
arXiv Detail & Related papers (2023-07-12T19:02:48Z)
- Learning Deep Semantics for Test Completion [46.842174440120196]
We formalize the novel task of test completion to automatically complete the next statement in a test method based on the context of prior statements and the code under test.
We develop TeCo -- a deep learning model using code semantics for test completion.
arXiv Detail & Related papers (2023-02-20T18:53:56Z)
- Write a Line: Tests with Answer Templates and String Completion Hints for Self-Learning in a CS1 Course [0.0]
This paper reports the results of using regular-expression-based questions with string completion hints in a CS1 course for 4 years with 497 students.
The evaluation results show that Perl-compatible regular expressions provide good precision and recall (more than 99%) when used for questions requiring writing a single line of code.
arXiv Detail & Related papers (2022-04-19T17:53:35Z)
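As a hedged illustration of the answer-template idea in the entry above (not code from that paper): Python's `re` module, whose syntax is close to PCRE, can check a single submitted line of code against a pattern that tolerates varying whitespace and identifier names. The template, the accepted answers, and the helper `check_answer` are assumptions made for this sketch.
```python
import re

# Hypothetical answer template: accept any line equivalent to
# "for <identifier> in range(10):", allowing arbitrary spacing and
# any loop-variable name.
TEMPLATE = re.compile(r"^\s*for\s+[A-Za-z_]\w*\s+in\s+range\(\s*10\s*\)\s*:\s*$")


def check_answer(student_line: str) -> bool:
    """Return True if the student's single line matches the answer template."""
    return TEMPLATE.match(student_line) is not None


# Example usage
print(check_answer("for i in range(10):"))           # True
print(check_answer("for counter in range( 10 ):"))   # True
print(check_answer("while i < 10:"))                 # False
```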
- Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing [69.35569554213679]
Paper quizzes are affordable and within reach of campus education in classrooms.
However, the correction of the quizzes is a considerable obstacle.
We suggest mitigating the issue by a novel image processing technique.
arXiv Detail & Related papers (2022-01-26T22:58:15Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Measuring Coding Challenge Competence With APPS [54.22600767666257]
We introduce APPS, a benchmark for code generation.
Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges.
Recent models such as GPT-Neo can pass approximately 15% of the test cases of introductory problems.
arXiv Detail & Related papers (2021-05-20T17:58:42Z)
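The APPS entry above reports results as the fraction of test cases passed. The sketch below is a simplified illustration of that kind of test-case-based scoring; it is not the APPS evaluation harness, and the candidate function and test cases are invented for this example.
```python
# Simplified test-case-based scoring: run a candidate solution against
# input/expected-output pairs and report the fraction of cases it passes.
# The candidate and the cases are invented for this illustration.

def candidate_solution(x: int) -> int:
    # Stands in for model-generated code under evaluation.
    return x * x


test_cases = [(2, 4), (3, 9), (5, 25), (-4, 16)]

passed = sum(1 for inp, expected in test_cases
             if candidate_solution(inp) == expected)
print(f"Passed {passed}/{len(test_cases)} test cases "
      f"({100.0 * passed / len(test_cases):.0f}%)")
```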
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.