A Block-Based Testing Framework for Scratch
- URL: http://arxiv.org/abs/2410.08835v1
- Date: Fri, 11 Oct 2024 14:11:26 GMT
- Title: A Block-Based Testing Framework for Scratch
- Authors: Patric Feldmeier, Gordon Fraser, Ute Heuer, Florian Obermüller, Siegfried Steckenbiller
- Abstract summary: We introduce a new category of blocks in Scratch that enables the creation of automated tests.
With these blocks, students and teachers alike can create tests and receive feedback directly within the Scratch environment.
- Score: 9.390562437823078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Block-based programming environments like Scratch are widely used in introductory programming courses. They facilitate learning pivotal programming concepts by eliminating syntactical errors, but logical errors that break the desired program behaviour are nevertheless possible. Finding such errors requires testing, i.e., running the program and checking its behaviour. In many programming environments, this step can be automated by providing executable tests as code; in Scratch, testing can only be done manually by invoking events through user input and observing the rendered stage. While this is arguably sufficient for learners, the lack of automated testing may be inhibitive for teachers wishing to provide feedback on their students' solutions. In order to address this issue, we introduce a new category of blocks in Scratch that enables the creation of automated tests. With these blocks, students and teachers alike can create tests and receive feedback directly within the Scratch environment using familiar block-based programming logic. To facilitate the creation and to enable batch processing sets of student solutions, we extend the Scratch user interface with an accompanying test interface. We evaluated this testing framework with 28 teachers who created tests for a popular Scratch game and subsequently used these tests to assess and provide feedback on student implementations. An overall accuracy of 0.93 of teachers' tests compared to manually evaluating the functionality of 21 student solutions demonstrates that teachers are able to create and effectively use tests. A subsequent survey confirms that teachers consider the block-based test approach useful.
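The core idea is that a test is itself a small block script: it triggers events on the program under test and then asserts the observed behaviour, just as a learner would check the program manually. As a rough Python sketch of this trigger-then-assert pattern (the Sprite class and its API are hypothetical stand-ins for the block-based equivalents, not the paper's implementation):

```python
# Toy model of a Scratch sprite with event handlers; hypothetical, for
# illustration only -- the actual framework expresses this with blocks.
class Sprite:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y
        self.handlers = {}

    def when(self, event, handler):
        # Analogous to a Scratch hat block such as "when key pressed".
        self.handlers[event] = handler

    def trigger(self, event):
        # Analogous to the test harness invoking a user-input event.
        self.handlers[event](self)

def test_sprite_moves_right_on_key_press():
    cat = Sprite()
    # Student program: "when right arrow pressed, change x by 10".
    cat.when("key_right", lambda s: setattr(s, "x", s.x + 10))
    # Test logic: trigger the event, then assert the observed behaviour.
    cat.trigger("key_right")
    assert cat.x == 10, "sprite should move 10 steps to the right"

test_sprite_moves_right_on_key_press()
print("test passed")
```

In the framework itself, the same pattern is written with the new test blocks directly inside the Scratch environment, and the accompanying test interface runs such tests in batch over sets of student solutions.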
Related papers
- Leveraging Large Language Models for Enhancing the Understandability of Generated Unit Tests [4.574205608859157]
We introduce UTGen, which combines search-based software testing and large language models to enhance the understandability of automatically generated test cases.
We observe that participants working on assignments with UTGen test cases fix up to 33% more bugs and use up to 20% less time when compared to baseline test cases.
arXiv Detail & Related papers (2024-08-21T15:35:34Z)
- Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors [78.53699244846285]
Large language models (LLMs) present an opportunity to scale high-quality personalized education to all.
LLMs, however, struggle to precisely detect students' errors and to tailor their feedback to these errors.
Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions.
arXiv Detail & Related papers (2024-07-12T10:11:40Z)
- LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback [71.95402654982095]
We propose Math-Minos, a natural language feedback-enhanced verifier.
Our experiments reveal that a small set of natural language feedback can significantly boost the performance of the verifier.
arXiv Detail & Related papers (2024-06-20T06:42:27Z)
- Insights from the Field: Exploring Students' Perspectives on Bad Unit Testing Practices [16.674156958233855]
Students might inadvertently deviate from established unit testing best practices and introduce problematic code into their test suites.
Students report that the plugin used in the study is useful for learning about and detecting test smells, but they also identify specific test smells that they consider harmless; one common smell is sketched after this entry.
We anticipate that our findings will support academia in refining course curricula on unit testing and enabling educators to support students with code review strategies of test code.
arXiv Detail & Related papers (2024-04-15T23:54:45Z)
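To make the kind of problematic test code concrete, the following Python sketch shows one commonly discussed test smell, assertion roulette, next to a refactored version; the ShoppingCart class is a made-up example, not taken from the paper:

```python
import unittest

class ShoppingCart:
    """Tiny class under test (hypothetical, for illustration only)."""
    def __init__(self):
        self.items = {}
    def add(self, name, price):
        self.items[name] = price
    def total(self):
        return sum(self.items.values())
    def contains(self, name):
        return name in self.items
    def __len__(self):
        return len(self.items)

class CartTestWithSmell(unittest.TestCase):
    def test_cart(self):
        # Smell ("assertion roulette"): several unexplained assertions in one
        # test, so a failure does not say which behaviour actually broke.
        cart = ShoppingCart()
        cart.add("apple", price=2)
        self.assertEqual(cart.total(), 2)
        self.assertEqual(len(cart), 1)
        self.assertTrue(cart.contains("apple"))

class CartTestRefactored(unittest.TestCase):
    def test_total_reflects_added_item(self):
        # One behaviour per test, with a message explaining the expectation.
        cart = ShoppingCart()
        cart.add("apple", price=2)
        self.assertEqual(cart.total(), 2, "total should equal the item price")

if __name__ == "__main__":
    unittest.main()
```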
- Using Large Language Models for Student-Code Guided Test Case Generation in Computer Science Education [2.5382095320488665]
Test cases are an integral part of programming assignments in computer science education.
Test cases can be used as assessment items to test students' programming knowledge and provide personalized feedback on student-written code.
We propose a large language model-based approach to automatically generate test cases; a sketch of the prompting recipe follows this entry.
arXiv Detail & Related papers (2024-02-11T01:37:48Z)
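A minimal sketch of the general recipe, assuming the prompt simply combines the problem statement with the student's code; the template wording and the deferred LLM call are assumptions for illustration, not the paper's actual setup:

```python
# Hypothetical prompt construction for LLM-based test case generation.
PROMPT_TEMPLATE = """You are a programming tutor.
Problem description:
{problem}

Student solution:
{code}

Write pytest test cases that check the required behaviour and are likely to
expose bugs in this particular solution. Output only Python code."""

def build_test_generation_prompt(problem: str, code: str) -> str:
    return PROMPT_TEMPLATE.format(problem=problem, code=code)

if __name__ == "__main__":
    prompt = build_test_generation_prompt(
        problem="Return the sum of all even numbers in a list.",
        # Deliberately buggy student code: the filter keeps odd numbers.
        code="def sum_even(xs):\n    return sum(x for x in xs if x % 2)",
    )
    print(prompt)
    # In a real pipeline the prompt would be sent to an LLM, e.g.:
    # tests = call_llm(prompt)  # call_llm is a hypothetical stand-in
```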
- A Static Evaluation of Code Completion by Large Language Models [65.18008807383816]
Execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems.
Static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models.
We propose a static evaluation framework that quantifies static errors in Python code completions by leveraging Abstract Syntax Trees; a simplified sketch follows this entry.
arXiv Detail & Related papers (2023-06-05T19:23:34Z)
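A simplified sketch of such a static check using Python's standard ast module: parse errors are caught without executing the code, and a naive AST walk flags names that are used but never defined. This is a toy approximation of the idea, not the paper's framework:

```python
import ast
import builtins

def static_errors(code: str) -> list:
    """Report static errors in a code completion without running it."""
    try:
        tree = ast.parse(code)
    except SyntaxError as e:
        return [f"syntax error: {e.msg} (line {e.lineno})"]
    # First pass: collect every name that is defined somewhere.
    defined = set(dir(builtins))
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            defined.add(node.name)
            defined.update(arg.arg for arg in node.args.args)
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            defined.add(node.id)
    # Second pass: flag names that are read but never defined (naive: it
    # ignores imports, class definitions, and scoping rules).
    return [f"undefined name: {node.id}"
            for node in ast.walk(tree)
            if isinstance(node, ast.Name)
            and isinstance(node.ctx, ast.Load)
            and node.id not in defined]

print(static_errors("def f(x):\n    return x + y"))  # ['undefined name: y']
print(static_errors("def f(x:\n    return x"))       # syntax error reported
```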
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning [65.12226891589592]
This paper proposes a new approach to automated game validation and testing.
Our method leverages a data-driven imitation learning technique that requires little time and effort and no knowledge of machine learning or programming.
arXiv Detail & Related papers (2022-08-15T11:08:44Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university; the prototype idea is sketched after this entry.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
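A toy sketch of the few-shot idea this builds on: average the embeddings of a few instructor-labelled examples into one prototype per feedback class, then label new student code by its nearest prototype. The embed function below is a crude placeholder for a learned encoder, not the paper's model:

```python
import numpy as np

def embed(code: str) -> np.ndarray:
    # Placeholder embedding: a normalized character histogram. A real system
    # would use a trained neural encoder here.
    vec = np.zeros(128)
    for ch in code:
        vec[ord(ch) % 128] += 1
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def prototypes(support: dict) -> dict:
    # One prototype per feedback class: the mean of its support embeddings.
    return {label: np.mean([embed(c) for c in examples], axis=0)
            for label, examples in support.items()}

def predict(code: str, protos: dict) -> str:
    # Classify new student code by the nearest class prototype.
    q = embed(code)
    return min(protos, key=lambda label: np.linalg.norm(q - protos[label]))

support = {
    "missing-base-case": ["def fact(n): return n * fact(n - 1)"],
    "correct": ["def fact(n): return 1 if n <= 1 else n * fact(n - 1)"],
}
protos = prototypes(support)
print(predict("def fact(k): return k * fact(k - 1)", protos))
```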
- Learning by Passing Tests, with Application to Neural Architecture Search [19.33620150924791]
We propose a novel learning approach called learning by passing tests.
A tester model creates increasingly difficult tests to evaluate a learner model.
The learner continuously tries to improve its ability so that it can pass the tests, however difficult the tester makes them. A toy instance of this tester-learner loop is sketched after this entry.
arXiv Detail & Related papers (2020-11-30T18:33:34Z)
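A structural sketch of this loop on a deliberately simple task (the learner fits a linear function while the tester widens the input range each round); a toy stand-in for the approach, not its actual instantiation:

```python
import random

def true_fn(x):
    return 3 * x + 2  # the skill being tested: recover this mapping

class Tester:
    """Creates increasingly difficult labelled tests for the learner."""
    def __init__(self):
        self.difficulty = 1
    def make_test(self):
        xs = [random.uniform(-self.difficulty, self.difficulty)
              for _ in range(20)]
        test = [(x, true_fn(x)) for x in xs]
        self.difficulty *= 2  # next round is harder: wider input range
        return test

class Learner:
    """A linear model trained by SGD on the tester's examples."""
    def __init__(self):
        self.a, self.b = 0.0, 0.0
    def predict(self, x):
        return self.a * x + self.b
    def train(self, test, epochs=500):
        lr = 0.1 / (1 + max(x * x for x, _ in test))  # scaled for stability
        for _ in range(epochs):
            for x, y in test:
                err = self.predict(x) - y
                self.a -= lr * err * x
                self.b -= lr * err
    def passes(self, test, tol=0.1):
        return all(abs(self.predict(x) - y) < tol for x, y in test)

learner, tester = Learner(), Tester()
for rnd in range(5):
    test = tester.make_test()
    while not learner.passes(test):  # keep improving until the test passes
        learner.train(test)
    print(f"round {rnd}: passed (difficulty {tester.difficulty // 2})")
```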
- Smoke Testing for Machine Learning: Simple Tests to Discover Severe Defects [7.081604594416339]
We try to determine generic and simple smoke tests that can be used to assert that basic functions can be executed without crashing.
We were able to find bugs in all three machine learning libraries that we tested, and severe bugs in two of the three; a minimal smoke-test sketch follows this entry.
arXiv Detail & Related papers (2020-09-03T08:54:43Z)
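A minimal sketch of the idea, here using scikit-learn's decision tree as the system under test; the library choice and the specific cases are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Generic inputs any classifier should handle without crashing: tiny data,
# constant features, and extreme feature values.
SMOKE_CASES = {
    "tiny":     (np.array([[0.0], [1.0]]), np.array([0, 1])),
    "constant": (np.zeros((10, 3)), np.array([0, 1] * 5)),
    "extreme":  (np.array([[1e30], [-1e30]]), np.array([0, 1])),
}

def run_smoke_tests():
    for name, (X, y) in SMOKE_CASES.items():
        try:
            clf = DecisionTreeClassifier().fit(X, y)
            clf.predict(X)  # basic functions must execute without crashing
            print(f"{name}: ok")
        except Exception as e:  # a crash here is a candidate bug report
            print(f"{name}: CRASH - {type(e).__name__}: {e}")

run_smoke_tests()
```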