Gamifying a Software Testing Course with Continuous Integration
- URL: http://arxiv.org/abs/2401.17740v1
- Date: Wed, 31 Jan 2024 11:00:16 GMT
- Title: Gamifying a Software Testing Course with Continuous Integration
- Authors: Philipp Straubinger, Gordon Fraser
- Abstract summary: Gamekins is a tool that is seamlessly integrated into the Jenkins continuous integration platform.
Developers can earn points by completing test challenges and quests generated by Gamekins.
We observe a correlation between how students test their code and their use of Gamekins.
- Score: 13.086283144520513
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Testing plays a crucial role in software development, and it is essential for
software engineering students to receive proper testing education. However,
motivating students to write tests and use automated testing during software
development can be challenging. To address this issue and enhance student
engagement in testing when they write code, we propose to incentivize students
to test more by gamifying continuous integration. For this we use Gamekins, a
tool that is seamlessly integrated into the Jenkins continuous integration
platform and uses game elements based on commits to the source code repository:
Developers can earn points by completing test challenges and quests generated
by Gamekins, compete with other developers or teams on a leaderboard, and
receive achievements for their test-related accomplishments. In this paper, we
present our integration of Gamekins into an undergraduate-level course on
software testing. We observe a correlation between how students test their code
and their use of Gamekins, as well as a significant improvement in the accuracy
of their results compared to a previous iteration of the course without
gamification. As a further indicator of how this approach improves testing
behavior, the students reported enjoyment in writing tests with Gamekins.
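The gamification loop the abstract describes (commit-triggered test challenges, points, and a leaderboard) can be sketched roughly as follows. This is a minimal illustrative model, not the actual Gamekins API; all class names and point values are assumptions.

```python
# Illustrative sketch of a Gamekins-style gamification loop.
# Names and point values are assumptions, not the real Gamekins API.
from dataclasses import dataclass, field

@dataclass
class Challenge:
    description: str
    points: int
    completed: bool = False

@dataclass
class Developer:
    name: str
    score: int = 0

    def complete(self, challenge: Challenge) -> None:
        # Completing a test challenge awards its points once.
        if not challenge.completed:
            challenge.completed = True
            self.score += challenge.points

def leaderboard(devs):
    # Rank developers by total points, highest first.
    return sorted(devs, key=lambda d: d.score, reverse=True)

alice, bob = Developer("alice"), Developer("bob")
alice.complete(Challenge("Cover class Foo with a unit test", 100))
bob.complete(Challenge("Kill a surviving mutant in Bar", 150))
ranking = [d.name for d in leaderboard([alice, bob])]
print(ranking)  # bob leads with 150 points
```

In Gamekins itself, such challenges are generated from commits to the source code repository and shown inside Jenkins; the sketch only captures the scoring idea.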
Related papers
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
- Gamified GUI testing with Selenium in the IntelliJ IDE: A Prototype Plugin [0.559239450391449]
This paper presents GIPGUT: a prototype of a gamification plugin for IntelliJ IDEA.
The plugin enhances testers' engagement with typically monotonous and tedious tasks through achievements, rewards, and profile customization.
The results indicate high usability and positive reception of the gamification elements.
arXiv Detail & Related papers (2024-03-14T20:11:11Z)
- An IDE Plugin for Gamified Continuous Integration [13.086283144520513]
This paper presents an IntelliJ plugin designed to seamlessly integrate Gamekins.
Gamekins integrates challenges, quests, achievements, and leaderboards into the Jenkins CI platform.
As Gamekins is typically accessed through a browser, it introduces a context switch.
arXiv Detail & Related papers (2024-03-06T09:06:07Z)
- Observation-based unit test generation at Meta [52.4716552057909]
TestGen automatically generates unit tests, carved from serialized observations of complex objects, observed during app execution.
TestGen has landed 518 tests into production, which have been executed 9,617,349 times in continuous integration, finding 5,702 faults.
Our evaluation reveals that, when carving its observations from 4,361 reliable end-to-end tests, TestGen was able to generate tests for at least 86% of the classes covered by end-to-end tests.
arXiv Detail & Related papers (2024-02-09T00:34:39Z)
- PlayTest: A Gamified Test Generator for Games [11.077232808482128]
Playtest transforms the tiring testing process into a competitive game with a purpose.
We envision the use of Playtest to crowdsource the task of testing games by giving players access to the respective games through our tool in the playtesting phases.
arXiv Detail & Related papers (2023-10-30T10:14:27Z)
- Improving Testing Behavior by Gamifying IntelliJ [13.086283144520513]
We introduce IntelliGame, a gamified plugin for the popular IntelliJ Java Integrated Development Environment.
IntelliGame rewards developers for positive testing behavior using a multi-level achievement system.
A controlled experiment with 49 participants reveals substantial differences in the testing behavior triggered by IntelliGame.
arXiv Detail & Related papers (2023-10-17T11:40:55Z)
- BDD-Based Framework with RL Integration: An approach for videogames automated testing [0.0]
Testing in video games differs from traditional software development practices.
We propose the integration of Behavior Driven Development (BDD) with Reinforcement Learning (RL).
arXiv Detail & Related papers (2023-10-08T20:05:29Z)
- Technical Challenges of Deploying Reinforcement Learning Agents for Game Testing in AAA Games [58.720142291102135]
We describe an effort to add an experimental reinforcement learning system to an existing automated game testing solution based on scripted bots.
We show a use-case of leveraging reinforcement learning in game production and cover some of the largest time sinks that anyone attempting the same journey for their game may encounter.
We propose a few research directions that we believe will be valuable and necessary for making machine learning, and especially reinforcement learning, an effective tool in game production.
arXiv Detail & Related papers (2023-07-19T18:19:23Z)
- Learning Deep Semantics for Test Completion [46.842174440120196]
We formalize the novel task of test completion to automatically complete the next statement in a test method based on the context of prior statements and the code under test.
We develop TeCo -- a deep learning model using code semantics for test completion.
arXiv Detail & Related papers (2023-02-20T18:53:56Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been observed to be a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.