No Free Lunch: Research Software Testing in Teaching
- URL: http://arxiv.org/abs/2405.11965v1
- Date: Mon, 20 May 2024 11:40:01 GMT
- Title: No Free Lunch: Research Software Testing in Teaching
- Authors: Michael Dorner, Andreas Bauer, Florian Angermeir,
- Abstract summary: This research explores the effects of research software testing integrated into teaching on research software.
In an in-vivo experiment, we integrated the engineering of a test suite for a large-scale network simulation as group projects into a course on software testing at the Blekinge Institute of Technology, Sweden.
We found that the research software benefited from the integration through substantially improved documentation and fewer hardware and software dependencies.
- Score: 1.4396109429521227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software is at the core of most scientific discoveries today. Therefore, the quality of research results depends heavily on the quality of the research software. Rigorous testing, as practiced in industrial software engineering, could ensure the quality of research software, but it requires a substantial effort that is often not rewarded in academia. This research therefore explores the effects on research software of integrating research software testing into teaching. In an in-vivo experiment, we integrated the engineering of a test suite for a large-scale network simulation as group projects into a course on software testing at the Blekinge Institute of Technology, Sweden, and qualitatively measured the effects of this integration on the research software. We found that the research software benefited from the integration through substantially improved documentation and fewer hardware and software dependencies. However, the integration was effortful, and although the student teams developed elegant and thoughtful test suites, no student code went directly into the research software, since we were not able to make the integration back into the research software obligatory or even remunerative. We strongly believe that integrating research software engineering practices such as testing into teaching is valuable not only for the research software itself but also for the students, the researchers of the next generation, who get in touch with research software engineering and bleeding-edge research in their field as part of their education. Nevertheless, the uncertainty about the intellectual property of students' code substantially limits the potential of integrating research software testing into teaching.
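To make the experimental setup concrete: a central property student teams must test in stochastic research software is reproducibility under a fixed seed. The sketch below is hypothetical; the `Simulation` class is invented for illustration and does not come from the paper's actual simulation code base.

```python
# Hypothetical sketch of the kind of regression test a student team might
# write for a network simulation. The Simulation API below is invented for
# illustration only; it is not the research software from the paper.
import random

class Simulation:
    """Toy stand-in for a network simulation with a seeded RNG."""
    def __init__(self, nodes, seed):
        self.rng = random.Random(seed)
        self.nodes = nodes

    def run(self, steps):
        # Each step, every node delivers a message with 90% probability.
        delivered = 0
        for _ in range(steps):
            delivered += sum(self.rng.random() < 0.9 for _ in range(self.nodes))
        return delivered

def test_simulation_is_deterministic_under_fixed_seed():
    # Reproducibility: the same seed must yield the same result, a key
    # property when testing stochastic research software.
    a = Simulation(nodes=100, seed=42).run(steps=10)
    b = Simulation(nodes=100, seed=42).run(steps=10)
    assert a == b

def test_delivery_count_within_bounds():
    # Sanity bound: delivered messages cannot exceed nodes * steps.
    result = Simulation(nodes=100, seed=7).run(steps=10)
    assert 0 <= result <= 100 * 10

test_simulation_is_deterministic_under_fixed_seed()
test_delivery_count_within_bounds()
print("ok")
```

Tests like these double as executable documentation, which matches the paper's observation that the research software gained substantially improved documentation from the integration.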
Related papers
- An Overview of Quantum Software Engineering in Latin America [36.25707481854301]
This study aims to provide information on the progress, challenges, and opportunities in Quantum Software Engineering in the Latin American context.
By promoting a more in-depth understanding of cutting-edge developments in this burgeoning field, our research aims to serve as a potential stimulus to initiate pioneering initiatives and encourage collaborative efforts among Latin American researchers.
arXiv Detail & Related papers (2024-05-31T07:55:19Z)
- Requirements Engineering for Research Software: A Vision [2.2217676348694213]
Most researchers creating software for scientific purposes are not trained in Software Engineering.
Research software is often developed ad hoc without following stringent processes.
We describe how researchers elicit, document, and analyze requirements for research software.
arXiv Detail & Related papers (2024-05-13T14:25:01Z)
- Myths and Facts about a Career in Software Testing: A Comparison between Students' Beliefs and Professionals' Experience [4.748038457227373]
A career in software testing is reported to be unpopular among students in computer science and related areas.
This can potentially create a shortage of testers in the software industry in the future.
This investigation demonstrates that a career in software testing is more exciting and rewarding than students may believe.
arXiv Detail & Related papers (2023-11-10T17:32:41Z)
- Introducing High School Students to Version Control, Continuous Integration, and Quality Assurance [0.0]
Two high school students volunteered in our lab at Wayne State University where I'm a graduate research assistant and Ph.D. student in computer science.
The students had taken AP Computer Science but had no prior experience with software engineering or software testing.
This paper documents our experience devising a group project to teach the requisite software engineering skills to implement automated tests.
arXiv Detail & Related papers (2023-10-05T21:44:11Z)
- A pragmatic workflow for research software engineering in computational science [0.0]
University research groups in Computational Science and Engineering (CSE) generally lack dedicated funding and personnel for Research Software Engineering (RSE).
This lack of RSE support shifts the focus away from sustainable research software development and reproducible results.
We propose an RSE workflow for CSE that addresses these challenges and improves the quality of research output in CSE.
arXiv Detail & Related papers (2023-10-02T08:04:12Z)
- Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
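The core idea of such a pipeline, mapping requirement text to weakness categories via bag-of-words features and a probabilistic classifier, can be sketched with a minimal multinomial Naive Bayes. This is an illustrative toy, not the paper's pipeline: the labels and example sentences below are invented and do not come from PROMISE_exp or the CWE list.

```python
import math
from collections import Counter, defaultdict

# Toy multinomial Naive Bayes over bag-of-words features, sketching how
# requirement sentences could be mapped to weakness categories (CWE-like).
# All labels and example texts below are illustrative, not from PROMISE_exp.

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label). Returns per-label word counts,
    label counts, and the vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with Laplace smoothing
        score = math.log(label_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for tok in tokenize(text):
            score += math.log((word_counts[label][tok] + 1) / (n_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

samples = [
    ("system shall validate all user input fields", "input-validation"),
    ("input data must be sanitized before processing", "input-validation"),
    ("passwords shall be stored using strong encryption", "crypto"),
    ("sensitive data must be encrypted at rest", "crypto"),
]
model = train(samples)
print(classify("user input shall be validated", *model))  # prints: input-validation
```

The SVM, decision-tree, and CNN variants tested in the paper replace the classifier while keeping the same text-to-category framing.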
arXiv Detail & Related papers (2023-08-10T13:19:10Z)
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis [51.37514793318815]
We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
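The baseline that CrossBeam improves on is exhaustive bottom-up enumeration: build every program of size 1, then combine already-built programs into larger ones, pruning observationally equivalent candidates. The sketch below shows that plain baseline over a toy integer DSL; CrossBeam replaces the exhaustive "combine every pair" step with a learned neural policy. The DSL and specification here are illustrative, not from the paper.

```python
import itertools

# Plain bottom-up enumerative synthesis over a toy integer DSL
# (variable x, constants 1 and 2, operators + and *). CrossBeam replaces
# this exhaustive combination step with a learned search policy; this
# sketch only shows the baseline search it improves on.

def bottom_up_synthesize(inputs, outputs, max_size=3):
    """Find an expression over one input variable matching input->output pairs."""
    # Each candidate: (expression string, tuple of values on the example inputs)
    level = {1: [("x", tuple(inputs)),
                 ("1", tuple(1 for _ in inputs)),
                 ("2", tuple(2 for _ in inputs))]}
    seen = {vals for _, vals in level[1]}
    target = tuple(outputs)
    for size in range(2, max_size + 1):
        level[size] = []
        for ls in range(1, size):
            rs = size - ls
            for (le, lv), (re, rv) in itertools.product(level[ls], level[rs]):
                for op, fn in (("+", lambda a, b: a + b), ("*", lambda a, b: a * b)):
                    vals = tuple(fn(a, b) for a, b in zip(lv, rv))
                    if vals in seen:
                        continue  # prune observationally equivalent programs
                    seen.add(vals)
                    expr = f"({le} {op} {re})"
                    if vals == target:
                        return expr
                    level[size].append((expr, vals))
    return None

# Synthesize f(x) = 2*x + 1 from input/output examples; prints an
# expression equivalent to 2*x + 1.
print(bottom_up_synthesize([0, 1, 2, 3], [1, 3, 5, 7]))
```

Even with observational-equivalence pruning, the candidate pool grows combinatorially with program size, which is exactly the search space the learned policy explores more selectively.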
arXiv Detail & Related papers (2022-03-20T04:41:05Z)
- Software must be recognised as an important output of scholarly research [7.776162183510522]
We argue that as well as being important from a methodological perspective, software should be recognised as an output of research.
The article discusses the different roles that software may play in research and highlights the relationship between software and research sustainability.
arXiv Detail & Related papers (2020-11-15T16:34:31Z)
- Learnings from Frontier Development Lab and SpaceML -- AI Accelerators for NASA and ESA [57.06643156253045]
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
- Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern-day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
- Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program) [43.55295847227261]
Reproducibility is obtaining similar results as presented in a paper or talk, using the same code and data (when available).
In 2019, the Neural Information Processing Systems (NeurIPS) conference introduced a program, designed to improve the standards across the community for how we conduct, communicate, and evaluate machine learning research.
In this paper, we describe each of these components, how they were deployed, as well as what we were able to learn from this initiative.
arXiv Detail & Related papers (2020-03-27T02:16:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.