Towards Human-Like Automated Test Generation: Perspectives from
Cognition and Problem Solving
- URL: http://arxiv.org/abs/2103.04749v1
- Date: Mon, 8 Mar 2021 13:43:55 GMT
- Title: Towards Human-Like Automated Test Generation: Perspectives from
Cognition and Problem Solving
- Authors: Eduard Enoiu, Robert Feldt
- Abstract summary: We propose a framework based on cognitive science to identify cognitive processes of testers.
Our goal is to be able to mimic how humans create test cases and thus to design more human-like automated test generation systems.
- Score: 13.541347853480705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated testing tools typically create test cases that are different from
what human testers create. This often makes the tools less effective, the
created tests harder to understand, and thus results in tools providing less
support to human testers. Here, we propose a framework based on cognitive
science and, in particular, an analysis of approaches to problem-solving, for
identifying cognitive processes of testers. The framework helps map test design
steps and criteria used in human test activities and thus to better understand
how effective human testers perform their tasks. Ultimately, our goal is to be
able to mimic how humans create test cases and thus to design more human-like
automated test generation systems. We posit that such systems can better
augment and support testers in a way that is meaningful to them.
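The framework itself is conceptual, but a rough illustration may help: the sketch below shows one way a mapping between cognitive processes and test design steps could be recorded. All class names, step labels, and criteria are our own illustrative assumptions, not taken from the paper.
```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representation of the mapping the framework proposes:
# each human test-design step is annotated with the cognitive process
# (e.g., problem framing, heuristic search, mental simulation) it draws on.
@dataclass
class TestDesignStep:
    description: str          # what the tester does, e.g. "pick boundary values"
    cognitive_process: str    # e.g. "heuristic search", "mental simulation"
    criteria: List[str] = field(default_factory=list)  # adequacy criteria applied

# A tiny example trace of a human-like test-design session.
session = [
    TestDesignStep("restate the requirement in own words", "problem framing",
                   ["requirements coverage"]),
    TestDesignStep("choose boundary and typical inputs", "heuristic search",
                   ["boundary-value analysis"]),
    TestDesignStep("predict the expected output before running", "mental simulation",
                   ["oracle definition"]),
]

for step in session:
    print(f"{step.cognitive_process:18s} -> {step.description}")
```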
Related papers
- Disrupting Test Development with AI Assistants [1.024113475677323]
Generative AI-assisted coding tools like GitHub Copilot, ChatGPT, and Tabnine have significantly transformed software development.
This paper analyzes how these innovations impact productivity and software test development metrics.
arXiv Detail & Related papers (2024-11-04T17:52:40Z) - Leveraging Large Language Models for Enhancing the Understandability of Generated Unit Tests [4.574205608859157]
We introduce UTGen, which combines search-based software testing and large language models to enhance the understandability of automatically generated test cases.
We observe that participants working on assignments with UTGen test cases fix up to 33% more bugs and use up to 20% less time when compared to baseline test cases.
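UTGen's actual pipeline is not reproduced here; the sketch below only illustrates the general idea of post-processing a search-generated test with an LLM to improve naming and add explanatory comments. The prompt wording and the `improve_readability` helper are our own assumptions, not UTGen's implementation.
```python
# Minimal sketch (not UTGen itself): feed a search-generated unit test to an
# LLM and ask for descriptive names plus a short explanatory comment.
RAW_TEST = '''
def test0():
    v0 = Account(100)
    v0.withdraw(30)
    assert v0.balance == 70
'''

PROMPT_TEMPLATE = (
    "Rewrite the following unit test so that the test and variable names describe "
    "the scenario, and add a short comment explaining the assertion. "
    "Do not change its behavior.\n\n{test}"
)

def improve_readability(test_code: str, llm_call) -> str:
    """llm_call is any function mapping a prompt string to a completion string."""
    return llm_call(PROMPT_TEMPLATE.format(test=test_code))

# Example with a stub standing in for a real LLM client:
if __name__ == "__main__":
    fake_llm = lambda prompt: "# (LLM output would appear here)"
    print(improve_readability(RAW_TEST, fake_llm))
```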
arXiv Detail & Related papers (2024-08-21T15:35:34Z) - A Comprehensive Study on Automated Testing with the Software Lifecycle [0.6144680854063939]
The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared to manual testing, and what the benefits and drawbacks of each approach are.
Automated testing tools simplify the process of testing software applications and can be tailored to specific testing situations.
arXiv Detail & Related papers (2024-05-02T06:30:37Z)
- Survey of Computerized Adaptive Testing: A Machine Learning Perspective [66.26687542572974]
Computerized Adaptive Testing (CAT) provides an efficient and tailored method for assessing the proficiency of examinees.
This paper aims to provide a machine learning-focused survey on CAT, presenting a fresh perspective on this adaptive testing method.
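As a toy illustration of a single CAT step, one common approach (not necessarily the specific methods surveyed) scores items with an item response model and selects the next item that is most informative at the current ability estimate. The 2PL model, item parameters, and function names below are illustrative assumptions.
```python
import math

def prob_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability of a correct answer."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of an item at ability level theta."""
    p = prob_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta: float, item_bank: list) -> int:
    """Pick the item that is most informative at the current ability estimate."""
    return max(range(len(item_bank)),
               key=lambda i: item_information(theta, *item_bank[i]))

# Item bank as (discrimination a, difficulty b) pairs; values are made up.
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print("next item index:", select_next_item(theta=0.3, item_bank=bank))
```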
arXiv Detail & Related papers (2024-03-31T15:09:47Z)
- Beyond Static Evaluation: A Dynamic Approach to Assessing AI Assistants' API Invocation Capabilities [48.922660354417204]
We propose Automated Dynamic Evaluation (AutoDE) to assess an assistant's API call capability without human involvement.
In our framework, we endeavor to closely mirror genuine human conversation patterns in human-machine interactions.
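AutoDE's concrete protocol is not reproduced here; the loop below only sketches the general pattern of replacing the human with a simulated user and checking whether the assistant eventually emits the expected API call. All function names, the turn budget, and the substring-based success check are our own assumptions.
```python
# Illustrative dynamic-evaluation loop (not AutoDE's actual protocol): a simulated
# user keeps the conversation going, and we check whether the assistant's reply
# contains the expected API call before the turn budget runs out.
def evaluate_api_invocation(assistant, simulated_user, expected_call: str,
                            max_turns: int = 5) -> bool:
    history = []
    user_msg = simulated_user(history)          # opening request
    for _ in range(max_turns):
        reply = assistant(history + [("user", user_msg)])
        history += [("user", user_msg), ("assistant", reply)]
        if expected_call in reply:              # crude success check
            return True
        user_msg = simulated_user(history)      # follow-up, e.g. supply missing slots
    return False

# Stubs standing in for real models:
assistant = lambda history: "Sure - calling get_weather(city='Oslo')"
simulated_user = lambda history: "What's the weather in Oslo tomorrow?"
print(evaluate_api_invocation(assistant, simulated_user, "get_weather"))
```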
arXiv Detail & Related papers (2024-03-17T07:34:12Z)
- Software Testing and Code Refactoring: A Survey with Practitioners [3.977213079821398]
This study aims to explore how software testing professionals deal with code refactoring and to understand the benefits and limitations of this practice in the context of software testing.
We concluded that, in the context of software testing, refactoring offers several benefits, such as supporting the maintenance of automated tests and improving the performance of the testing team.
Our study raises discussions on the importance of having testing professionals apply refactoring in the code of automated tests, allowing them to improve their coding abilities.
arXiv Detail & Related papers (2023-10-03T01:07:39Z)
- Can a Chatbot Support Exploratory Software Testing? Preliminary Results [0.9249657468385781]
Exploratory testing is the de facto approach in agile teams.
This paper presents BotExpTest, a chatbot designed to support testers while performing exploratory tests of software applications.
We implemented BotExpTest on top of the instant messaging social platform Discord.
Preliminary analyses indicate that BotExpTest may be as effective as similar approaches and help testers to uncover different bugs.
arXiv Detail & Related papers (2023-07-11T21:11:21Z)
- From Static Benchmarks to Adaptive Testing: Psychometrics in AI Evaluation [60.14902811624433]
We discuss a paradigm shift from static evaluation methods to adaptive testing.
This involves estimating the characteristics and value of each test item in the benchmark and dynamically adjusting items in real-time.
We analyze the current approaches, advantages, and underlying reasons for adopting psychometrics in AI evaluation.
arXiv Detail & Related papers (2023-06-18T09:54:33Z)
- BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models [73.29106813131818]
Bias testing is currently cumbersome, since test sentences are generated from a limited set of manual templates or require expensive crowd-sourcing.
We propose using ChatGPT for the controllable generation of test sentences, given any arbitrary user-specified combination of social groups and attributes.
We present an open-source comprehensive bias testing framework (BiasTestGPT), hosted on HuggingFace, that can be plugged into any open-source PLM for bias testing.
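The actual BiasTestGPT framework is hosted on HuggingFace; the snippet below is only a minimal sketch of the controllable-generation idea, with a prompt template and helper functions of our own devising.
```python
# Sketch in the spirit of BiasTestGPT (not the released framework): build a prompt
# from user-specified social groups and attributes, then parse the LLM's sentences.
def build_bias_test_prompt(groups: list, attributes: list, n: int = 5) -> str:
    return (
        f"Generate {n} natural English sentences, each mentioning one of the social "
        f"groups {groups} together with one of the attributes {attributes}. "
        "Vary the sentence structure; do not express an opinion yourself."
    )

def generate_test_sentences(llm_call, groups, attributes, n: int = 5) -> list:
    """llm_call: any function mapping a prompt string to generated text."""
    text = llm_call(build_bias_test_prompt(groups, attributes, n))
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]

# The returned sentences would then be scored against the PLM under test,
# e.g. by comparing its likelihoods under different group substitutions.
```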
arXiv Detail & Related papers (2023-02-14T22:07:57Z)
- Towards Informed Design and Validation Assistance in Computer Games Using Imitation Learning [65.12226891589592]
This paper proposes a new approach to automated game validation and testing.
Our method leverages a data-driven imitation learning technique, which requires little effort and time and no knowledge of machine learning or programming.
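The paper builds on imitation learning from human play; as a loose, self-contained sketch of that idea (not the authors' implementation), one can record state-action pairs from a human session and imitate them with a trivial nearest-neighbour policy that a test agent follows. The state features, actions, and policy below are invented for illustration.
```python
import math

# Recorded (game state, action) pairs from a hypothetical human play session.
demonstrations = [
    # (player_x, player_y, distance_to_goal) -> action
    ((0.0, 0.0, 9.0), "move_forward"),
    ((1.0, 0.5, 5.0), "move_forward"),
    ((2.0, 1.0, 1.0), "interact"),
]

def imitation_policy(state):
    """Return the action recorded for the most similar demonstrated state."""
    nearest_state, action = min(demonstrations,
                                key=lambda demo: math.dist(demo[0], state))
    return action

print(imitation_policy((1.8, 0.9, 1.2)))   # -> "interact"
```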
arXiv Detail & Related papers (2022-08-15T11:08:44Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been observed as a reduction of 55% or more in testing hours for an undisclosed sports game title.
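SUPERNOVA itself is not reproduced here; the sketch below only illustrates risk-based test selection in general: score each test by a predicted failure risk (e.g. from a model trained on past defects and code churn) and fill a fixed time budget with the highest-risk tests. The test names, risk scores, and greedy rule are our own assumptions.
```python
# Illustrative risk-based test selection (not SUPERNOVA): greedily pick the
# highest-risk tests that still fit within the available testing time.
def select_tests(tests, budget_minutes: float):
    """tests: dicts with 'name', 'risk' (0..1, e.g. from an ML model), 'duration' in minutes."""
    selected, used = [], 0.0
    for t in sorted(tests, key=lambda t: t["risk"], reverse=True):
        if used + t["duration"] <= budget_minutes:
            selected.append(t["name"])
            used += t["duration"]
    return selected

tests = [
    {"name": "match_sim_regression", "risk": 0.9, "duration": 30},
    {"name": "ui_menu_smoke",        "risk": 0.2, "duration": 10},
    {"name": "online_lobby_stress",  "risk": 0.7, "duration": 45},
]
print(select_tests(tests, budget_minutes=60))
```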
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.