Can a Chatbot Support Exploratory Software Testing? Preliminary Results
- URL: http://arxiv.org/abs/2307.05807v1
- Date: Tue, 11 Jul 2023 21:11:21 GMT
- Title: Can a Chatbot Support Exploratory Software Testing? Preliminary Results
- Authors: Rubens Copche and Yohan Duarte Pessanha and Vinicius Durelli and
Marcelo Medeiros Eler and Andre Takeshi Endo
- Abstract summary: Exploratory testing is the de facto approach in agile teams.
This paper presents BotExpTest, designed to support testers while performing exploratory tests of software applications.
We implemented BotExpTest on top of the instant messaging social platform Discord.
Preliminary analyses indicate that BotExpTest may be as effective as similar approaches and help testers to uncover different bugs.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Tests executed by human testers are still widespread in practice and fill the
gap left by limitations of automated approaches. Among the human-centered
approaches, exploratory testing is the de facto approach in agile teams.
Although it is focused on the expertise and creativity of the tester, the
activity of exploratory testing may benefit from support provided by an
automated agent that interacts with the human testers. This paper presents a
chatbot, called BotExpTest, designed to support testers while performing
exploratory tests of software applications. We implemented BotExpTest on top of
the instant messaging social platform Discord; this version includes
functionalities to report bugs and issues, time management of test sessions,
guidelines for app testing, and presentation of exploratory testing strategies.
To assess BotExpTest, we conducted a user study with six software engineering
professionals. They carried out two sessions performing exploratory tests along
with BotExpTest. Participants were able to reveal bugs and found the
experience of interacting with the chatbot positive. Preliminary analyses indicate
that chatbot-enabled exploratory testing may be as effective as similar
approaches and help testers to uncover different bugs. Bots are shown to be
valuable resources for Software Engineering, and initiatives like BotExpTest
may help to improve the effectiveness of testing activities like exploratory
testing.
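The session-management functionality described above (timed test sessions, bug reporting) can be sketched independently of any chat platform. The class and method names below are hypothetical, not BotExpTest's actual API; this is only an illustration of the kind of state a chatbot command handler would keep per session.

```python
import time


class ExploratoryTestSession:
    """Minimal model of a chatbot-managed exploratory test session:
    times the session and collects bug reports, mirroring the
    functionality described for BotExpTest (names are hypothetical)."""

    def __init__(self, charter: str, duration_minutes: int = 60):
        self.charter = charter
        self.duration_minutes = duration_minutes
        self.started_at = None
        self.bug_reports = []

    def start(self, now: float = None):
        # Record the session start time (injectable for testing).
        self.started_at = time.time() if now is None else now

    def report_bug(self, description: str, severity: str = "medium") -> str:
        # A real bot would post this back to the Discord channel;
        # here we just record it and return the confirmation message.
        self.bug_reports.append({"description": description, "severity": severity})
        return f"Bug #{len(self.bug_reports)} recorded ({severity}): {description}"

    def minutes_remaining(self, now: float = None) -> float:
        # Remaining time in the session, floored at zero.
        if self.started_at is None:
            return float(self.duration_minutes)
        now = time.time() if now is None else now
        elapsed = (now - self.started_at) / 60
        return max(0.0, self.duration_minutes - elapsed)
```

In a Discord deployment, each of these methods would back one chat command (e.g. a start-session command and a bug-report command), with the channel as the shared session log.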
Related papers
- Unveiling Assumptions: Exploring the Decisions of AI Chatbots and Human Testers [2.5327705116230477]
Decision-making relies on a variety of information, including code, requirements specifications, and other software artifacts.
To fill in the gaps left by unclear information, we often rely on assumptions, intuition, or previous experiences to make decisions.
arXiv Detail & Related papers (2024-06-17T08:55:56Z)
- Observation-based unit test generation at Meta [52.4716552057909]
TestGen automatically generates unit tests, carved from serialized observations of complex objects, observed during app execution.
TestGen has landed 518 tests into production, which have been executed 9,617,349 times in continuous integration, finding 5,702 faults.
Our evaluation reveals that, when carving its observations from 4,361 reliable end-to-end tests, TestGen was able to generate tests for at least 86% of the classes covered by end-to-end tests.
arXiv Detail & Related papers (2024-02-09T00:34:39Z)
- MutaBot: A Mutation Testing Approach for Chatbots [3.811067614153878]
MutaBot addresses mutations at multiple levels, including conversational flows, intents, and contexts.
We assess the tool with three Dialogflow chatbots and test cases generated with Botium, revealing weaknesses in the test suites.
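A conversational-flow mutation in the spirit of MutaBot can be sketched as follows; the operator and the flow representation are hypothetical, not MutaBot's actual format.

```python
import copy


def swap_intent_responses(flow: dict, intent_a: str, intent_b: str) -> dict:
    """One hypothetical conversational-flow mutation operator:
    swap the responses of two intents. A test suite that never
    distinguishes the two intents will not kill this mutant,
    which is exactly the kind of weakness mutation testing exposes."""
    mutant = copy.deepcopy(flow)  # leave the original flow untouched
    mutant[intent_a]["response"], mutant[intent_b]["response"] = (
        mutant[intent_b]["response"],
        mutant[intent_a]["response"],
    )
    return mutant
```

Running the existing Botium test cases against such mutants and counting survivors is what reveals gaps in the suite.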
arXiv Detail & Related papers (2024-01-18T20:38:27Z)
- Evaluating Chatbots to Promote Users' Trust -- Practices and Open Problems [11.427175278545517]
This paper reviews current practices for testing chatbots.
It identifies gaps as open problems in pursuit of user trust.
It outlines a path forward to mitigate issues of trust related to service or product performance, user satisfaction and long-term unintended consequences for society.
arXiv Detail & Related papers (2023-09-09T22:40:30Z)
- BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models [73.29106813131818]
Bias testing is currently cumbersome because test sentences are generated from a limited set of manual templates or require expensive crowd-sourcing.
We propose using ChatGPT for the controllable generation of test sentences, given any arbitrary user-specified combination of social groups and attributes.
We present an open-source comprehensive bias testing framework (BiasTestGPT), hosted on HuggingFace, that can be plugged into any open-source PLM for bias testing.
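The template-based baseline the paper contrasts itself with can be sketched as a cross product of social groups and attributes; BiasTestGPT replaces such fixed templates with ChatGPT-generated sentences, and this stub (with a made-up template) only shows the group-by-attribute control surface.

```python
from itertools import product


def make_template_test_sentences(template: str, groups: list, attributes: list) -> list:
    """Sketch of the manual-template baseline for bias testing:
    fill one fixed template with every group/attribute combination.
    The template and placeholder names here are illustrative."""
    return [
        template.format(group=g, attribute=a)
        for g, a in product(groups, attributes)
    ]
```

The limitation is visible immediately: every sentence shares one syntactic frame, which is the diversity gap LLM-based generation aims to close.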
arXiv Detail & Related papers (2023-02-14T22:07:57Z)
- SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video Games Using Risk Based Testing and Machine Learning [62.997667081978825]
Testing video games is an increasingly difficult task as traditional methods fail to scale with growing software systems.
We present SUPERNOVA, a system responsible for test selection and defect prevention while also functioning as an automation hub.
The direct impact of this has been a reduction of 55% or more in testing hours for an undisclosed sports game title.
arXiv Detail & Related papers (2022-03-10T00:47:46Z)
- Global Instance Tracking: Locating Target More Like Humans [47.99395323689126]
Target tracking, the essential ability of the human visual system, has been simulated by computer vision tasks.
Existing trackers perform well in austere experimental environments but fail in challenges like occlusion and fast motion.
We propose the global instance tracking (GIT) task, which is supposed to search an arbitrary user-specified instance in a video.
arXiv Detail & Related papers (2022-02-26T06:16:34Z)
- Artificial Intelligence in Software Testing: Impact, Problems, Challenges and Prospect [0.0]
The study aims to recognize and explain some of the biggest challenges software testers face while applying AI to testing.
The paper also proposes some key contributions of AI in the future to the domain of software testing.
arXiv Detail & Related papers (2022-01-14T10:21:51Z)
- CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were fine-tuned by deep reinforcement learning.
To respond in an empathetic way, we develop a simulating agent, a Conceptual Human Model, that aids CheerBots during training by anticipating future changes in the user's emotional state to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- Towards Human-Like Automated Test Generation: Perspectives from Cognition and Problem Solving [13.541347853480705]
We propose a framework based on cognitive science to identify cognitive processes of testers.
Our goal is to be able to mimic how humans create test cases and thus to design more human-like automated test generation systems.
arXiv Detail & Related papers (2021-03-08T13:43:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.