TestLab: An Intelligent Automated Software Testing Framework
- URL: http://arxiv.org/abs/2306.03602v1
- Date: Tue, 6 Jun 2023 11:45:22 GMT
- Title: TestLab: An Intelligent Automated Software Testing Framework
- Authors: Tiago Dias, Arthur Batista, Eva Maia and Isabel Praça
- Abstract summary: TestLab is an automated software testing framework that attempts to gather a set of testing methods and automate them using Artificial Intelligence.
The first two modules aim to identify vulnerabilities from different perspectives, while the third module enhances traditional automated software testing by automatically generating test cases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prevalence of software systems has become an integral part of modern-day
living. Software usage has increased significantly, leading to its growth in
both size and complexity. Consequently, software development is becoming a more
time-consuming process. In an attempt to accelerate the development cycle, the
testing phase is often neglected, leading to the deployment of flawed systems
that can have significant implications for users' daily activities. This work
presents TestLab, an intelligent automated software testing framework that
attempts to gather a set of testing methods and automate them using Artificial
Intelligence to allow continuous testing of software systems at multiple levels
from different scopes, ranging from developers to end-users. The tool consists
of three modules, each serving a distinct purpose. The first two modules aim to
identify vulnerabilities from different perspectives, while the third module
enhances traditional automated software testing by automatically generating
test cases through source code analysis.
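As a concrete illustration of the third module's idea (generating test cases through source code analysis), the following is a minimal Python sketch that parses a module, finds its top-level functions, and emits unittest skeletons. It is not TestLab's implementation, which the abstract does not detail; the target file name and the generated class name are hypothetical.

# Illustrative sketch only: shows the general idea of deriving test skeletons
# from source code analysis. File and function names are hypothetical, not
# taken from TestLab.
import ast
from pathlib import Path


def generate_test_skeleton(source_path: str) -> str:
    """Parse a Python module and emit a unittest skeleton per top-level function."""
    tree = ast.parse(Path(source_path).read_text())
    functions = [n for n in tree.body if isinstance(n, ast.FunctionDef)]

    lines = ["import unittest", "", "", "class GeneratedTests(unittest.TestCase):"]
    if not functions:
        lines.append("    pass")
    for fn in functions:
        args = ", ".join(a.arg for a in fn.args.args)
        lines += [
            f"    def test_{fn.name}(self):",
            f"        # TODO: call {fn.name}({args}) with representative inputs",
            "        self.fail('not yet implemented')",
            "",
        ]
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical target module; point this at a real file to try the sketch.
    print(generate_test_skeleton("example_module.py"))

Running the sketch against a small module prints a test class with one placeholder test per function, which a developer or a later AI-driven stage could fill in with concrete inputs and assertions.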
Related papers
- Disrupting Test Development with AI Assistants [1.024113475677323]
Generative AI-assisted coding tools like GitHub Copilot, ChatGPT, and Tabnine have significantly transformed software development.
This paper analyzes how these innovations impact productivity and software test development metrics.
arXiv Detail & Related papers (2024-11-04T17:52:40Z)
- AutoPT: How Far Are We from the End2End Automated Web Penetration Testing? [54.65079443902714]
We introduce AutoPT, an automated penetration testing agent based on the principle of PSM driven by LLMs.
Our results show that AutoPT outperforms the baseline framework ReAct on the GPT-4o mini model.
arXiv Detail & Related papers (2024-11-02T13:24:30Z)
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
- A System for Automated Unit Test Generation Using Large Language Models and Assessment of Generated Test Suites [1.4563527353943984]
Large Language Models (LLMs) have been applied to various aspects of software development.
We present AgoneTest: an automated system for generating test suites for Java projects.
arXiv Detail & Related papers (2024-08-14T23:02:16Z)
- Harnessing the Power of LLMs: Automating Unit Test Generation for High-Performance Computing [7.3166218350585135]
Unit testing is crucial in software engineering for ensuring quality.
However, it is not widely used in parallel and high-performance computing software, particularly scientific applications.
We propose an automated method for generating unit tests for such software.
arXiv Detail & Related papers (2024-07-06T22:45:55Z)
- A Comprehensive Study on Automated Testing with the Software Lifecycle [0.6144680854063939]
The research examines how automated testing makes it easier to evaluate software quality, how it saves time compared to manual testing, and the benefits and drawbacks of each approach.
Automated testing tools simplify the testing of software applications, can be tailored to specific testing situations, and carry out the tests successfully.
arXiv Detail & Related papers (2024-05-02T06:30:37Z)
- DevBench: A Comprehensive Benchmark for Software Development [72.24266814625685]
DevBench is a benchmark that evaluates large language models (LLMs) across various stages of the software development lifecycle.
Empirical studies show that current LLMs, including GPT-4-Turbo, fail to solve the challenges presented within DevBench.
Our findings offer actionable insights for the future development of LLMs toward real-world programming applications.
arXiv Detail & Related papers (2024-03-13T15:13:44Z)
- Constraint-Guided Test Execution Scheduling: An Experience Report at ABB Robotics [13.50507740574158]
We present the results of a project called DynTest whose goal is to automate the scheduling of test execution from a large test repository.
This paper reports on our experience and lessons learned for successfully transferring constraint-based optimization models for test execution scheduling at ABB Robotics.
arXiv Detail & Related papers (2023-06-02T13:29:32Z)
- MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation [104.48766162008815]
We propose and explore a new multi-modal extension of test-time adaptation for 3D semantic segmentation.
To take full advantage of multi-modality, each modality provides regularized self-supervisory signals to the others.
Our regularized pseudo labels produce stable self-learning signals in numerous multi-modal test-time adaptation scenarios.
arXiv Detail & Related papers (2022-04-27T02:28:12Z)
- Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents [61.36681529571202]
We describe a new concept for reproducible robotics research that integrates development and benchmarking.
One of the central components of this setup is the Duckietown Autolab, a standardized setup that is itself relatively low-cost and reproducible.
We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs.
arXiv Detail & Related papers (2020-09-09T15:31:29Z)
- Beyond Accuracy: Behavioral Testing of NLP models with CheckList [66.42971817954806]
CheckList is a task-agnostic methodology for testing NLP models.
CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation.
In a user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
arXiv Detail & Related papers (2020-05-08T15:48:31Z)