Provengo: A Tool Suite for Scenario Driven Model-Based Testing
- URL: http://arxiv.org/abs/2308.15938v1
- Date: Wed, 30 Aug 2023 10:34:12 GMT
- Title: Provengo: A Tool Suite for Scenario Driven Model-Based Testing
- Authors: Michael Bar-Sinai, Achiya Elyasaf, Gera Weiss and Yeshayahu Weiss
- Abstract summary: Provengo is a suite of tools designed to facilitate the implementation of Scenario-Driven Model-Based Testing (SDMBT).
With Provengo, testers gain the ability to effortlessly create natural user stories and seamlessly integrate them into a model capable of generating effective tests.
- Score: 2.4387555567462647
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Provengo, a comprehensive suite of tools designed to facilitate
the implementation of Scenario-Driven Model-Based Testing (SDMBT), an
innovative approach that utilizes scenarios to construct a model encompassing
the user's perspective and the system's business value while also defining the
desired outcomes. With the assistance of Provengo, testers gain the ability to
effortlessly create natural user stories and seamlessly integrate them into a
model capable of generating effective tests. The demonstration illustrates how
SDMBT effectively addresses the bootstrapping challenge commonly encountered in
model-based testing (MBT) by enabling incremental development, starting from
simple models and gradually augmenting them with additional stories.
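The following is a minimal, illustrative Python sketch of the underlying idea: independently written user stories are composed into a single model whose interleavings yield concrete test sequences. It is not the Provengo DSL or API; the story names, events, and the naive random scheduler are assumptions made purely for illustration.

```python
# Illustrative sketch of scenario-driven test generation (NOT the Provengo API).
# Each user story is an independent generator of events; a naive scheduler
# interleaves the active stories to produce one concrete test sequence.
import random

def login_story():
    # A simple "happy path" user story.
    yield "open_login_page"
    yield "submit_valid_credentials"
    yield "see_dashboard"

def add_to_cart_story():
    # A second story, added later to augment the model incrementally.
    yield "search_product"
    yield "add_product_to_cart"
    yield "see_cart_badge_updated"

def generate_test(stories, seed=0):
    """Randomly interleave the events of the given stories into one test."""
    rng = random.Random(seed)
    active = [story() for story in stories]
    test = []
    while active:
        chosen = rng.choice(active)
        try:
            test.append(next(chosen))
        except StopIteration:
            active.remove(chosen)
    return test

if __name__ == "__main__":
    # Start from a single simple story, then augment with more stories.
    print(generate_test([login_story]))
    print(generate_test([login_story, add_to_cart_story], seed=42))
```

Starting with only login_story and later adding add_to_cart_story mirrors the incremental bootstrapping described above: each new story enriches the generated test sequences without rewriting the existing model.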
Related papers
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- Variational Exploration Module VEM: A Cloud-Native Optimization and Validation Tool for Geospatial Modeling and AI Workflows [0.0]
Cloud-based deployments help to scale up these modeling and AI workflows.
We have developed the Variational Exploration Module, which facilitates the optimization and validation of modeling workflows deployed in the cloud.
The flexibility and robustness of the model-agnostic module are demonstrated using real-world applications.
arXiv Detail & Related papers (2023-11-26T23:07:00Z)
- TEA: Test-time Energy Adaptation [67.4574269851666]
Test-time adaptation (TTA) aims to improve model generalizability when test data diverges from the training distribution.
We propose a novel energy-based perspective, enhancing the model's perception of target data distributions.
arXiv Detail & Related papers (2023-11-24T10:49:49Z)
- ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse [59.500060790983994]
This paper introduces ZhiJian, a comprehensive and user-friendly toolbox for model reuse, utilizing the PyTorch backend.
ZhiJian presents a novel paradigm that unifies diverse perspectives on model reuse, encompassing target architecture construction with PTM, tuning target model with PTM, and PTM-based inference.
arXiv Detail & Related papers (2023-08-17T19:12:13Z)
- Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach called User-Guided Response Optimization (UGRO) that combines a large language model (LLM) with a smaller task-oriented dialogue model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, combining it with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z)
- Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning [85.55727213502402]
We focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks.
We propose Sample-specific Ensemble of Source Models (SESoM).
SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs.
arXiv Detail & Related papers (2022-10-23T01:33:16Z)
- Plug and Play Counterfactual Text Generation for Model Robustness [12.517365153658028]
We introduce CASPer, a plug-and-play counterfactual generation framework.
We show that CASPer effectively generates counterfactual text that follows the steering provided by an attribute model.
We also show that the generated counterfactuals can be used to augment the training data, thereby fixing the test model and making it more robust.
arXiv Detail & Related papers (2022-06-21T14:25:21Z)
- Model Selection for Production System via Automated Online Experiments [16.62275716351037]
A challenge that machine learning practitioners in industry face is the task of selecting the best model to deploy in production.
Online controlled experiments such as A/B tests yield the most reliable estimate of the effectiveness of the whole system, but can only compare two or a few models due to budget constraints.
We propose an automated online experimentation mechanism that can efficiently perform model selection from a large pool of models.
arXiv Detail & Related papers (2021-05-27T19:48:23Z)
- Model Reuse with Reduced Kernel Mean Embedding Specification [70.044322798187]
We present a two-phase framework for finding helpful models for a current application.
In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model.
Then, in the deployment phase, the relatedness of the current task and the pre-trained models is measured based on the value of the RKME specification.
arXiv Detail & Related papers (2020-01-20T15:15:07Z)