Formal Verification Of A Shopping Basket Application Model Using PRISM
- URL: http://arxiv.org/abs/2308.00618v1
- Date: Sun, 16 Jul 2023 00:14:40 GMT
- Title: Formal Verification Of A Shopping Basket Application Model Using PRISM
- Authors: Patrick Mukala
- Abstract summary: We present the results of a simulation using the PRISM Model Checker for a Shopping Basket Application Model.
The objective is to simulate the behavior of shoppers as they go through a number of defined states of the shopping process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Formal verification is at the heart of model validation and correctness. Through
model checking, significant advances have been made in software engineering and
particularly in software development. By means of this approach, complex
applications can be simulated and their performance forecast in light of the
requirements at hand and the expected performance. In this short paper we present
the results of a simulation using the PRISM Model Checker for a Shopping Basket
Application Model. Applied to a model adapted from a projected process model, the
objective is to simulate the behavior of shoppers as they move through a number
of defined states of the shopping process and to express accessibility and
reachability through a number of defined properties.
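The paper's concrete PRISM model is not reproduced in this summary, but the kind of artifact it describes is a discrete-time Markov chain (DTMC) of shopper states together with PCTL reachability properties. Below is a minimal sketch in the PRISM modelling language; the states, transition probabilities, labels, and file names are illustrative assumptions, not taken from the paper.

    // shopper.pm -- hypothetical DTMC of a shopping basket process (illustrative only)
    dtmc

    module shopper
      // 0 = browsing, 1 = item in basket, 2 = checkout, 3 = paid, 4 = abandoned
      s : [0..4] init 0;

      [] s=0 -> 0.6 : (s'=1) + 0.4 : (s'=4);   // browse: add an item or leave
      [] s=1 -> 0.7 : (s'=2) + 0.3 : (s'=0);   // basket: go to checkout or keep browsing
      [] s=2 -> 0.8 : (s'=3) + 0.2 : (s'=4);   // checkout: pay or abandon the basket
      [] s=3 -> (s'=3);                        // absorbing: purchase completed
      [] s=4 -> (s'=4);                        // absorbing: basket abandoned
    endmodule

    label "paid" = s=3;
    label "abandoned" = s=4;

Accessibility and reachability of the kind mentioned in the abstract would then be phrased as PCTL properties, for example:

    // shopper.pctl -- example reachability properties (illustrative only)
    P=? [ F "paid" ]                     // probability that a shopper eventually pays
    P>=1 [ F ("paid" | "abandoned") ]    // every run eventually ends in payment or abandonment

With the standard tool these are checked by running prism on the model and property files, e.g. prism shopper.pm shopper.pctl (hypothetical file names).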
Related papers
- Towards Synthetic Trace Generation of Modeling Operations using In-Context Learning Approach [1.8874331450711404]
We propose a conceptual framework that combines modeling event logs, intelligent modeling assistants, and the generation of modeling operations.
In particular, the architecture comprises modeling components that help the designer specify the system, record its operation within a graphical modeling environment, and automatically recommend relevant operations.
arXiv Detail & Related papers (2024-08-26T13:26:44Z)
- Semantic Capability Model for the Simulation of Manufacturing Processes [38.69817856379812]
Simulations offer opportunities in the examination of manufacturing processes.
A combination of different simulations is necessary when the outputs of one simulation serve as the input parameters for another, resulting in a sequence of simulations.
An information model is introduced, which represents simulations, their capabilities to generate certain knowledge, and their respective quality criteria.
arXiv Detail & Related papers (2024-08-15T09:28:08Z)
- Mining Constraints from Reference Process Models for Detecting Best-Practice Violations in Event Log [1.389948527681755]
We propose a framework for mining declarative best-practice constraints from a reference model collection.
We demonstrate the capability of our framework to detect best-practice violations through an evaluation based on real-world process model collections and event logs.
arXiv Detail & Related papers (2024-07-02T15:05:37Z)
- Verifiable evaluations of machine learning models using zkSNARKs [40.538081946945596]
This work presents a method of verifiable model evaluation using model inference through zkSNARKs.
The resulting zero-knowledge computational proofs of model outputs over datasets can be packaged into verifiable evaluation attestations.
For the first time, we demonstrate this across a sample of real-world models and highlight key challenges and design solutions.
arXiv Detail & Related papers (2024-02-05T02:21:11Z)
- Earning Extra Performance from Restrictive Feedbacks [41.05874087063763]
We set up a challenge named Earning eXtra PerformancE from restriCTive feEDbacks (EXPECTED) to describe this form of model tuning problem.
The goal of the model provider is to eventually deliver a satisfactory model to the local user(s) by utilizing the feedbacks.
We propose to characterize the geometry of the model performance with regard to model parameters through exploring the parameters' distribution.
arXiv Detail & Related papers (2023-04-28T13:16:54Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground truth oracle can be trained and used in place of it during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes [62.997667081978825]
The paper presents an approach to process mining providing semi-automatic support to model optimization.
A model simplification approach is proposed, which essentially abstracts the raw model at the desired granularity.
We aim to demonstrate the capabilities of the technological solution using three datasets from different applications in the healthcare domain.
arXiv Detail & Related papers (2022-06-10T16:20:59Z)
- On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets [101.28658250723804]
This paper experiments with augmenting a transformer model with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action.
We observe that the proposed modules resulted in improved, and in fact state-of-the-art performance on an unseen validation set of a popular benchmark dataset, ALFRED.
We highlight this result as we believe it may be a wider phenomenon in machine learning tasks but primarily noticeable only in benchmarks that limit evaluations on test splits.
arXiv Detail & Related papers (2022-05-18T23:52:21Z)
- How to Design Sample and Computationally Efficient VQA Models [53.65668097847456]
We find that representing the text as probabilistic programs and images as object-level scene graphs best satisfy these desiderata.
We extend existing models to leverage these soft programs and scene graphs to train on question answer pairs in an end-to-end manner.
arXiv Detail & Related papers (2021-03-22T01:48:16Z)
- DirectDebug: Automated Testing and Debugging of Feature Models [55.41644538483948]
Variability models (e.g., feature models) are a common way for the representation of variabilities and commonalities of software artifacts.
Complex and often large-scale feature models can become faulty, i.e., do not represent the expected variability properties of the underlying software artifact.
arXiv Detail & Related papers (2021-02-11T11:22:20Z)
- Model Reuse with Reduced Kernel Mean Embedding Specification [70.044322798187]
We present a two-phase framework for finding helpful models for a current application.
In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model.
Then in the deployment phase, the relatedness of the current task and pre-trained models will be measured based on the value of the RKME specification.
arXiv Detail & Related papers (2020-01-20T15:15:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.