Formal Verification Of A Shopping Basket Application Model Using PRISM
- URL: http://arxiv.org/abs/2308.00618v1
- Date: Sun, 16 Jul 2023 00:14:40 GMT
- Title: Formal Verification Of A Shopping Basket Application Model Using PRISM
- Authors: Patrick Mukala
- Abstract summary: We present the results of a simulation using Prism Model Checker for a Shopping Basket Application Model.
The objective is to simulate the behavior of shoppers as they go through a number of defined states of the shopping process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Formal verification is at the heart of model validation and correctness. Through
model checking, invaluable results have been achieved in software
engineering and particularly in software development. By means of this
approach, complex applications can be simulated and their performance
forecast in light of the requirements at hand and the expected performance. In
this short paper we present the results of a simulation using the PRISM model
checker for a Shopping Basket Application Model. Applied to a model adapted
from a projected process model, the objective is to simulate the behavior of
shoppers as they move through a number of defined states of the shopping process,
and to express accessibility and reachability through a number of defined
properties.
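The kind of analysis the abstract describes can be sketched outside PRISM as well. The snippet below is a minimal illustration, not the paper's actual model: the shopper states and transition probabilities are hypothetical. It encodes a shopper's journey as a discrete-time Markov chain and computes, by value iteration, the reachability probability that a PRISM query such as `P=? [ F "paid" ]` would answer exactly.

```python
# Hypothetical shopper states and transition probabilities (illustrative only;
# not taken from the paper's PRISM model).
STATES = ["browse", "basket", "checkout", "paid", "abandoned"]
P = {
    "browse":    {"browse": 0.3, "basket": 0.5, "abandoned": 0.2},
    "basket":    {"browse": 0.2, "checkout": 0.6, "abandoned": 0.2},
    "checkout":  {"paid": 0.8, "abandoned": 0.2},
    "paid":      {"paid": 1.0},       # absorbing success state
    "abandoned": {"abandoned": 1.0},  # absorbing failure state
}

def reach_probability(target, tol=1e-12, max_iter=10_000):
    """Pr[eventually reach `target`] from each state, via value iteration --
    the fixed point a probabilistic model checker solves for reachability."""
    prob = {s: (1.0 if s == target else 0.0) for s in STATES}
    for _ in range(max_iter):
        new = {}
        for s in STATES:
            if s == target:
                new[s] = 1.0
            elif P[s].get(s) == 1.0:
                # Other absorbing states can never reach the target.
                new[s] = prob[s]
            else:
                new[s] = sum(p * prob[t] for t, p in P[s].items())
        if max(abs(new[s] - prob[s]) for s in STATES) < tol:
            break
        prob = new
    return prob

probs = reach_probability("paid")
print(f"Pr[browse eventually reaches paid] = {probs['browse']:.4f}")
```

With these illustrative numbers the fixed point gives a 0.4 probability that a browsing shopper eventually pays; PRISM performs the same computation symbolically over the full model and property set.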
Related papers
- Mining Constraints from Reference Process Models for Detecting Best-Practice Violations in Event Log [1.389948527681755]
We propose a framework for mining declarative best-practice constraints from a reference model collection.
We demonstrate the capability of our framework to detect best-practice violations through an evaluation based on real-world process model collections and event logs.
arXiv Detail & Related papers (2024-07-02T15:05:37Z)
- Verifiable evaluations of machine learning models using zkSNARKs [40.538081946945596]
This work presents a method of verifiable model evaluation using model inference through zkSNARKs.
The resulting zero-knowledge computational proofs of model outputs over datasets can be packaged into verifiable evaluation attestations.
For the first time, we demonstrate this across a sample of real-world models and highlight key challenges and design solutions.
arXiv Detail & Related papers (2024-02-05T02:21:11Z)
- fairml: A Statistician's Take on Fair Machine Learning Modelling [0.0]
We describe the fairml package which implements our previous work (Scutari, Panero, and Proissl 2022) and related models in the literature.
fairml is designed around classical statistical models and penalised regression results.
The constraint used to enforce fairness is orthogonal to model estimation, making it possible to mix-and-match the desired model family and fairness definition for each application.
arXiv Detail & Related papers (2023-05-03T09:59:53Z)
- Earning Extra Performance from Restrictive Feedbacks [41.05874087063763]
We set up a challenge named Earning eXtra PerformancE from restriCTive feEDbacks (EXPECTED) to describe this form of model tuning problem.
The goal of the model provider is to eventually deliver a satisfactory model to the local user(s) by utilizing the feedbacks.
We propose to characterize the geometry of the model performance with regard to model parameters through exploring the parameters' distribution.
arXiv Detail & Related papers (2023-04-28T13:16:54Z)
- Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground oracle can be trained and used in place of it during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z)
- Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes [62.997667081978825]
The paper presents an approach to process mining providing semi-automatic support to model optimization.
A model simplification approach is proposed, which essentially abstracts the raw model at the desired granularity.
We aim to demonstrate the capabilities of the technological solution using three datasets from different applications in the healthcare domain.
arXiv Detail & Related papers (2022-06-10T16:20:59Z)
- On the Limits of Evaluating Embodied Agent Model Generalization Using Validation Sets [101.28658250723804]
This paper experiments with augmenting a transformer model with modules that effectively utilize a wider field of view and learn to choose whether the next step requires a navigation or manipulation action.
We observe that the proposed modules resulted in improved, and in fact state-of-the-art performance on an unseen validation set of a popular benchmark dataset, ALFRED.
We highlight this result as we believe it may be a wider phenomenon in machine learning tasks but primarily noticeable only in benchmarks that limit evaluations on test splits.
arXiv Detail & Related papers (2022-05-18T23:52:21Z)
- Calibrating Over-Parametrized Simulation Models: A Framework via Eligibility Set [3.862247454265944]
We develop a framework for constructing calibration schemes that satisfy rigorous frequentist statistical guarantees.
We demonstrate our methodology on several numerical examples, including an application to calibration of a limit order book market simulator.
arXiv Detail & Related papers (2021-05-27T00:59:29Z)
- How to Design Sample and Computationally Efficient VQA Models [53.65668097847456]
We find that representing the text as probabilistic programs and images as object-level scene graphs best satisfy these desiderata.
We extend existing models to leverage these soft programs and scene graphs to train on question answer pairs in an end-to-end manner.
arXiv Detail & Related papers (2021-03-22T01:48:16Z)
- DirectDebug: Automated Testing and Debugging of Feature Models [55.41644538483948]
Variability models (e.g., feature models) are a common way for the representation of variabilities and commonalities of software artifacts.
Complex and often large-scale feature models can become faulty, i.e., they no longer represent the expected variability properties of the underlying software artifact.
arXiv Detail & Related papers (2021-02-11T11:22:20Z)
- Model Reuse with Reduced Kernel Mean Embedding Specification [70.044322798187]
We present a two-phase framework for finding helpful models for a current application.
In the upload phase, when a model is uploaded into the pool, we construct a reduced kernel mean embedding (RKME) as a specification for the model.
Then in the deployment phase, the relatedness of the current task and pre-trained models will be measured based on the value of the RKME specification.
arXiv Detail & Related papers (2020-01-20T15:15:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.