Conformance Checking for Pushdown Reactive Systems based on Visibly
Pushdown Languages
- URL: http://arxiv.org/abs/2308.07177v1
- Date: Mon, 14 Aug 2023 14:37:43 GMT
- Title: Conformance Checking for Pushdown Reactive Systems based on Visibly
Pushdown Languages
- Authors: Adilson Luiz Bonifacio
- Abstract summary: Testing pushdown reactive systems is deemed important to guarantee a precise and robust software development process.
We show that test suites with complete fault coverage can be generated using this conformance relation for pushdown reactive systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Testing pushdown reactive systems is deemed important to guarantee a precise
and robust software development process. Usually, such systems can be specified
in the formalism of Input/Output Visibly Pushdown Labeled Transition Systems
(IOVPTS), where the interaction with the environment is regulated by a pushdown
memory. Conformance checking can then be applied in a testing process to
verify whether an implementation complies with a specification under an
appropriate conformance relation. In this work we establish a novel
conformance relation based on Visibly Pushdown Languages (VPLs) that can model
sets of desirable and undesirable behaviors of systems. Further, we show that
test suites with complete fault coverage can be generated for pushdown reactive
systems using this conformance relation.
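To give a concrete flavor of the setting, here is a minimal, hypothetical sketch (not the paper's actual IOVPTS construction or its conformance relation): behaviors are treated as words over a visibly pushdown alphabet partitioned into calls, returns, and internal actions; a small visibly pushdown automaton models an "undesirable behavior" language; and a toy check flags any observed implementation trace that falls into that language. The names (`VPA`, `violates`) and the example symbols are illustrative assumptions only.

```python
# Minimal sketch (not the paper's construction): words over a visibly
# pushdown alphabet are simulated on a visibly pushdown automaton (VPA)
# whose stack moves are dictated by the symbol type, and a toy conformance
# check asks whether any observed implementation trace lies in an
# "undesirable behavior" set modeled as a VPL.  All names are illustrative.

CALL, RET, INT = "call", "ret", "int"   # partition of the visible alphabet

class VPA:
    """A (deterministic, for simplicity) visibly pushdown automaton."""

    def __init__(self, initial, accepting, calls, rets, ints):
        self.initial = initial            # initial state
        self.accepting = set(accepting)   # accepting states
        self.calls = calls                # (state, sym) -> (state, stack_symbol)
        self.rets = rets                  # (state, sym, stack_symbol) -> state
        self.ints = ints                  # (state, sym) -> state

    def accepts(self, word):
        """word: sequence of (symbol, kind) pairs with kind in {CALL, RET, INT}."""
        state, stack = self.initial, []
        for sym, kind in word:
            if kind == CALL:              # call: push one stack symbol
                if (state, sym) not in self.calls:
                    return False
                state, gamma = self.calls[(state, sym)]
                stack.append(gamma)
            elif kind == RET:             # return: pop and consult the popped symbol
                if not stack:
                    return False          # unmatched returns not handled in this sketch
                gamma = stack.pop()
                if (state, sym, gamma) not in self.rets:
                    return False
                state = self.rets[(state, sym, gamma)]
            else:                         # internal action: stack untouched
                if (state, sym) not in self.ints:
                    return False
                state = self.ints[(state, sym)]
        # Simplification: accept only well-matched words (empty stack at the end).
        return state in self.accepting and not stack

def violates(undesirable: VPA, impl_traces):
    """Return the first observed trace that lies in the undesirable VPL, else None."""
    for trace in impl_traces:
        if undesirable.accepts(trace):
            return trace
    return None

# Toy undesirable behavior: a 'req' (call) answered by 'nack' (return).
undesirable = VPA(
    initial="q0", accepting=["q2"],
    calls={("q0", "req"): ("q1", "R")},
    rets={("q1", "nack", "R"): "q2"},
    ints={},
)
trace = [("req", CALL), ("nack", RET)]
print(violates(undesirable, [trace]))   # prints the offending trace
```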
Related papers
- Keeping Behavioral Programs Alive: Specifying and Executing Liveness Requirements [2.4387555567462647]
We propose an idiom for tagging states with "must-finish," indicating that tasks are yet to be completed.
We also offer semantics and two execution mechanisms, one based on a translation to Büchi automata and the other based on a Markov decision process (MDP).
arXiv Detail & Related papers (2024-04-02T11:36:58Z)
- Discovering Decision Manifolds to Assure Trusted Autonomous Systems [0.0]
We propose an optimization-based search technique for capturing the range of correct and incorrect responses a system could exhibit.
This manifold provides a more detailed understanding of system reliability than traditional testing or Monte Carlo simulations.
In this proof-of-concept, we apply our method to a software-in-the-loop evaluation of an autonomous vehicle.
arXiv Detail & Related papers (2024-02-12T16:55:58Z) - Learning Recovery Strategies for Dynamic Self-healing in Reactive
Systems [1.7218973692320518]
Self-healing systems depend on following a set of predefined instructions to recover from a known failure state.
Our proposal targets complex reactive systems, defining monitors as predicates specifying satisfiability conditions of system properties.
We use a Reinforcement Learning-based technique to learn a recovery strategy based on users' corrective sequences.
arXiv Detail & Related papers (2024-01-22T23:34:21Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z) - Relational Action Bases: Formalization, Effective Safety Verification,
and Invariants (Extended Version) [67.99023219822564]
We introduce the general framework of relational action bases (RABs).
RABs generalize existing models by lifting both restrictions.
We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes.
arXiv Detail & Related papers (2022-08-12T17:03:50Z) - Conformance Checking with Uncertainty via SMT (Extended Version) [66.58864135810981]
We show how to solve the problem of checking conformance of uncertain logs against data-aware reference processes.
Our approach is modular, in that it homogeneously accommodates different types of uncertainty.
We show the correctness of our approach and witness feasibility through a proof-of-concept implementation.
arXiv Detail & Related papers (2022-06-15T11:39:45Z) - Towards Dynamic Consistency Checking in Goal-directed Predicate Answer
Set Programming [2.3204178451683264]
We present a variation of the top-down evaluation strategy, termed Dynamic Consistency checking.
This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program.
We have experimentally observed speedups of up to 90x w.r.t. the standard versions of s(CASP).
arXiv Detail & Related papers (2021-10-22T20:38:48Z) - Improving Conversational Question Answering Systems after Deployment
using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback.
Our work opens the prospect to exploit interactions with real users and improve conversational systems after deployment.
arXiv Detail & Related papers (2020-11-01T19:50:34Z) - A Controllable Model of Grounded Response Generation [122.7121624884747]
Current end-to-end neural conversation models inherently lack the flexibility to impose semantic control in the response generation process.
We propose a framework that we call controllable grounded response generation (CGRG).
We show that using this framework, a transformer based model with a novel inductive attention mechanism, trained on a conversation-like Reddit dataset, outperforms strong generation baselines.
arXiv Detail & Related papers (2020-05-01T21:22:08Z) - Conditional Self-Attention for Query-based Summarization [49.616774159367516]
We propose conditional self-attention (CSA), a neural network module designed for conditional dependency modeling.
Experiments on Debatepedia and HotpotQA benchmark datasets show CSA consistently outperforms vanilla Transformer.
arXiv Detail & Related papers (2020-02-18T02:22:31Z) - Counter-example Guided Learning of Bounds on Environment Behavior [11.357397596759172]
We present a data-driven solution that allows for a system to be evaluated for specification conformance without an accurate model of the environment.
Our approach involves learning a conservative reactive bound of the environment's behavior using data and specification of the system's desired behavior.
arXiv Detail & Related papers (2020-01-20T19:58:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.