To Do or Not to Do: Semantics and Patterns for Do Activities in UML PSSM State Machines
- URL: http://arxiv.org/abs/2309.14884v3
- Date: Mon, 8 Jul 2024 11:57:46 GMT
- Title: To Do or Not to Do: Semantics and Patterns for Do Activities in UML PSSM State Machines
- Authors: Márton Elekes, Vince Molnár, Zoltán Micskei
- Abstract summary: DoActivity behaviors describe behavior that is executed independently from the state machine once entered in a given state.
The specification or textbooks are vague about how the doActivity behavior construct should be appropriately used.
We analyzed the semantics by collecting evidence from cross-checking the text of the specification, its semantic model and executable test cases, and the simulators supporting PSSM.
- Score: 0.11470070927586014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State machines are used in engineering many types of software-intensive systems. UML State Machines extend simple finite state machines with powerful constructs. Among the many extensions, there is one seemingly simple and innocent language construct that fundamentally changes state machines' reactive model of computation: doActivity behaviors. DoActivity behaviors describe behavior that is executed independently from the state machine once entered in a given state, typically modeling complex computation or communication as background tasks. However, the UML specification or textbooks are vague about how the doActivity behavior construct should be appropriately used. This lack of guidance is a severe issue as, when improperly used, doActivities can cause concurrent, non-deterministic bugs that are especially challenging to find and could ruin a seemingly correct software design. The Precise Semantics of UML State Machines (PSSM) specification introduced detailed operational semantics for state machines. To the best of our knowledge, there is no rigorous review yet of doActivity's semantics as specified in PSSM. We analyzed the semantics by collecting evidence from cross-checking the text of the specification, its semantic model and executable test cases, and the simulators supporting PSSM. We synthesized insights about subtle details and emergent behaviors relevant to tool developers and advanced modelers. We reported inconsistencies and missing clarifications in more than 20 issues to the standardization committee. Based on these insights, we studied 11 patterns for doActivities detailing the consequences of using a doActivity in a given situation and discussing countermeasures or alternative design choices. We hope that our analysis of the semantics and the patterns help vendors develop conformant simulators or verification tools and engineers design better state machine models.
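To make the construct concrete, the following minimal Python sketch (an illustration of the general idea only, not code from the paper and not a PSSM-conformant implementation) models a state whose doActivity is launched as a background thread on entry and aborted on exit; all names in it (StateWithDoActivity, poll_sensor, cancel) are hypothetical.

```python
import threading
import time

# Minimal sketch, assuming a simplified reading of doActivity semantics:
# the doActivity starts when the state is entered and is aborted when the
# state is exited. Class and function names are illustrative assumptions,
# not constructs defined by UML or PSSM.

class StateWithDoActivity:
    def __init__(self, name, do_activity):
        self.name = name
        self._do_activity = do_activity
        self._cancel = threading.Event()
        self._worker = None

    def enter(self):
        # Entering the state starts the doActivity, which then runs
        # independently of the state machine's run-to-completion steps.
        self._cancel.clear()
        self._worker = threading.Thread(
            target=self._do_activity, args=(self._cancel,), daemon=True)
        self._worker.start()

    def exit(self):
        # Leaving the state aborts the still-running doActivity. Whether any
        # partial effects of the doActivity are visible at this point is the
        # kind of non-deterministic interleaving the paper warns about.
        self._cancel.set()
        self._worker.join()


def poll_sensor(cancel: threading.Event):
    # doActivity body: background computation/communication until aborted.
    while not cancel.is_set():
        time.sleep(0.1)  # placeholder for the actual background work


state = StateWithDoActivity("Monitoring", poll_sensor)
state.enter()    # state entered: doActivity starts in the background
time.sleep(0.3)  # the state machine keeps dispatching events meanwhile
state.exit()     # a transition out of the state terminates the doActivity
```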
Related papers
- Formalizing UML State Machines for Automated Verification -- A Survey [14.99225452541953]
The Unified Modeling Language (UML) is a standard for modeling dynamic systems.
The specification, maintained by the Object Management Group (OMG), is documented in natural language.
arXiv Detail & Related papers (2024-07-24T12:15:31Z)
- Learning with Language-Guided State Abstractions [58.199148890064826]
Generalizable policy learning in high-dimensional observation spaces is facilitated by well-designed state representations.
Our method, LGA, uses a combination of natural language supervision and background knowledge from language models to automatically build state representations tailored to unseen tasks.
Experiments on simulated robotic tasks show that LGA yields state abstractions similar to those designed by humans, but in a fraction of the time.
arXiv Detail & Related papers (2024-02-28T23:57:04Z)
- Transformer-based Causal Language Models Perform Clustering [20.430255724239448]
We introduce a simplified instruction-following task and use synthetic datasets to analyze a Transformer-based causal language model.
Our findings suggest that the model learns task-specific information by clustering data within its hidden space, with this clustering process evolving dynamically during learning.
arXiv Detail & Related papers (2024-02-19T14:02:31Z)
- Towards Truly Zero-shot Compositional Visual Reasoning with LLMs as Programmers [54.83459025465947]
Even the largest models struggle with compositional reasoning, generalization, fine-grained spatial and temporal reasoning, and counting.
Visual reasoning with large language models (LLMs) as controllers can, in principle, address these limitations by decomposing the task and solving subtasks by orchestrating a set of (visual) tools.
We present a framework that mitigates these issues by introducing spatially and temporally abstract routines and by leveraging a small number of labeled examples to automatically generate in-context examples.
arXiv Detail & Related papers (2024-01-03T20:48:47Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Eliciting Human Preferences with Language Models [56.68637202313052]
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
We propose to use *LMs themselves* to guide the task specification process.
We study GATE in three domains: email validation, content recommendation, and moral reasoning.
arXiv Detail & Related papers (2023-10-17T21:11:21Z)
- Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? [75.79305790453654]
Coaxing out desired behaviors from pretrained models, while avoiding undesirable ones, has redefined NLP.
We argue for a systematic effort to decompose language model behavior into categories that explain cross-task performance.
arXiv Detail & Related papers (2023-07-31T22:58:41Z)
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model [63.66204449776262]
Instruct2Act is a framework that maps multi-modal instructions to sequential actions for robotic manipulation tasks.
Our approach is adjustable and flexible in accommodating various instruction modalities and input types.
Our zero-shot method outperformed many state-of-the-art learning-based policies in several tasks.
arXiv Detail & Related papers (2023-05-18T17:59:49Z)
- Modelling Concurrency Bugs Using Machine Learning [0.0]
This project aims to compare both common and recent machine learning approaches.
We define a synthetic dataset that we generate with the aim of simulating real-life (concurrent) programs.
We formulate hypotheses about fundamental limits of various machine learning model types.
arXiv Detail & Related papers (2023-05-08T17:30:24Z)
- Semantic based model of Conceptual Work Products for formal verification of complex interactive systems [3.0458872052651973]
We describe an automatic logic reasoner to verify objective specifications for conceptual work products.
The conceptual work product specifications serve as a fundamental output requirement, which must be clearly stated, correct, and solvable.
Our Work Ontology, with tools from the Semantic Web, is needed to translate class and state diagrams so that solvability can be verified with automatic reasoning.
arXiv Detail & Related papers (2020-08-04T15:10:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.