eUDEVS: Executable UML with DEVS Theory of Modeling and Simulation
- URL: http://arxiv.org/abs/2407.08281v1
- Date: Thu, 11 Jul 2024 08:29:46 GMT
- Title: eUDEVS: Executable UML with DEVS Theory of Modeling and Simulation
- Authors: José L. Risco-Martín, J. M. Cruz, Saurabh Mittal, Bernard P. Zeigler
- Abstract summary: Modeling and Simulation (M&S) for system design and prototyping is practiced today both in industry and academia.
This paper presents an integrated approach towards cross-transformations between UML and DEVS using the proposed eUDEVS.
We also place the proposed eUDEVS in a much larger unifying framework called the DEVS Unified Process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Modeling and Simulation (M&S) for system design and prototyping is practiced today both in industry and academia. Modeling and simulation are two different areas altogether, each with specific objectives. However, most of the time these two separate areas are treated together, and the developed code is tightly woven around both the model and the underlying simulator that executes it. This constrains both the model development and the simulation engine, which impacts the scalability of the developed code. Furthermore, a lot of time is spent on model development because it requires both domain knowledge and simulation techniques, which in turn requires communication between users and developers. Unified Modeling Language (UML) is widely accepted in industry, whereas Discrete Event System Specification (DEVS) based modeling, which separates the model from the simulator, provides a cleaner methodology to develop models and is widely used in academia. DEVS today is used by engineers who understand discrete event modeling at a very detailed level and are able to translate requirements into DEVS modeling code. There have been earlier efforts to integrate UML and DEVS, but they have not succeeded in providing a transformation mechanism due to inherent differences between these two modeling paradigms. This paper presents an integrated approach towards cross-transformations between UML and DEVS using the proposed eUDEVS, which stands for executable UML based on DEVS. Further, we show that the obtained DEVS models belong to a specific class of DEVS models called Finite Deterministic DEVS (FD-DEVS), which is available as a W3C XML Schema in XFD-DEVS. We also place the proposed eUDEVS in a much larger unifying framework called the DEVS Unified Process, which allows a bifurcated, model-continuity-based lifecycle methodology for systems M&S. Finally, we demonstrate the presented concepts with a complete example.
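The abstract's central technical notion is that UML behavioral models (e.g., state machines) map onto Finite Deterministic DEVS (FD-DEVS): a restricted DEVS class with a finite state set, deterministic internal and external transition functions, and a finite time advance per state. As a rough illustration only, and not the eUDEVS or XFD-DEVS implementation, the sketch below encodes a toy FD-DEVS-style atomic model in Python; the traffic-light states, event names, and the AtomicFDDEVS/simulate helpers are hypothetical assumptions made for this example.

```python
# Minimal, self-contained sketch of a Finite Deterministic DEVS (FD-DEVS)
# atomic model: finite state set, deterministic transitions, and a time
# advance per state. The example system and all names are illustrative only.

INFINITY = float("inf")


class AtomicFDDEVS:
    """A toy FD-DEVS atomic model of a traffic light (hypothetical example)."""

    def __init__(self):
        # Finite state set S and initial state.
        self.states = {"GREEN", "YELLOW", "RED"}
        self.state = "GREEN"
        # Time advance ta(s): how long the model stays in each state
        # before its internal transition fires.
        self.ta = {"GREEN": 30.0, "YELLOW": 5.0, "RED": 25.0}
        # Deterministic internal transition function delta_int(s).
        self.delta_int = {"GREEN": "YELLOW", "YELLOW": "RED", "RED": "GREEN"}
        # Deterministic external transition function delta_ext(s, x):
        # an external "EMERGENCY" input forces the light to RED.
        self.delta_ext = {("GREEN", "EMERGENCY"): "RED",
                          ("YELLOW", "EMERGENCY"): "RED"}
        # Output function lambda(s), emitted just before an internal transition.
        self.output = {"GREEN": "switch_to_yellow",
                       "YELLOW": "switch_to_red",
                       "RED": "switch_to_green"}

    def time_advance(self):
        return self.ta.get(self.state, INFINITY)

    def internal_transition(self):
        self.state = self.delta_int[self.state]

    def external_transition(self, event):
        # If no entry exists, the event is ignored (state is unchanged).
        self.state = self.delta_ext.get((self.state, event), self.state)

    def output_fn(self):
        return self.output[self.state]


def simulate(model, horizon=120.0):
    """Drive internal transitions only, printing outputs up to `horizon`."""
    t = 0.0
    while True:
        t += model.time_advance()
        if t > horizon:
            break
        print(f"t={t:6.1f}  output={model.output_fn():16s}", end=" ")
        model.internal_transition()
        print(f"-> state={model.state}")


if __name__ == "__main__":
    simulate(AtomicFDDEVS())
```

In XFD-DEVS, roughly the same information (the state set, the per-state time advances, and the transition and output tables) would be captured declaratively in an XML document conforming to the W3C XML Schema mentioned in the abstract, which is what makes an automated UML-to-DEVS mapping tractable.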
Related papers
- Unifying Multimodal Large Language Model Capabilities and Modalities via Model Merging [103.98582374569789]
Model merging aims to combine multiple expert models into a single model, thereby reducing storage and serving costs.
Previous studies have primarily focused on merging visual classification models or Large Language Models (LLMs) for code and math tasks.
We introduce the model merging benchmark for MLLMs, which includes multiple tasks such as VQA, Geometry, Chart, OCR, and Grounding, providing both LoRA and full fine-tuning models.
arXiv Detail & Related papers (2025-05-26T12:23:14Z) - LLM-enabled Instance Model Generation [4.52634430160579]
This work explores the generation of instance models using large language models (LLMs).
We propose a two-step approach: first, using LLMs to produce a simplified structured output containing all necessary instance model information, and then compiling this intermediate representation into a valid XMI file.
Results show that the proposed method significantly improves the usability of LLMs for instance model generation tasks.
arXiv Detail & Related papers (2025-03-28T16:34:29Z) - VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks [60.5257456681402]
We build universal embedding models capable of handling a wide range of downstream tasks.
Our contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark), which covers 4 meta-tasks (i.e. classification, visual question answering, multimodal retrieval, and visual grounding) and 36 datasets, including 20 training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model -> Vector), a contrastive training framework that converts any state-of-the-art vision-language model into an embedding model via training on MMEB.
arXiv Detail & Related papers (2024-10-07T16:14:05Z) - Towards Synthetic Trace Generation of Modeling Operations using In-Context Learning Approach [1.8874331450711404]
We propose a conceptual framework that combines modeling event logs, intelligent modeling assistants, and the generation of modeling operations.
In particular, the architecture comprises modeling components that help the designer specify the system, record its operation within a graphical modeling environment, and automatically recommend relevant operations.
arXiv Detail & Related papers (2024-08-26T13:26:44Z) - EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z) - Typing Requirement Model as Coroutines [6.039941247332987]
This paper contributes a type system that represents pre- and post-conditions in the contract sections in a requirement model.
By doing so, our type system ensures that the contracts defined in it are executed as outlined in the accompanying sequence diagram.
We assessed our approach using four case studies provided by RM2PT, validating the accuracy of the models.
arXiv Detail & Related papers (2024-05-16T12:50:15Z) - Design Patterns for Multilevel Modeling and Simulation [3.0248879829045383]
Multilevel modeling and simulation (M&S) is becoming increasingly relevant due to the benefits that this methodology offers.
This paper presents a set of design patterns that provide a systematic approach for designing and implementing multilevel models.
arXiv Detail & Related papers (2024-03-25T12:51:22Z) - Adapting Large Language Models for Content Moderation: Pitfalls in Data
Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation.
arXiv Detail & Related papers (2023-10-05T09:09:44Z) - Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z) - Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z) - Quantitatively Assessing the Benefits of Model-driven Development in Agent-based Modeling and Simulation [80.49040344355431]
This paper compares the use of MDD and ABMS platforms in terms of effort and developer mistakes.
The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo.
arXiv Detail & Related papers (2020-06-15T23:29:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.