Event-Driven Inconsistency Detection Between UML Class and Sequence Diagrams
- URL: http://arxiv.org/abs/2511.07742v1
- Date: Wed, 12 Nov 2025 01:14:23 GMT
- Title: Event-Driven Inconsistency Detection Between UML Class and Sequence Diagrams
- Authors: Luan Lazzari, Kleinner Farias
- Abstract summary: Educators and students often struggle to understand and manage inconsistencies that arise during the modeling process. The Harmony Validator tool adopts an event-driven architecture that continuously monitors modeling actions and notifies users of emerging inconsistencies in real time.
- Score: 0.6660458629649825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling is a central and demanding activity in software engineering that requires skills such as abstraction, consistency maintenance, and precise communication. These skills are difficult to master and even harder to teach effectively. Educators and students often struggle to understand and manage inconsistencies that arise during the modeling process. To address this challenge, we present Harmony Validator, a tool integrated as a plugin for the Papyrus modeling environment, designed to automatically detect and report inconsistencies in UML models, including class and sequence diagrams. The tool adopts an event-driven architecture that continuously monitors modeling actions and notifies users of emerging inconsistencies in real time. This approach enhances awareness of model integrity and supports the iterative refinement of design artifacts. The paper describes the architecture, detection mechanisms, and usage scenarios of Harmony Validator. It also includes a case study conducted with students in a software engineering course to evaluate the perceived usefulness and benefits of UML modeling in teaching and learning. Our results indicate that Harmony Validator fosters a better understanding of model consistency and promotes reflective learning practices in software modeling education.
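The event-driven approach described in the abstract can be illustrated with a minimal sketch: a detector subscribes to modeling events and, on each new sequence-diagram message, re-checks that the invoked operation is declared on the receiver's class. All names below are hypothetical illustrations of the idea; the actual Harmony Validator is a Papyrus (Eclipse) plugin and does not expose this API.

```python
# Hypothetical sketch of event-driven inconsistency detection between a
# UML class diagram and a sequence diagram. Illustrative names only.
from dataclasses import dataclass, field


@dataclass
class ClassDiagram:
    # class name -> set of declared operation names
    operations: dict = field(default_factory=dict)


@dataclass
class Message:
    receiver: str   # class of the lifeline receiving the call
    operation: str  # operation invoked by the message


class InconsistencyDetector:
    """Re-checks consistency on every modeling event and notifies listeners."""

    def __init__(self, class_diagram):
        self.class_diagram = class_diagram
        self.messages = []
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def on_message_added(self, message):
        # Event handler: a new message was drawn in the sequence diagram.
        self.messages.append(message)
        self._check()

    def _check(self):
        # A message is inconsistent if its operation is not declared
        # on the receiving class in the class diagram.
        for msg in self.messages:
            declared = self.class_diagram.operations.get(msg.receiver, set())
            if msg.operation not in declared:
                for notify in self.listeners:
                    notify(f"{msg.receiver} has no operation '{msg.operation}'")


# Usage: a message referencing an undeclared operation triggers a notification.
cd = ClassDiagram(operations={"Account": {"deposit", "withdraw"}})
detector = InconsistencyDetector(cd)
warnings = []
detector.subscribe(warnings.append)
detector.on_message_added(Message("Account", "deposit"))   # consistent
detector.on_message_added(Message("Account", "transfer"))  # inconsistent
```

Because the check runs inside the event handler rather than as a batch validation pass, feedback arrives while the inconsistency is being introduced, which matches the real-time notification behavior the paper describes.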
Related papers
- NOMAD: A Multi-Agent LLM System for UML Class Diagram Generation from Natural Language Requirements [20.080985332719383]
Large Language Models (LLMs) are increasingly utilised in software engineering, yet their ability to generate structured artefacts such as diagrams remains underexplored. In this work we present NOMAD, a cognitively inspired, modular multi-agent framework that decomposes generation into a series of role-specialised subtasks. Each agent handles a distinct modelling activity, such as entity extraction, relationship classification, and diagram synthesis, mirroring the goal-directed reasoning processes of an engineer.
arXiv Detail & Related papers (2025-11-27T12:36:25Z)
- MCeT: Behavioral Model Correctness Evaluation using Large Language Models [4.34964016971127]
With the growing use of Large Language Models (LLM) as AI modeling assistants, more automation will be involved in generating diagrams. We propose MCeT, the first fully automated tool to evaluate the correctness of a behavioral model, sequence diagrams in particular, against its corresponding requirements text.
arXiv Detail & Related papers (2025-08-01T13:41:58Z)
- MLE-Dojo: Interactive Environments for Empowering LLM Agents in Machine Learning Engineering [57.156093929365255]
A Gym-style framework for systematically evaluating and improving autonomous large language model (LLM) agents through reinforcement learning. MLE-Dojo covers diverse, open-ended MLE tasks carefully curated to reflect realistic engineering scenarios. Its fully executable environment supports comprehensive agent training via both supervised fine-tuning and reinforcement learning.
arXiv Detail & Related papers (2025-05-12T17:35:43Z)
- ToolACE-DEV: Self-Improving Tool Learning via Decomposition and EVolution [77.86222359025011]
We propose ToolACE-DEV, a self-improving framework for tool learning. First, we decompose the tool-learning objective into sub-tasks that enhance basic tool-making and tool-using abilities. We then introduce a self-evolving paradigm that allows lightweight models to self-improve, reducing reliance on advanced LLMs.
arXiv Detail & Related papers (2025-05-12T12:48:30Z)
- Model Utility Law: Evaluating LLMs beyond Performance through Mechanism Interpretable Metric [99.56567010306807]
Large Language Models (LLMs) have become indispensable across academia, industry, and daily applications. One core challenge of evaluation in the large language model (LLM) era is the generalization issue. We propose the Model Utilization Index (MUI), a metric enhanced with mechanism interpretability that complements traditional performance scores.
arXiv Detail & Related papers (2025-04-10T04:09:47Z)
- Thinging Machines for Requirements Engineering: Superseding Flowchart-Based Modeling [0.0]
It is claimed that present elicitation of requirements models focuses on collecting information using natural language. It is proposed that a solution to this problem involves using complexity theory, transdisciplinarity, multidimensionality, and knowledge management.
arXiv Detail & Related papers (2025-01-28T05:30:45Z) - Towards Synthetic Trace Generation of Modeling Operations using In-Context Learning Approach [1.8874331450711404]
We propose a conceptual framework that combines modeling event logs, intelligent modeling assistants, and the generation of modeling operations.
In particular, the architecture comprises modeling components that help the designer specify the system, record its operation within a graphical modeling environment, and automatically recommend relevant operations.
arXiv Detail & Related papers (2024-08-26T13:26:44Z) - CogCoM: A Visual Language Model with Chain-of-Manipulations Reasoning [61.21923643289266]
Chain of Manipulations is a mechanism that enables Vision-Language Models to solve problems step by step with evidence. After training, models can solve various visual problems by actively eliciting intrinsic manipulations (e.g., grounding, zooming in) without involving external tools. Our trained model, CogCoM, achieves state-of-the-art performance across 9 benchmarks from 4 categories.
arXiv Detail & Related papers (2024-02-06T18:43:48Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.