Leveraging Large Language Models for Use Case Model Generation from Software Requirements
- URL: http://arxiv.org/abs/2511.09231v2
- Date: Fri, 14 Nov 2025 01:46:43 GMT
- Title: Leveraging Large Language Models for Use Case Model Generation from Software Requirements
- Authors: Tobias Eisenreich, Nicholas Friedlaender, Stefan Wagner
- Abstract summary: The proposed method integrates an open-weight LLM to systematically extract actors and use cases from software requirements. The results show a substantial acceleration, reducing the modeling time by 60%. Besides improving the modeling efficiency, the participants indicated that the method provided valuable guidance in the process.
- Score: 2.5501791028999583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Use case modeling employs user-centered scenarios to outline system requirements. These help to achieve consensus among relevant stakeholders. Because the manual creation of use case models is demanding and time-consuming, it is often skipped in practice. This study explores the potential of Large Language Models (LLMs) to assist in this tedious process. The proposed method integrates an open-weight LLM to systematically extract actors and use cases from software requirements with advanced prompt engineering techniques. The method is evaluated using an exploratory study conducted with five professional software engineers, which compares traditional manual modeling to the proposed LLM-based approach. The results show a substantial acceleration, reducing the modeling time by 60%. At the same time, the model quality remains on par. Besides improving the modeling efficiency, the participants indicated that the method provided valuable guidance in the process.
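The abstract describes prompting an open-weight LLM to extract actors and use cases from requirements text. The paper does not publish its prompts or pipeline, so the following is only a minimal sketch of that general idea: a hypothetical prompt template requesting structured JSON, a parsing step, and a basic consistency check. The `llm_call` parameter and `fake_llm` stub are assumptions standing in for any real model backend.

```python
import json

# Hypothetical prompt template (not the paper's actual prompt); it asks the
# model to return the extracted use case model as structured JSON.
PROMPT_TEMPLATE = (
    "You are a requirements analyst. From the requirements below, list every "
    "actor and every use case. Respond only with JSON of the form "
    '{{"actors": [...], "use_cases": [{{"name": "...", "actor": "..."}}]}}\n\n'
    "Requirements:\n{requirements}"
)

def extract_use_case_model(requirements: str, llm_call) -> dict:
    """Build the extraction prompt, query the LLM, and parse its JSON reply."""
    prompt = PROMPT_TEMPLATE.format(requirements=requirements)
    raw = llm_call(prompt)  # llm_call wraps whatever open-weight model is used
    model = json.loads(raw)
    # Minimal sanity check: every use case must reference a known actor.
    actors = set(model["actors"])
    if not all(uc["actor"] in actors for uc in model["use_cases"]):
        raise ValueError("use case references an unknown actor")
    return model

# Stubbed LLM response for demonstration; a real setup would call a locally
# hosted model instead of returning canned output.
def fake_llm(prompt: str) -> str:
    return json.dumps({
        "actors": ["Customer"],
        "use_cases": [{"name": "Place order", "actor": "Customer"}],
    })

model = extract_use_case_model("A customer can place an order online.", fake_llm)
print(model["actors"])
```

In practice, the quality of such extraction depends heavily on the prompt design and on validating the model's JSON output before building the diagram.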
Related papers
- Approach to Finding a Robust Deep Learning Model [0.28675177318965045]
The rapid development of machine learning (ML) and artificial intelligence (AI) applications requires the training of large numbers of models. We propose a novel approach for determining model robustness using a proposed model selection algorithm designed as a meta-algorithm. Within this framework, we address the influence of training sample size, model weight, and inductive bias on the robustness of deep learning models.
arXiv Detail & Related papers (2025-05-22T20:05:20Z) - Efficient Model Selection for Time Series Forecasting via LLMs [52.31535714387368]
We propose to leverage Large Language Models (LLMs) as a lightweight alternative for model selection. Our method eliminates the need for explicit performance matrices by utilizing the inherent knowledge and reasoning capabilities of LLMs.
arXiv Detail & Related papers (2025-04-02T20:33:27Z) - A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and their ability to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z) - Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z) - Towards Synthetic Trace Generation of Modeling Operations using In-Context Learning Approach [1.8874331450711404]
We propose a conceptual framework that combines modeling event logs, intelligent modeling assistants, and the generation of modeling operations.
In particular, the architecture comprises modeling components that help the designer specify the system, record its operation within a graphical modeling environment, and automatically recommend relevant operations.
arXiv Detail & Related papers (2024-08-26T13:26:44Z) - Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
Model merging is an efficient empowerment technique in the machine learning community.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z) - MAO: A Framework for Process Model Generation with Multi-Agent Orchestration [12.729855942941724]
This article explores a framework for automatically generating process models with multi-agent orchestration (MAO).
Large language models are prone to hallucinations, so the agents need to review and repair semantic hallucinations in process models.
Experiments demonstrate that the process models generated by our framework surpass manual modeling by 89%, 61%, 52%, and 75% on four different datasets.
arXiv Detail & Related papers (2024-08-04T03:32:17Z) - What is the best model? Application-driven Evaluation for Large Language Models [7.054112690519648]
A-Eval is an application-driven evaluation benchmark for general large language models.
We construct a dataset comprising 678 question-and-answer pairs through a process of collecting, annotating, and reviewing.
We reveal interesting laws regarding model scale and task difficulty level and propose a feasible method for selecting the best model.
arXiv Detail & Related papers (2024-06-14T04:52:15Z) - Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression [64.07696663255155]
Large-scale pre-trained language models (LLMs) have demonstrated exceptional performance in various natural language processing (NLP) tasks.
However, the massive size of these models poses huge challenges for their deployment in real-world applications.
We introduce a novel compression paradigm called Retrieval-based Knowledge Transfer (RetriKT) which effectively transfers the knowledge of LLMs to extremely small-scale models.
arXiv Detail & Related papers (2023-10-24T07:58:20Z) - Understanding User Intent Modeling for Conversational Recommender Systems: A Systematic Literature Review [1.3630870408844922]
We conducted a systematic literature review to gather data on models typically employed in designing conversational recommender systems.
We developed a decision model to assist researchers in selecting the most suitable models for their systems.
Our study contributes practical insights and a comprehensive understanding of user intent modeling, empowering the development of more effective and personalized conversational recommender systems.
arXiv Detail & Related papers (2023-08-05T22:50:21Z) - Large Language Models in the Workplace: A Case Study on Prompt Engineering for Job Type Classification [58.720142291102135]
This case study investigates the task of job classification in a real-world setting.
The goal is to determine whether an English-language job posting is appropriate for a graduate or entry-level position.
arXiv Detail & Related papers (2023-03-13T14:09:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.