Vision to Specification: Automating the Transition from Conceptual Features to Functional Requirements
- URL: http://arxiv.org/abs/2505.12262v1
- Date: Sun, 18 May 2025 07:01:50 GMT
- Title: Vision to Specification: Automating the Transition from Conceptual Features to Functional Requirements
- Authors: Xiaoli Lian, Jiajun Wu, Xiaoyun Gao, Shuaisong Wang, Li Zhang
- Abstract summary: The EasyFR approach recommends Semantic Role Labeling (SRL) sequences for the given abstract features to guide Pre-trained Language Models (PLMs) in producing cohesive functional requirements (FRs). Our results indicate a notable step forward in the realm of automated requirements synthesis, holding potential to improve the process of requirements specification in future software projects.
- Score: 10.85799957734291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The translation of high-level abstract features into clear and testable functional requirements (FRs) is a crucial step in software development, bridging the gap between user needs and technical specifications. In engineering practice, this translation demands significant expert effort. Our approach, EasyFR, streamlines the process by recommending Semantic Role Labeling (SRL) sequences for the given abstract features to guide Pre-trained Language Models (PLMs) in producing cohesive FR statements. By analyzing ten diverse datasets, we induce two variable SRL templates, each including two configurable parts. For a concrete feature, our proposed Key2Temp model constructs the appropriate variant of the SRL template by identifying a variable SRL template and placing the feature tokens in the appropriate slots. In this way, our approach reframes requirement generation as a structured slot-filling activity. Experimental validation on four open datasets demonstrates that EasyFR outperforms three advanced natural language generation (NLG) approaches, including GPT-4, particularly when existing FRs are available for training. The positive influence of our SRL template variant recommendations is further confirmed through an ablation study. We believe these results mark a notable step forward in automated requirements synthesis, with the potential to improve requirements specification in future software projects.
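To make the slot-filling framing concrete, here is a minimal sketch in the spirit of the abstract. The two templates and the rule-based selector are illustrative assumptions: the paper's Key2Temp is a learned model, and its SRL templates are induced from ten datasets rather than reproduced here.

```python
# A minimal, illustrative sketch of EasyFR-style template slot-filling.
# The real Key2Temp is a learned model; the heuristic selector below is a
# hypothetical stand-in, and the two templates are simplified placeholders.

# Two simplified SRL template variants with configurable slots.
TEMPLATES = {
    "capability": "The system shall {action} {object}.",
    "conditional": "When {condition}, the system shall {action} {object}.",
}

def select_template(feature_slots: dict) -> str:
    """Hypothetical stand-in for Key2Temp: pick a variant from the slots present."""
    return "conditional" if "condition" in feature_slots else "capability"

def generate_fr(feature_slots: dict) -> str:
    """Reframe FR generation as structured slot-filling over the chosen template."""
    template = TEMPLATES[select_template(feature_slots)]
    return template.format(**feature_slots)

if __name__ == "__main__":
    # Toy abstract features, already decomposed into SRL-style roles.
    print(generate_fr({"action": "export", "object": "the audit log as CSV"}))
    print(generate_fr({"condition": "a login attempt fails three times",
                       "action": "lock", "object": "the account for 15 minutes"}))
```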
Related papers
- LARES: Latent Reasoning for Sequential Recommendation [96.26996622771593]
We present LARES, a novel and scalable LAtent REasoning framework for Sequential recommendation. Our proposed approach employs a recurrent architecture that allows flexible expansion of reasoning depth without increasing parameter complexity. Our framework exhibits seamless compatibility with existing advanced models, further improving their recommendation performance.
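As a loose illustration of the recurrent idea in this summary, the following PyTorch sketch reuses one shared cell for a configurable number of latent steps, so deeper reasoning adds no parameters. The cell type and dimensions are assumptions, not the paper's architecture.

```python
# Sketch: parameter-free expansion of reasoning depth via a shared cell.
import torch
import torch.nn as nn

class LatentReasoner(nn.Module):
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        # A single shared cell: more steps != more parameters.
        self.cell = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, state: torch.Tensor, steps: int) -> torch.Tensor:
        for _ in range(steps):
            state = self.cell(state, state)  # refine the latent state in place
        return state

if __name__ == "__main__":
    reasoner = LatentReasoner()
    user_state = torch.randn(8, 64)        # e.g., encoded interaction sequences
    shallow = reasoner(user_state, steps=1)
    deep = reasoner(user_state, steps=4)   # deeper reasoning, same parameter count
    print(shallow.shape, deep.shape)
```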
arXiv Detail & Related papers (2025-05-22T16:22:54Z)
- Teaching Large Language Models to Maintain Contextual Faithfulness via Synthetic Tasks and Reinforcement Learning [80.27561080938747]
We propose a systematic framework, CANOE, to improve the faithfulness of large language models (LLMs) in both short-form and long-form generation tasks without human annotations. Also, we propose Dual-GRPO, a rule-based reinforcement learning method that includes three tailored rule-based rewards derived from synthesized short-form QA data. Experimental results show that CANOE greatly improves the faithfulness of LLMs across 11 different downstream tasks, even outperforming the most advanced LLMs.
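The summary names three tailored rule-based rewards but not their definitions, so the sketch below is only a hypothetical example of rule-based reward shaping over short-form QA, not CANOE's actual reward design.

```python
# Hypothetical rule-based reward over synthesized short-form QA data.
def rule_based_reward(model_answer: str, gold_answer: str, context: str) -> float:
    answer = model_answer.strip().lower()
    # Rule 1: exact match with the synthesized gold answer.
    if answer == gold_answer.strip().lower():
        return 1.0
    # Rule 2: partial credit if the answer at least appears in the context,
    # i.e., the model stayed grounded even though it missed the gold span.
    if answer and answer in context.lower():
        return 0.3
    # Rule 3: no reward for unsupported (potentially unfaithful) answers.
    return 0.0

print(rule_based_reward("Paris", "Paris", "Paris is the capital of France."))   # 1.0
print(rule_based_reward("France", "Paris", "Paris is the capital of France."))  # 0.3
```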
arXiv Detail & Related papers (2025-05-22T10:10:07Z)
- How Effective are Generative Large Language Models in Performing Requirements Classification? [4.429729688079712]
This study explores the effectiveness of three generative large language models (LLMs) in performing both binary and multi-class requirements classification. Our study concludes that while factors like prompt design and LLM architecture are universally important, others, such as dataset variations, have a more situational impact, depending on the complexity of the classification task.
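For orientation, here is a small sketch of how prompt-based binary or multi-class requirements classification might be set up. The label sets, prompt wording, and the `call_llm` stub are assumptions, not the study's protocol.

```python
# Sketch of prompt-based requirements classification with a generative LLM.
BINARY_LABELS = ["functional", "non-functional"]
MULTI_LABELS = ["usability", "security", "performance", "maintainability"]

def build_prompt(requirement: str, labels: list[str]) -> str:
    return (
        "Classify the following software requirement.\n"
        f"Allowed labels: {', '.join(labels)}\n"
        f"Requirement: {requirement}\n"
        "Answer with exactly one label."
    )

def classify(requirement: str, labels: list[str], call_llm) -> str:
    reply = call_llm(build_prompt(requirement, labels)).strip().lower()
    # Guard against free-form output: fall back to the first label.
    return reply if reply in labels else labels[0]

# Usage with a stubbed model (swap in a real LLM client here):
fake_llm = lambda prompt: "security"
print(classify("All passwords must be hashed with bcrypt.", MULTI_LABELS, fake_llm))
```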
arXiv Detail & Related papers (2025-04-23T14:41:11Z)
- Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models [79.41139393080736]
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities. In-Context Learning (ICL) and Parameter-Efficient Fine-Tuning (PEFT) are currently two mainstream methods for adapting LLMs to downstream tasks.
We propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning.
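The summary does not reveal RTD's mechanism, so the following is explicitly not the paper's method. It is a generic sketch of one way to adapt decoding without any fine-tuning: biasing next-token logits toward tokens drawn from trusted reference examples.

```python
# Generic illustration of training-free, decoding-time adaptation
# (not RTD's actual algorithm, which the summary does not describe).
import torch

def bias_logits_toward_references(logits, reference_token_ids, boost=2.0):
    """Add a fixed boost to tokens that occur in the reference set."""
    biased = logits.clone()
    biased[:, list(reference_token_ids)] += boost
    return biased

if __name__ == "__main__":
    vocab_size = 10
    logits = torch.randn(1, vocab_size)  # stand-in for a model's output logits
    references = {2, 5, 7}               # token ids observed in reference examples
    print(bias_logits_toward_references(logits, references).argmax(dim=-1))
```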
arXiv Detail & Related papers (2024-09-30T10:48:20Z)
- ToffA-DSPL: an approach of trade-off analysis for designing dynamic software product lines [3.623080116477751]
We propose ToffA-DSPL, a design-time trade-off analysis approach for dynamic software product lines (DSPLs).
It addresses the configuration selection process by considering interactions between non-functional requirements (NFRs) and contexts.
In general, the configurations suggested by ToffA-DSPL provide high NFR satisfaction levels.
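As a simplified sketch of context-aware configuration selection in this spirit, the scoring below weights each configuration's NFR satisfaction by context-dependent weights; the weighting scheme and all values are illustrative assumptions, not the paper's analysis.

```python
# Sketch: pick the configuration whose NFR satisfaction best fits the context.
CONFIGS = {
    "low_power": {"performance": 0.4, "battery_life": 0.9},
    "high_perf": {"performance": 0.9, "battery_life": 0.3},
}
# The current context determines how much each NFR matters.
CONTEXT_WEIGHTS = {
    "charging":   {"performance": 0.8, "battery_life": 0.2},
    "on_battery": {"performance": 0.3, "battery_life": 0.7},
}

def select_configuration(context: str) -> str:
    weights = CONTEXT_WEIGHTS[context]
    score = lambda nfrs: sum(weights[k] * v for k, v in nfrs.items())
    return max(CONFIGS, key=lambda name: score(CONFIGS[name]))

print(select_configuration("charging"))    # favors high_perf
print(select_configuration("on_battery"))  # favors low_power
```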
arXiv Detail & Related papers (2024-07-01T18:51:32Z)
- Unlock the Correlation between Supervised Fine-Tuning and Reinforcement Learning in Training Code Large Language Models [12.656574142412484]
We attempt to understand the correlation between supervised fine-tuning (SFT) and reinforcement learning. We find that both atomic and synthetic functions are indispensable for SFT's generalization.
arXiv Detail & Related papers (2024-06-14T03:39:01Z)
- TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data [73.29220562541204]
We harness the power of large language models (LLMs) to solve this task.
We develop the TAT-LLM language model by fine-tuning LLaMA 2 on training data generated automatically from existing expert-annotated datasets.
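A minimal Hugging Face fine-tuning sketch along the lines this summary describes; the checkpoint name, hyperparameters, and the toy example are assumptions, and the LLaMA 2 weights are gated and require access approval.

```python
# Sketch: causal-LM fine-tuning on auto-generated supervision (illustrative only).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # gated checkpoint; access required
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Stand-in for instances generated from expert-annotated tabular/textual data.
examples = [{"text": "Table says revenue 2020 = $5M, 2021 = $7M. "
                     "Q: Growth? A: ($7M - $5M) / $5M = 40%"}]

def tokenize(example):
    out = tokenizer(example["text"], truncation=True, max_length=512)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the input
    return out

dataset = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tat-llm-sketch", num_train_epochs=1),
    train_dataset=dataset,
)
trainer.train()
```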
arXiv Detail & Related papers (2024-01-24T04:28:50Z)
- Natural Language Processing for Requirements Formalization: How to Derive New Approaches? [0.32885740436059047]
We present and discuss principal ideas and state-of-the-art methodologies from the field of NLP.
We discuss two different approaches in detail and highlight the iterative development of rule sets.
The presented methods are demonstrated on two industrial use cases from the automotive and railway domains.
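To illustrate what an iteratively developed rule set can look like, here is a tiny sketch with two hand-written patterns that map requirement sentences to a structured form; the rules and target notation are assumptions, not the paper's rule set.

```python
# Sketch: hand-written, iteratively refined rules for requirements formalization.
import re

RULES = [
    # "If <condition>, the system shall <action>."
    (re.compile(r"^If (?P<condition>.+?), the system shall (?P<action>.+)\.$"),
     "WHEN {condition} DO {action}"),
    # "The system shall <action>."
    (re.compile(r"^The system shall (?P<action>.+)\.$"),
     "DO {action}"),
]

def formalize(sentence: str) -> str | None:
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(**match.groupdict())
    return None  # no rule fired; the next iteration should extend the rule set

print(formalize("If the door is open, the system shall sound an alarm."))
print(formalize("The system shall log every braking event."))
```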
arXiv Detail & Related papers (2023-09-23T05:45:19Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata into continuous prompts.
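A rough PyTorch sketch of the prompt-production idea: a small learned module maps an instruction representation to continuous prompt vectors that are prepended to a frozen PLM's input embeddings. The MLP design and dimensions are assumptions; PRopS itself uses a production-system formulation not reproduced here.

```python
# Sketch: producing continuous prompts from an instruction representation.
import torch
import torch.nn as nn

class PromptProducer(nn.Module):
    def __init__(self, instr_dim=128, plm_dim=256, prompt_len=8):
        super().__init__()
        self.prompt_len, self.plm_dim = prompt_len, plm_dim
        self.net = nn.Sequential(
            nn.Linear(instr_dim, 512), nn.ReLU(),
            nn.Linear(512, prompt_len * plm_dim),
        )

    def forward(self, instruction_emb: torch.Tensor) -> torch.Tensor:
        out = self.net(instruction_emb)
        return out.view(-1, self.prompt_len, self.plm_dim)

producer = PromptProducer()
instruction = torch.randn(4, 128)        # encoded task instructions
input_embeds = torch.randn(4, 20, 256)   # frozen PLM token embeddings
prompts = producer(instruction)          # (4, 8, 256) continuous prompts
augmented = torch.cat([prompts, input_embeds], dim=1)  # prepend the prompts
print(augmented.shape)  # torch.Size([4, 28, 256])
```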
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study [76.52997424694767]
We present an in-depth empirical study of keyphrase extraction and keyphrase generation using pre-trained language models.
We show that PLMs have competitive high-resource performance and state-of-the-art low-resource performance.
Further results show that in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models.
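As a short sketch of seq2seq keyphrase generation with a PLM: the generic BART checkpoint below is only a stand-in and would, in the setting studied above, first be fine-tuned on keyphrase data so that it emits delimiter-separated phrases.

```python
# Sketch: seq2seq keyphrase generation with a (to-be-fine-tuned) PLM.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

document = ("Pre-trained language models achieve strong results on keyphrase "
            "extraction and generation across resource settings.")
inputs = tokenizer(document, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
# Fine-tuned keyphrase models typically emit "phrase1 ; phrase2 ; ...".
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```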
arXiv Detail & Related papers (2022-12-20T13:20:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or their information and is not responsible for any consequences of their use.