A conceptual framework for the generation of quality software requirements
- URL: http://arxiv.org/abs/2504.10654v1
- Date: Mon, 14 Apr 2025 19:12:18 GMT
- Title: Un marco conceptual para la generación de requerimientos de software de calidad (A conceptual framework for the generation of quality software requirements)
- Authors: Mauro José Pacchiotti, Mariel Ale and Luciana Ballejos
- Abstract summary: Large language models (LLMs) have emerged to enhance natural language processing tasks. This work aims to use these models to improve the quality of software requirements written in natural language.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Requirements expressed in natural language are an indispensable artifact in the software development process, as all stakeholders can understand them. However, their ambiguity poses a persistent challenge. To address this issue, organizations such as IEEE and INCOSE publish guidelines for writing requirements, offering rules that assist in this task. Agile methodologies, in turn, provide patterns and structures for expressing stakeholder needs in natural language, attempting to constrain the language to avoid ambiguity. Nevertheless, the knowledge gap among stakeholders regarding the requirements and the correct way to express them further complicates the specification task. In recent years, large language models (LLMs) have emerged that enhance natural language processing tasks. These are deep learning-based architectures built around attention mechanisms inspired by human attention. This work aims to apply the demonstrated capability of LLMs in this domain: the objective is to use these models to improve the quality of software requirements written in natural language, assisting analysts in requirements specification. The proposed framework, its architecture, key components, and their interactions are detailed. Furthermore, a conceptual test of the proposal is developed to assess its usefulness. Finally, the potential and limitations of the framework are discussed, along with future directions for its continued validation and refinement.
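To make the intended assistance concrete, the sketch below shows one way such an analyst-facing check could be wired up: a natural-language requirement is sent to an LLM together with a short checklist of IEEE/INCOSE-style writing rules, and the model returns the issues it detects plus a suggested rewrite. This is a minimal illustration only; the client library, model name, rule wording, and JSON response format are assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): ask an LLM to review one
# natural-language requirement against a few IEEE/INCOSE-style writing rules.
# The OpenAI client and model name are illustrative assumptions; the paper does
# not prescribe a specific provider or prompt format.
import json
from openai import OpenAI

RULES = [
    "Use active voice and the form 'The <system> shall <action>'.",
    "State exactly one requirement per sentence.",
    "Avoid vague terms such as 'fast', 'user-friendly', or 'as appropriate'.",
    "Make conditions and acceptance criteria explicit and measurable.",
]

PROMPT = """You are assisting a requirements analyst.
Review the requirement below against these writing rules:
{rules}

Requirement:
"{requirement}"

Reply with JSON only: {{"issues": ["..."], "suggested_rewrite": "..."}}"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_requirement(requirement: str, model: str = "gpt-4o-mini") -> dict:
    """Build the review prompt, query the model, and parse its JSON answer."""
    prompt = PROMPT.format(
        rules="\n".join(f"- {r}" for r in RULES),
        requirement=requirement,
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model honours the requested JSON format; production use would
    # need validation and a retry/repair step.
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    feedback = review_requirement("The system should be fast and easy to use.")
    print("Issues:", feedback["issues"])
    print("Suggested rewrite:", feedback["suggested_rewrite"])
```

In a fuller framework of the kind the abstract describes, a call like this would be only one component, with the analyst reviewing the detected issues and deciding whether to accept the model's suggested rewrite.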
Related papers
- IOLBENCH: Benchmarking LLMs on Linguistic Reasoning [8.20398036986024]
We introduce IOLBENCH, a novel benchmark derived from International Linguistics Olympiad (IOL) problems. This dataset encompasses diverse problems testing syntax, morphology, phonology, and semantics. We find that even the most advanced models struggle to handle the intricacies of linguistic complexity.
arXiv Detail & Related papers (2025-01-08T03:15:10Z)
- Improving Large Language Model (LLM) fidelity through context-aware grounding: A systematic approach to reliability and veracity [0.0]
Large Language Models (LLMs) are increasingly sophisticated and ubiquitous in natural language processing (NLP) applications.
This paper presents a novel framework for contextual grounding in textual models, with a particular emphasis on the Context Representation stage.
Our findings have significant implications for the deployment of LLMs in sensitive domains such as healthcare, legal systems, and social services.
arXiv Detail & Related papers (2024-08-07T18:12:02Z)
- A Review of Hybrid and Ensemble in Deep Learning for Natural Language Processing [0.5266869303483376]
The review systematically introduces each task and delineates key architectures, from Recurrent Neural Networks (RNNs) to Transformer-based models like BERT.
The adaptability of ensemble techniques is emphasized, highlighting their capacity to enhance various NLP applications.
Challenges in implementation, including computational overhead, overfitting, and model interpretation complexities, are addressed.
arXiv Detail & Related papers (2023-12-09T14:49:34Z)
- Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents [80.5213198675411]
Large language models (LLMs) have dramatically enhanced the field of language intelligence.
LLMs leverage chain-of-thought (CoT) reasoning techniques, which oblige them to formulate intermediate steps en route to deriving an answer.
Recent research endeavors have extended CoT reasoning methodologies to nurture the development of autonomous language agents.
arXiv Detail & Related papers (2023-11-20T14:30:55Z)
- Natural Language Processing for Requirements Formalization: How to Derive New Approaches? [0.32885740436059047]
We present and discuss principal ideas and state-of-the-art methodologies from the field of NLP.
We discuss two different approaches in detail and highlight the iterative development of rule sets.
The presented methods are demonstrated on two industrial use cases from the automotive and railway domains.
arXiv Detail & Related papers (2023-09-23T05:45:19Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Technical Report on Neural Language Models and Few-Shot Learning for Systematic Requirements Processing in MDSE [1.6286277560322266]
This paper is based on the analysis of an open-source set of automotive requirements.
We derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality.
arXiv Detail & Related papers (2022-11-16T18:06:25Z)
- Prompting Language Models for Linguistic Structure [73.11488464916668]
We present a structured prompting approach for linguistic structured prediction tasks.
We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking.
We find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels.
arXiv Detail & Related papers (2022-11-15T01:13:39Z)
- ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning [97.10875695679499]
We propose a novel contrastive learning framework named ERICA, applied in the pre-training phase to obtain a deeper understanding of the entities and their relations in text.
Experimental results demonstrate that our proposed ERICA framework achieves consistent improvements on several document-level language understanding tasks.
arXiv Detail & Related papers (2020-12-30T03:35:22Z)
- Lexically-constrained Text Generation through Commonsense Knowledge Extraction and Injection [62.071938098215085]
We focus on the Commongen benchmark, wherein the aim is to generate a plausible sentence for a given set of input concepts.
We propose strategies for enhancing the semantic correctness of the generated text.
arXiv Detail & Related papers (2020-12-19T23:23:40Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to meet this need.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
arXiv Detail & Related papers (2020-04-28T07:24:43Z)