Natural Language Processing for Requirements Formalization: How to Derive New Approaches?
- URL: http://arxiv.org/abs/2309.13272v1
- Date: Sat, 23 Sep 2023 05:45:19 GMT
- Title: Natural Language Processing for Requirements Formalization: How to Derive New Approaches?
- Authors: Viju Sudhi, Libin Kutty, and Robin Gröpler
- Abstract summary: We present and discuss principal ideas and state-of-the-art methodologies from the field of NLP.
We discuss two different approaches in detail and highlight the iterative development of rule sets.
The presented methods are demonstrated on two industrial use cases from the automotive and railway domains.
- Score: 0.32885740436059047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is a long-standing desire of industry and research to automate the
software development and testing process as much as possible. In this process,
requirements engineering (RE) plays a fundamental role for all other steps that
build on it. Model-based design and testing methods have been developed to
handle the growing complexity and variability of software systems. However,
major effort is still required to create specification models from a large set
of functional requirements provided in natural language. Numerous approaches
based on natural language processing (NLP) have been proposed in the literature
to generate requirements models using mainly syntactic properties. Recent
advances in NLP show that semantic quantities can also be identified and used
to provide better assistance in the requirements formalization process. In this
work, we present and discuss principal ideas and state-of-the-art methodologies
from the field of NLP in order to guide the readers on how to create a set of
rules and methods for the semi-automated formalization of requirements
according to their specific use case and needs. We discuss two different
approaches in detail and highlight the iterative development of rule sets. The
requirements models are represented in a human- and machine-readable format in
the form of pseudocode. The presented methods are demonstrated on two
industrial use cases from the automotive and railway domains. The results
show that using current pre-trained NLP models requires less effort to
create a set of rules, and that these rules can easily be adapted to
specific use cases and domains. In addition,
findings and shortcomings of this research area are highlighted and an outlook
on possible future developments is given.
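As an illustration of the kind of rule set the abstract describes, the sketch below derives a pseudocode requirement model from a textual requirement using a pre-trained dependency parser. This is not the authors' implementation: the choice of spaCy, the en_core_web_sm model, the single condition/action rule, and the example requirement are all assumptions made for demonstration.

```python
# Minimal sketch: one syntactic rule over a pre-trained dependency parse.
# An adverbial clause (dep_ == "advcl") introduced by "if"/"when" is taken
# as the condition; the remaining main clause becomes the action.
import spacy

nlp = spacy.load("en_core_web_sm")  # small pre-trained English pipeline

def formalize(requirement: str) -> str:
    doc = nlp(requirement)
    cond_tokens = set()
    for token in doc:
        if token.dep_ == "advcl":       # first adverbial clause found
            cond_tokens = set(token.subtree)
            break
    # Condition text without the subordinating marker ("if"/"when").
    condition = " ".join(t.text for t in doc
                         if t in cond_tokens and t.dep_ != "mark")
    # Action text: everything else, minus punctuation and the modal "shall".
    action = " ".join(t.text for t in doc
                      if t not in cond_tokens
                      and not t.is_punct and t.lemma_ != "shall")
    return f"if ({condition}):\n    {action}" if cond_tokens else action

print(formalize("If the brake pedal is pressed, "
                "the system shall activate the brake lights."))
# Approximate output:
# if (the brake pedal is pressed):
#     the system activate the brake lights
```

In the iterative workflow the abstract refers to, a rule like this would be refined over several passes against real requirements, with the generated pseudocode reviewed by engineers after each iteration.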
Related papers
- Reference Trustable Decoding: A Training-Free Augmentation Paradigm for Large Language Models [79.41139393080736]
Large language models (LLMs) have rapidly advanced and demonstrated impressive capabilities.
We propose Reference Trustable Decoding (RTD), a paradigm that allows models to quickly adapt to new tasks without fine-tuning.
arXiv Detail & Related papers (2024-09-30T10:48:20Z)
- Towards Generalist Prompting for Large Language Models by Mental Models [105.03747314550591]
Large language models (LLMs) have demonstrated impressive performance on many tasks.
To achieve optimal performance, specially designed prompting methods are still needed.
We introduce the concept of generalist prompting, which operates on the design principle of achieving optimal or near-optimal performance.
arXiv Detail & Related papers (2024-02-28T11:29:09Z)
- Practical Guidelines for the Selection and Evaluation of Natural Language Processing Techniques in Requirements Engineering [8.779031107963942]
Natural language (NL) is now a cornerstone of requirements automation.
With so many different NLP solution strategies available, it can be challenging to choose the right strategy for a specific RE task.
In particular, we discuss how to choose among different strategies such as traditional NLP, feature-based machine learning, and language-model-based methods.
arXiv Detail & Related papers (2024-01-03T02:24:35Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses drawn from a wide range of real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation.
arXiv Detail & Related papers (2023-10-05T09:09:44Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- Requirement Formalisation using Natural Language Processing and Machine Learning: A Systematic Review [11.292853646607888]
We conducted a systematic literature review to outline the current state of the art of NLP and ML techniques in Requirements Engineering.
We found that NLP approaches are the most commonly used techniques for automatic RF, operating primarily on structured and semi-structured data.
The study also revealed that deep learning (DL) techniques are not widely used; instead, classical ML techniques predominate in the surveyed studies (a minimal sketch of such a classical pipeline is given after this list).
arXiv Detail & Related papers (2023-03-18T17:36:21Z)
- Foundation Models for Natural Language Processing -- Pre-trained Language Models Integrating Media [0.0]
Foundation Models are pre-trained language models for Natural Language Processing.
They can be applied to a wide range of different media and problem domains, ranging from image and video processing to robot control learning.
This book provides a comprehensive overview of the state of the art in research and applications of Foundation Models.
arXiv Detail & Related papers (2023-02-16T20:42:04Z)
- Technical Report on Neural Language Models and Few-Shot Learning for Systematic Requirements Processing in MDSE [1.6286277560322266]
This paper is based on the analysis of an open-source set of automotive requirements.
We derive domain-specific language constructs that help avoid ambiguities in requirements and increase the level of formality.
arXiv Detail & Related papers (2022-11-16T18:06:25Z)
- The Use of NLP-Based Text Representation Techniques to Support Requirement Engineering Tasks: A Systematic Mapping Review [1.5469452301122177]
The research direction has changed from the use of lexical and syntactic features to the use of advanced embedding techniques.
We identify four gaps in the existing literature, why they matter, and how future research can begin to address them.
arXiv Detail & Related papers (2022-05-17T02:47:26Z)
- An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks [112.1942546460814]
We report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).
Experiment results show that the prompt tuning technique achieves competitive performance in speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models.
arXiv Detail & Related papers (2022-03-31T03:26:55Z)
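As a concrete illustration of the classical, feature-based ML techniques that the systematic review above ("Requirement Formalisation using Natural Language Processing and Machine Learning") reports as predominant, here is a minimal sketch of a requirement classifier. The task framing (functional vs. non-functional classification), the scikit-learn pipeline, and the toy training data are assumptions made for demonstration, not taken from any of the listed papers.

```python
# Illustrative classical ML baseline: TF-IDF features plus a linear
# classifier, applied to functional vs. non-functional requirement
# classification. The four training sentences are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

requirements = [
    "The system shall activate the brake lights within 100 ms.",
    "The controller shall open the doors when the train has stopped.",
    "The user interface should be easy to learn.",
    "The system shall be available 99.9 percent of the time.",
]
labels = ["functional", "functional", "non-functional", "non-functional"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(requirements, labels)

print(model.predict(
    ["The system shall close the valve when the pressure exceeds 5 bar."]))
```

TF-IDF n-grams with a linear model remain a cheap, strong baseline for such tasks, which is consistent with the review's finding that classical techniques still dominate over deep learning in this area.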
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences arising from its use.