Technical Report on Neural Language Models and Few-Shot Learning for
Systematic Requirements Processing in MDSE
- URL: http://arxiv.org/abs/2211.09084v1
- Date: Wed, 16 Nov 2022 18:06:25 GMT
- Title: Technical Report on Neural Language Models and Few-Shot Learning for
Systematic Requirements Processing in MDSE
- Authors: Vincent Bertram, Miriam Boß, Evgeny Kusmenko, Imke Helene
Nachmann, Bernhard Rumpe, Danilo Trotta, Louis Wachtmeister
- Abstract summary: This paper is based on the analysis of an open-source set of automotive requirements.
We derive domain-specific language constructs helping us to avoid ambiguities in requirements and increase the level of formality.
- Score: 1.6286277560322266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Systems engineering, in particular in the automotive domain, needs to cope
with the massively increasing numbers of requirements that arise during the
development process. To guarantee a high product quality and make sure that
functional safety standards such as ISO 26262 are fulfilled, the exploitation of
potentials of model-driven systems engineering in the form of automatic
analyses, consistency checks, and tracing mechanisms is indispensable. However,
the language in which requirements are written, and the tools needed to operate
on them, are highly individual and require domain-specific tailoring. This
hinders automated processing of requirements as well as the linking of
requirements to models. Introducing formal requirement notations in existing
projects leads, on the one hand, to the challenge of translating masses of
requirements and of changing established processes and, on the other hand, to
the necessity of corresponding training for the requirements engineers.
In this paper, based on the analysis of an open-source set of automotive
requirements, we derive domain-specific language constructs helping us to avoid
ambiguities in requirements and increase the level of formality. The main
contribution is the adoption and evaluation of few-shot learning with large
pretrained language models for the automated translation of informal
requirements to structured languages such as a requirement DSL. We show that
support sets of fewer than ten translation examples can suffice to few-shot
train a language model so that it incorporates the DSL keywords and applies the
syntactic rules to informal natural language requirements.
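As a rough sketch of the approach summarized above: a small support set of informal-to-structured translation pairs is packed into a single prompt, and a large pretrained language model completes the translation for an unseen requirement. The DSL keywords (WHEN/THEN/WITHIN), the support examples, and the stand-in model below are illustrative assumptions, not the paper's actual grammar, data, or model.

```python
# Minimal few-shot translation sketch, assuming a Hugging Face causal LM backbone.
from transformers import pipeline

# Support set: fewer than ten informal -> structured translation pairs (hypothetical DSL).
SUPPORT_SET = [
    ("The system shall switch off the headlights when the ignition is turned off.",
     "WHEN ignition == OFF THEN headlights := OFF"),
    ("If the driver presses the brake pedal, the brake lights must be activated within 50 ms.",
     "WHEN brake_pedal == PRESSED THEN brake_lights := ON WITHIN 50 ms"),
]

def build_prompt(informal: str) -> str:
    """Concatenate the support examples and the new requirement into one prompt."""
    parts = [f"Requirement: {nl}\nDSL: {dsl}\n" for nl, dsl in SUPPORT_SET]
    parts.append(f"Requirement: {informal}\nDSL:")
    return "\n".join(parts)

# Any sufficiently large pretrained model can be plugged in here; "gpt2" is only a placeholder.
generator = pipeline("text-generation", model="gpt2")

def translate(informal: str) -> str:
    prompt = build_prompt(informal)
    completion = generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"]
    # The pipeline echoes the prompt; keep only the first line of the continuation.
    continuation = completion[len(prompt):].strip()
    return continuation.splitlines()[0] if continuation else ""

print(translate("The wipers shall start automatically when rain is detected."))
```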
Related papers
- Prompting Video-Language Foundation Models with Domain-specific Fine-grained Heuristics for Video Question Answering [71.62961521518731]
HeurVidQA is a framework that leverages domain-specific entity-actions to refine pre-trained video-language foundation models.
Our approach treats these models as implicit knowledge engines, employing domain-specific entity-action prompters to direct the model's focus toward precise cues that enhance reasoning.
arXiv Detail & Related papers (2024-10-12T06:22:23Z) - Prompting Encoder Models for Zero-Shot Classification: A Cross-Domain Study in Italian [75.94354349994576]
This paper explores the feasibility of employing smaller, domain-specific encoder LMs alongside prompting techniques to enhance performance in specialized contexts.
Our study concentrates on the Italian bureaucratic and legal language, experimenting with both general-purpose and further pre-trained encoder-only models.
The results indicate that while further pre-trained models may show diminished robustness in general knowledge, they exhibit superior adaptability for domain-specific tasks, even in a zero-shot setting.
arXiv Detail & Related papers (2024-07-30T08:50:16Z) - Engineering Safety Requirements for Autonomous Driving with Large Language Models [0.6699222582814232]
Large Language Models (LLMs) can play a key role in automatically refining and decomposing requirements after each update.
This study proposes a prototype of a pipeline of prompts and LLMs that receives an item definition and outputs solutions in the form of safety requirements.
arXiv Detail & Related papers (2024-03-24T20:40:51Z) - DIALIGHT: Lightweight Multilingual Development and Evaluation of
Task-Oriented Dialogue Systems with Large Language Models [76.79929883963275]
DIALIGHT is a toolkit for developing and evaluating multilingual Task-Oriented Dialogue (ToD) systems.
It features a secure, user-friendly web interface for fine-grained human evaluation at both local utterance level and global dialogue level.
Our evaluations reveal that while PLM fine-tuning leads to higher accuracy and coherence, LLM-based systems excel in producing diverse and likeable responses.
arXiv Detail & Related papers (2024-01-04T11:27:48Z) - Validation of Rigorous Requirements Specifications and Document
Automation with the ITLingo RSL Language [0.0]
The ITLingo initiative has introduced a requirements specification language named RSL to enhance the rigor and consistency of technical documentation.
This paper reviews existing research and tools in the fields of requirements validation and document automation.
We propose to extend RSL with validation of specifications based on customized checks, and on linguistic rules dynamically defined in the RSL itself.
arXiv Detail & Related papers (2023-12-17T21:39:26Z) - Language Models as a Service: Overview of a New Paradigm and its
Challenges [47.75762014254756]
Some of the most powerful language models currently available are proprietary systems, accessible only via (typically restrictive) web or programming interfaces.
This paper has two goals: on the one hand, we delineate how the aforementioned challenges act as impediments to the accessibility, replicability, reliability, and trustworthiness of LM interfaces.
On the other hand, it serves as a comprehensive resource on existing knowledge about current, major LMs, offering a synthesized overview of the licences and capabilities their interfaces offer.
arXiv Detail & Related papers (2023-09-28T16:29:52Z) - Natural Language Processing for Requirements Formalization: How to
Derive New Approaches? [0.32885740436059047]
We present and discuss principal ideas and state-of-the-art methodologies from the field of NLP.
We discuss two different approaches in detail and highlight the iterative development of rule sets.
The presented methods are demonstrated on two industrial use cases from the automotive and railway domains.
arXiv Detail & Related papers (2023-09-23T05:45:19Z) - On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z) - nl2spec: Interactively Translating Unstructured Natural Language to
Temporal Logics with Large Language Models [3.1143846686797314]
We present nl2spec, a framework for applying Large Language Models (LLMs) to derive formal specifications from unstructured natural language.
We introduce a new methodology to detect and resolve the inherent ambiguity of system requirements in natural language.
Users iteratively add, delete, and edit these sub-translations to amend erroneous formalizations, which is easier than manually redrafting the entire formalization.
arXiv Detail & Related papers (2023-03-08T20:08:53Z) - Natural Language Processing for Systems Engineering: Automatic
Generation of Systems Modelling Language Diagrams [0.10312968200748115]
An approach is proposed to assist systems engineers in the automatic generation of systems diagrams from unstructured natural language text.
The intention is to provide users with a more standardised, comprehensive and automated starting point from which they can subsequently refine and adapt the diagrams according to their needs.
arXiv Detail & Related papers (2022-08-09T19:20:33Z) - Quality Assurance of Generative Dialog Models in an Evolving
Conversational Agent Used for Swedish Language Practice [59.705062519344]
One proposed solution involves AI-enabled conversational agents for person-centered interactive language practice.
We present results from ongoing action research targeting quality assurance of proprietary generative dialog models trained for virtual job interviews.
arXiv Detail & Related papers (2022-03-29T10:25:13Z)