UVL Sentinel: a tool for parsing and syntactic correction of UVL datasets
- URL: http://arxiv.org/abs/2403.18482v1
- Date: Wed, 27 Mar 2024 11:56:08 GMT
- Title: UVL Sentinel: a tool for parsing and syntactic correction of UVL datasets
- Authors: David Romero-Organvidez, Jose A. Galindo, David Benavides
- Abstract summary: Feature models have become a de facto standard for representing variability in software product lines.
UVL (Universal Variability Language) is a language that expresses features, their dependencies, and the constraints between them.
UVL Sentinel analyzes a dataset of feature models in UVL format, generating error analysis reports that describe the errors found and, optionally, applying syntactic processing with the most common fixes.
- Score: 1.1821195547818244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Feature models have become a de facto standard for representing variability in software product lines. UVL (Universal Variability Language) is a language that expresses features, their dependencies, and the constraints between them. The language is written in plain text and follows a syntactic structure that must be processed by a parser, a piece of software with specific syntactic rules that models must comply with to be processed correctly. Researchers maintain datasets with numerous feature models, and the textual form of these models is tied to a particular version of the parser. When the parser is updated to support new features or correct previous ones, existing feature models often become incompatible, introducing inconsistencies within the dataset. In this paper, we present UVL Sentinel. This tool analyzes a dataset of feature models in UVL format, generating error analysis reports that describe the errors found and, optionally, applying syntactic processing with the most common fixes. The tool detects incompatibilities in the feature models of a dataset when the parser is updated and attempts to correct the most common syntactic errors, facilitating the management of the dataset and the adaptation of its models to the new parser version. Our tool was evaluated on a dataset of 1,479 UVL models from different sources and helped semi-automatically fix 185 warnings and syntax errors.
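The workflow described in the abstract (parse each model in a dataset, report syntactic problems per file, and apply the most common fixes) can be illustrated with a minimal sketch. This is not UVL Sentinel's actual implementation: the example model, the two checks shown (tab indentation and a hypothetical legacy `requires` constraint keyword), the directory name, and the report format are assumptions made only for illustration.

```python
# Minimal sketch, assuming a dataset of plain-text .uvl files; not the actual
# UVL Sentinel code or rule set.
from pathlib import Path

# Small UVL model used as a self-contained demo input (a 'features' tree
# followed by a 'constraints' block).
EXAMPLE_UVL = """features
    Editor
        mandatory
            Parser
        optional
            AutoFix
constraints
    AutoFix => Parser
"""


def check_uvl_text(text: str) -> list[str]:
    """Return human-readable warnings for one UVL model."""
    warnings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if "\t" in line:
            warnings.append(f"line {lineno}: tab indentation (expected spaces)")
        if " requires " in line:  # hypothetical legacy operator for '=>'
            warnings.append(f"line {lineno}: legacy 'requires' constraint")
    return warnings


def report(dataset_dir: str) -> dict[str, list[str]]:
    """Map each .uvl file under dataset_dir to its list of warnings."""
    return {
        str(path): check_uvl_text(path.read_text(encoding="utf-8"))
        for path in sorted(Path(dataset_dir).rglob("*.uvl"))
    }


if __name__ == "__main__":
    print(check_uvl_text(EXAMPLE_UVL))                         # clean model: []
    print(check_uvl_text(EXAMPLE_UVL.replace("    ", "\t")))   # flags tab-indented lines
```

A real pipeline in this spirit would feed each model through the target parser version, aggregate the resulting error reports across the dataset, and rewrite only the files whose issues match a known fix.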
Related papers
- Strategies for Improving NL-to-FOL Translation with LLMs: Data Generation, Incremental Fine-Tuning, and Verification [9.36179617282876]
We create a high-quality FOL-annotated subset of the ProofWriter dataset using GPT-4o.
Our results show state-of-the-art performance on the ProofWriter and ProntoQA datasets using ProofFOL on LLaMA-2 and Mistral models.
arXiv Detail & Related papers (2024-09-24T21:24:07Z) - Detecting Errors through Ensembling Prompts (DEEP): An End-to-End LLM Framework for Detecting Factual Errors [11.07539342949602]
We propose an end-to-end framework for detecting factual errors in text summarization.
Our framework uses a diverse set of LLM prompts to identify factual inconsistencies.
We calibrate the ensembled models to produce empirically accurate probabilities that a text is factually consistent or free of hallucination.
arXiv Detail & Related papers (2024-06-18T18:59:37Z) - SUT: Active Defects Probing for Transcompiler Models [24.01532199512389]
We introduce new metrics for programming language translation; these metrics address basic syntax errors.
Experiments have shown that even powerful models like ChatGPT still make mistakes on these basic unit tests.
arXiv Detail & Related papers (2023-10-22T07:16:02Z) - Towards Fine-Grained Information: Identifying the Type and Location of Translation Errors [80.22825549235556]
Existing approaches cannot simultaneously consider the error position and type.
We build an FG-TED model to predict addition and omission errors.
Experiments show that our model can identify both error type and position concurrently, and gives state-of-the-art results.
arXiv Detail & Related papers (2023-02-17T16:20:33Z) - BenchCLAMP: A Benchmark for Evaluating Language Models on Syntactic and Semantic Parsing [55.058258437125524]
We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing.
We benchmark eight language models, including two GPT-3 variants available only through an API.
Our experiments show that encoder-decoder pretrained language models can achieve similar performance or surpass state-of-the-art methods for syntactic and semantic parsing when the model output is constrained to be valid.
arXiv Detail & Related papers (2022-06-21T18:34:11Z) - ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models [102.63817106363597]
We build ELEVATER, the first benchmark to compare and evaluate pre-trained language-augmented visual models.
It consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge.
We will release our toolkit and evaluation platforms for the research community.
arXiv Detail & Related papers (2022-04-19T10:23:42Z) - Zero-Shot Cross-lingual Semantic Parsing [56.95036511882921]
We study cross-lingual semantic parsing as a zero-shot problem without parallel data for 7 test languages.
We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only English-Logical form paired data.
Our system frames zero-shot parsing as a latent-space alignment problem and finds that pre-trained models can be improved to generate logical forms with minimal cross-lingual transfer penalty.
arXiv Detail & Related papers (2021-04-15T16:08:43Z) - NL-EDIT: Correcting semantic parse errors through natural language interaction [28.333860779302306]
We present NL-EDIT, a model for interpreting natural language feedback in the interaction context.
We show that NL-EDIT can boost the accuracy of existing text-to-SQL parsers by up to 20% with only one turn of correction.
arXiv Detail & Related papers (2021-03-26T15:45:46Z) - Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle [88.65264818967489]
We propose a new syntax-aware language model: Syntactic Ordered Memory (SOM).
The model explicitly models the structure with an incremental parser and maintains the conditional probability setting of a standard language model.
Experiments show that SOM can achieve strong results in language modeling, incremental parsing and syntactic generalization tests.
arXiv Detail & Related papers (2020-10-21T17:39:15Z) - Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.