DirectDebug: Automated Testing and Debugging of Feature Models
- URL: http://arxiv.org/abs/2102.05949v1
- Date: Thu, 11 Feb 2021 11:22:20 GMT
- Title: DirectDebug: Automated Testing and Debugging of Feature Models
- Authors: Viet-Man Le and Alexander Felfernig and Mathias Uta and David
Benavides and José Galindo and Thi Ngoc Trang Tran
- Abstract summary: Variability models (e.g., feature models) are a common way to represent the variabilities and commonalities of software artifacts.
Complex and often large-scale feature models can become faulty, i.e., they do not represent the expected variability properties of the underlying software artifact.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Variability models (e.g., feature models) are a common way to
represent the variabilities and commonalities of software artifacts. Such
models can be translated into a logical representation, which enables
different operations for quality assurance and other types of model property
analysis. Specifically, complex and often large-scale feature models can
become faulty, i.e., they do not represent the expected variability
properties of the underlying software artifact. In this paper, we introduce
DirectDebug, a direct diagnosis approach to the automated testing and
debugging of variability models. The algorithm supports software engineers by
automatically identifying the faulty constraints responsible for the
unintended behavior of a variability model. This approach can significantly
decrease the development and maintenance effort for such models.
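The abstract describes direct diagnosis only at a high level. The sketch below is a minimal, illustrative take on constraint-level fault localization, loosely modeled on FastDiag-style divide-and-conquer rather than the authors' actual DirectDebug implementation; the feature names, constraints, test case, and the brute-force consistency check are invented for the example (a real tool would hand the encoded feature model to a SAT/CSP solver).

```python
# Illustrative FastDiag-style direct diagnosis over a toy feature model.
# All identifiers are hypothetical; this is not the DirectDebug code.
from itertools import product

FEATURES = ["root", "gui", "cli"]

def satisfiable(constraints, test_case):
    """Does at least one complete configuration satisfy the constraints and the test case?"""
    for values in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, values))
        if test_case(cfg) and all(c(cfg) for c in constraints):
            return True
    return False

def consistent(constraints, test_cases):
    """Every positive test case must remain reproducible under the constraint set."""
    return all(satisfiable(constraints, t) for t in test_cases)

def diagnose(candidates, constraints, test_cases):
    """Minimal set of candidate constraints whose removal makes all test cases pass."""
    rest = [c for c in constraints if c not in candidates]
    if not candidates or not consistent(rest, test_cases):
        return []  # nothing to diagnose, or the fault lies outside the candidate set
    return _fd(False, list(candidates), list(constraints), test_cases)

def _fd(removed, candidates, current, test_cases):
    if removed and consistent(current, test_cases):
        return []
    if len(candidates) == 1:
        return candidates
    k = len(candidates) // 2
    left, right = candidates[:k], candidates[k:]
    d1 = _fd(True, right, [c for c in current if c not in left], test_cases)
    d2 = _fd(bool(d1), left, [c for c in current if c not in d1], test_cases)
    return d1 + d2

# Toy faulty model: the modeler wrote "gui requires cli" instead of "gui excludes cli".
constraints = [
    lambda cfg: cfg["root"],                    # c1: root is mandatory
    lambda cfg: not cfg["gui"] or cfg["root"],  # c2: gui implies root
    lambda cfg: not cfg["gui"] or cfg["cli"],   # c3: faulty constraint
]
tests = [lambda cfg: cfg["gui"] and not cfg["cli"]]  # expert expects gui without cli

faulty = diagnose(constraints, constraints, tests)
print([constraints.index(c) for c in faulty])  # -> [2], pointing at c3
```

The actual algorithm additionally organizes test cases and consistency checks to keep the number of solver calls low; the sketch only conveys the divide-and-conquer narrowing of suspicious constraints.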
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Complementary Learning for Real-World Model Failure Detection [15.779651238128562]
We introduce complementary learning, where we use learned characteristics from different training paradigms to detect model errors.
We demonstrate our approach by learning semantic and predictive motion labels in point clouds in a supervised and self-supervised manner.
We perform a large-scale qualitative analysis and present LidarCODA, the first dataset with labeled anomalies in lidar point clouds.
arXiv Detail & Related papers (2024-07-19T13:36:35Z)
- Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking [53.66999416757543]
We study how fine-tuning affects the internal mechanisms implemented in language models.
Fine-tuning enhances, rather than alters, the mechanistic operation of the model.
arXiv Detail & Related papers (2024-02-22T18:59:24Z)
- Monitoring Machine Learning Models: Online Detection of Relevant Deviations [0.0]
Machine learning models can degrade over time due to changes in data distribution or other factors.
We propose a sequential monitoring scheme to detect relevant changes.
Our research contributes a practical solution for distinguishing between minor fluctuations and meaningful degradations.
arXiv Detail & Related papers (2023-09-26T18:46:37Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate that no single model works best in all cases.
By choosing an appropriate bias model, we can obtain a better robustness result than baselines with a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- Indeterminacy in Latent Variable Models: Characterization and Strong Identifiability [3.959606869996233]
We construct a theoretical framework for analyzing the indeterminacies of latent variable models.
We then investigate how we might specify strongly identifiable latent variable models.
arXiv Detail & Related papers (2022-06-02T00:01:27Z)
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
- Sufficiently Accurate Model Learning for Planning [119.80502738709937]
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
- Anomaly detection in Context-aware Feature Models [1.0660480034605242]
We formalize the anomaly analysis in Context-aware Feature Models.
We show how QBF solvers can be used to detect anomalies without relying on iterative calls to a SAT solver.
arXiv Detail & Related papers (2020-07-28T08:59:14Z)
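For context on the last entry: anomaly analyses such as dead-feature detection are classically run as one satisfiability check per feature, which is the iterative-SAT baseline that the QBF formulation avoids. The toy sketch below (invented feature names and constraints, brute-forced satisfiability instead of a real solver call) only illustrates that baseline, not the paper's QBF encoding.

```python
# Hypothetical baseline: detect dead features with one satisfiability check per feature.
from itertools import product

FEATURES = ["root", "ssl", "telnet"]
CONSTRAINTS = [
    lambda cfg: cfg["root"],                    # root is mandatory
    lambda cfg: not cfg["ssl"] or cfg["root"],  # ssl implies root
    lambda cfg: not cfg["telnet"],              # telnet can never be selected -> dead
]

def satisfiable(requirement):
    """Brute-force stand-in for a SAT call on (feature model AND requirement)."""
    for values in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, values))
        if requirement(cfg) and all(c(cfg) for c in CONSTRAINTS):
            return True
    return False

# One "solver call" per feature: a feature is dead if it appears in no valid configuration.
dead_features = [f for f in FEATURES if not satisfiable(lambda cfg, f=f: cfg[f])]
print(dead_features)  # -> ['telnet']
```

The cited paper avoids this per-feature loop by encoding the analysis as a quantified Boolean formula handed to a QBF solver.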
This list is automatically generated from the titles and abstracts of the papers on this site.