Traceability and Reuse Mechanisms, the most important Properties of
Model Transformation Languages
- URL: http://arxiv.org/abs/2305.06764v1
- Date: Thu, 11 May 2023 12:35:03 GMT
- Title: Traceability and Reuse Mechanisms, the most important Properties of
Model Transformation Languages
- Authors: Stefan Höppner, Matthias Tichy
- Abstract summary: We aim to quantitatively assess the interview results to confirm or reject the effects posed by different factors.
Results show that the Tracing and Reuse Mechanisms are most important overall.
- Score: 1.4685355149711299
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dedicated model transformation languages (MTLs) are claimed to provide many benefits
over the use of general-purpose languages for developing model transformations.
However, the actual advantages associated with the use of MTLs are poorly
understood empirically. There is little knowledge and empirical assessment
about what advantages and disadvantages hold and where they originate from. In
a prior interview study, we elicited expert opinions on which advantages result
from which factors, as well as a number of factors that moderate these influences. We aim
to quantitatively assess the interview results to confirm or reject the effects
posed by different factors. We intend to gain insights into how valuable
different factors are so that future studies can draw on these data for
designing targeted and relevant studies. We gather data on the factors and
quality attributes using an online survey. To analyse the data, we use
universal structure modelling (USM) based on a structure model. We use significance
values and path coefficients produced by USM for each hypothesised
interdependence to confirm or reject correlation and to weigh the strength of
influence present. We analyzed 113 responses. The results show that the Tracing
and Reuse Mechanisms are most important overall, though the observed effects
were generally 10 times lower than anticipated. Additionally, we found that a
more nuanced view of moderation effects is warranted. Their moderating
influence differed significantly between the different influences, with the
strongest effects being 1000 times higher than the weakest. The empirical
assessment of MTLs is a complex topic that cannot be solved by looking at a
single stand-alone factor. Our results provide a clear indication that evaluation
should consider transformations of different sizes and use-cases. Language
development should focus on providing transformation-specific reuse mechanisms.
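To make the analysis step more concrete, the sketch below shows one way standardized path coefficients for a hypothesised structure model can be estimated from survey responses, with a bootstrap-style stability check standing in for the significance values mentioned in the abstract. This is a minimal illustration only, not the USM implementation used in the study; the construct names, path structure, synthetic data, and stability check are all assumptions.

```python
# Minimal sketch (assumption-laden): standardized path coefficients and a crude
# significance indicator for a hypothesised structure model. NOT the USM
# implementation used in the study; constructs, paths, and data are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 113  # number of survey responses analysed in the paper

# Assumed construct scores (e.g., averaged Likert items per respondent).
data = {
    "tracing": rng.normal(size=n),
    "reuse_mechanisms": rng.normal(size=n),
    "comprehensibility": rng.normal(size=n),
}
# Hypothesised paths: outcome construct -> list of predictor constructs.
paths = {"comprehensibility": ["tracing", "reuse_mechanisms"]}

def standardize(x):
    return (x - x.mean()) / x.std(ddof=1)

def path_coefficients(data, paths):
    """Standardized regression weight for every hypothesised path."""
    coeffs = {}
    for outcome, predictors in paths.items():
        X = np.column_stack([standardize(data[p]) for p in predictors])
        y = standardize(data[outcome])
        fit = LinearRegression().fit(X, y)
        for predictor, beta in zip(predictors, fit.coef_):
            coeffs[(predictor, outcome)] = beta
    return coeffs

def sign_stability(data, paths, n_boot=1000):
    """Share of bootstrap resamples in which each path keeps its sign
    (a crude stand-in for per-path significance values)."""
    observed = path_coefficients(data, paths)
    stable = {k: 0 for k in observed}
    idx = np.arange(n)
    for _ in range(n_boot):
        sample = rng.choice(idx, size=n, replace=True)
        boot = path_coefficients({k: v[sample] for k, v in data.items()}, paths)
        for k in stable:
            stable[k] += np.sign(boot[k]) == np.sign(observed[k])
    return {k: stable[k] / n_boot for k in stable}

print(path_coefficients(data, paths))
print(sign_stability(data, paths))
```

In this kind of setup, the path coefficients weigh the strength of each hypothesised influence, while the stability check indicates how much confidence the sample size allows for each path.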
Related papers
- Do Influence Functions Work on Large Language Models? [10.463762448166714]
Influence functions aim to quantify the impact of individual training data points on a model's predictions.
We evaluate influence functions across multiple tasks and find that they consistently perform poorly in most settings.
arXiv Detail & Related papers (2024-09-30T06:50:18Z)
- "Why" Has the Least Side Effect on Model Editing [25.67779910446609]
This paper delves into a critical factor-question type-by categorizing model editing questions.
Our findings reveal that the extent of performance degradation varies significantly across different question types.
We also examine the impact of batch size on side effects, discovering that increasing the batch size can mitigate performance drops.
arXiv Detail & Related papers (2024-09-27T12:05:12Z)
- Discovery of the Hidden World with Large Language Models [95.58823685009727]
This paper presents Causal representatiOn AssistanT (COAT) that introduces large language models (LLMs) to bridge the gap.
LLMs are trained on massive observations of the world and have demonstrated great capability in extracting key information from unstructured data.
COAT also adopts CDs to find causal relations among the identified variables as well as to provide feedback to LLMs to iteratively refine the proposed factors.
arXiv Detail & Related papers (2024-02-06T12:18:54Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- A Causal Framework for Decomposing Spurious Variations [68.12191782657437]
We develop tools for decomposing spurious variations in Markovian and Semi-Markovian models.
We prove the first results that allow a non-parametric decomposition of spurious effects.
The described approach has several applications, ranging from explainable and fair AI to questions in epidemiology and medicine.
arXiv Detail & Related papers (2023-06-08T09:40:28Z)
- A Notion of Feature Importance by Decorrelation and Detection of Trends by Random Forest Regression [1.675857332621569]
We introduce a novel notion of feature importance based on the well-studied Gram-Schmidt decorrelation method.
We propose two estimators for identifying trends in the data using random forest regression.
arXiv Detail & Related papers (2023-03-02T11:01:49Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- On the Importance of Data Size in Probing Fine-tuned Models [18.69409646532038]
We show that the extent of encoded linguistic knowledge depends on the number of fine-tuning samples.
We show through a set of experiments that fine-tuning data size affects the recoverability of the changes made to the model's linguistic knowledge.
arXiv Detail & Related papers (2022-03-17T21:45:17Z)
- On Shapley Credit Allocation for Interpretability [1.52292571922932]
We emphasize the importance of asking the right question when interpreting the decisions of a learning model.
This paper quantifies feature relevance by weaving different natures of interpretations together with different measures as characteristic functions for Shapley symmetrization.
arXiv Detail & Related papers (2020-12-10T08:25:32Z)
- Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of individual training samples on test-time predictions (a minimal sketch of this classical estimate follows after this list).
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important to obtain high-quality influence estimates.
arXiv Detail & Related papers (2020-06-25T18:25:59Z)
- CausalVAE: Structured Causal Disentanglement in Variational Autoencoder [52.139696854386976]
The framework of variational autoencoder (VAE) is commonly used to disentangle independent factors from observations.
We propose a new VAE based framework named CausalVAE, which includes a Causal Layer to transform independent factors into causal endogenous ones.
Results show that the causal representations learned by CausalVAE are semantically interpretable, and their causal relationship as a Directed Acyclic Graph (DAG) is identified with good accuracy.
arXiv Detail & Related papers (2020-04-18T20:09:34Z)
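Two of the entries above concern influence functions. As a point of reference, the following is a minimal sketch of the classical up-weighting influence estimate (test-loss gradient times inverse Hessian times training-loss gradient) on a tiny synthetic logistic-regression problem; the model, data, regularisation, and choice of "test" point are illustrative assumptions and do not reproduce either paper's setup.

```python
# Minimal sketch of the classical influence-function estimate on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# Tiny L2-regularised logistic-regression problem (all data synthetic).
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ true_w + rng.normal(size=200) > 0).astype(float)
L2 = 1e-2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, steps=500, lr=0.1):
    """Plain gradient descent on the regularised logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y) + L2 * w
        w -= lr * grad
    return w

def example_grad(x, y_i, w):
    """Gradient of one example's loss (plus its share of the regulariser)."""
    return (sigmoid(x @ w) - y_i) * x + L2 * w

def hessian(X, w):
    """Hessian of the mean regularised logistic loss."""
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / len(X) + L2 * np.eye(X.shape[1])

w = fit(X, y)
H_inv = np.linalg.inv(hessian(X, w))
x_test, y_test = X[0], y[0]  # stand-in for a held-out test point (assumption)

# Classical up-weighting influence: I(z, z_test) = -g(z_test)^T H^{-1} g(z)
influence = np.array([
    -example_grad(x_test, y_test, w) @ H_inv @ example_grad(X[i], y[i], w)
    for i in range(len(X))
])
print("training points with largest |influence| on the test prediction:",
      np.argsort(-np.abs(influence))[:5])
```

The fragility result cited above concerns exactly this inverse-Hessian step: in deep, non-convex models the Hessian is ill-conditioned, so regularisation (the damping term L2 here) strongly affects the quality of the estimates.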
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.