Incremental Model Transformations with Triple Graph Grammars for
Multi-version Models
- URL: http://arxiv.org/abs/2307.02105v2
- Date: Fri, 7 Jul 2023 12:49:21 GMT
- Title: Incremental Model Transformations with Triple Graph Grammars for
Multi-version Models
- Authors: Matthias Barkowsky and Holger Giese
- Abstract summary: We propose a technique for handling the transformation of multiple versions of a source model into corresponding versions of a target model.
Our approach is based on the well-known formalism of triple graph grammars and the encoding of model version histories called multi-version models.
- Score: 1.6371451481715191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Like conventional software projects, projects in model-driven software
engineering require adequate management of multiple versions of development
artifacts, which importantly includes tolerating temporary inconsistencies. In
previous work, multi-version models for model-driven software engineering have
been introduced, which allow checking well-formedness and finding merge
conflicts for multiple versions of a model at once. However, multi-version
models also have to handle situations where different artifacts, that is,
different models, are linked via automatic model transformations.
In this paper, we propose a technique for jointly handling the transformation
of multiple versions of a source model into corresponding versions of a target
model, which enables the use of a more compact representation that may afford
improved execution time of both the transformation and further analysis
operations. Our approach is based on the well-known formalism of triple graph
grammars and the aforementioned encoding of model version histories called
multi-version models. In addition to batch transformation of an entire model
version history, the technique also covers incremental synchronization of
changes in the framework of multi-version models.
We show the correctness of our approach with respect to the standard
semantics of triple graph grammars and conduct an empirical evaluation to
investigate the performance of our technique regarding execution time and
memory consumption. Our results indicate that the proposed technique affords
lower memory consumption and may improve execution time for batch
transformation of large version histories, but can also come with computational
overhead in unfavorable cases.
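To make the idea of a multi-version model more concrete, the following Python sketch shows one possible encoding: a single graph whose elements are annotated with the set of versions they belong to, together with a toy batch "forward transformation" that translates every source element exactly once and copies its version annotation to the created target element. This is only an illustration of why a single pass over the multi-version encoding can cover all versions at once; the names (MVGraph, MVNode, forward_transform) are invented here and are not the paper's actual implementation, which applies triple graph grammar rules to graph patterns and also supports incremental synchronization.

```python
# Illustrative sketch only: a toy multi-version model and a trivial batch
# "forward transformation". Real TGG rules match graph patterns, maintain a
# correspondence graph, and handle edges; none of that is modeled here.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class MVNode:
    """A model element annotated with the versions in which it is present."""
    ident: str
    kind: str
    versions: frozenset[str]


@dataclass
class MVGraph:
    """A multi-version model: one graph encoding an entire version history."""
    nodes: dict[str, MVNode] = field(default_factory=dict)

    def add(self, node: MVNode) -> None:
        self.nodes[node.ident] = node

    def project(self, version: str) -> list[MVNode]:
        """Recover a single conventional model version from the encoding."""
        return [n for n in self.nodes.values() if version in n.versions]


def forward_transform(source: MVGraph) -> tuple[MVGraph, dict[str, str]]:
    """Translate every source element exactly once, copying its version
    annotation to the created target element, so all versions are covered
    jointly instead of transforming each version separately."""
    target = MVGraph()
    correspondence: dict[str, str] = {}  # source id -> target id
    for node in source.nodes.values():
        tgt = MVNode(ident=f"t_{node.ident}", kind=f"Target{node.kind}",
                     versions=node.versions)
        target.add(tgt)
        correspondence[node.ident] = tgt.ident
    return target, correspondence


if __name__ == "__main__":
    src = MVGraph()
    # Element 'a' exists in all three versions, 'b' only from version v2 on.
    src.add(MVNode("a", "Class", frozenset({"v1", "v2", "v3"})))
    src.add(MVNode("b", "Class", frozenset({"v2", "v3"})))

    tgt, corr = forward_transform(src)
    print([n.ident for n in tgt.project("v1")])  # ['t_a']
    print([n.ident for n in tgt.project("v3")])  # ['t_a', 't_b']
```

Projecting the transformed graph onto a single version yields what a conventional, version-by-version transformation would have produced, which is roughly the kind of correctness property the abstract refers to with respect to standard TGG semantics.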
Related papers
- A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z) - HM3: Hierarchical Multi-Objective Model Merging for Pretrained Models [28.993221775758702]
Model merging is a technique that combines multiple large pretrained models into a single model with enhanced performance and broader task adaptability.
We train policy and value networks using offline sampling of weight vectors, which are then employed for the online optimization of merging strategies.
This paper marks a significant advance toward more flexible and comprehensive model merging techniques.
arXiv Detail & Related papers (2024-09-27T16:31:31Z) - Knowledge Fusion By Evolving Weights of Language Models [5.354527640064584]
This paper examines the approach of integrating multiple models into a unified model.
We propose a knowledge fusion method named Evolver, inspired by evolutionary algorithms.
arXiv Detail & Related papers (2024-06-18T02:12:34Z) - UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model UniTST containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
arXiv Detail & Related papers (2024-06-07T14:39:28Z) - FusionBench: A Comprehensive Benchmark of Deep Model Fusion [78.80920533793595]
Deep model fusion is a technique that unifies the predictions or parameters of several deep neural networks into a single model.
FusionBench is the first comprehensive benchmark dedicated to deep model fusion.
arXiv Detail & Related papers (2024-06-05T13:54:28Z) - Merging Text Transformer Models from Different Initializations [7.768975909119287]
We investigate the extent to which separate Transformer minima learn similar features.
We propose a model merging technique to investigate the relationship between these minima in the loss landscape.
Our results show that the minima of these models are less sharp and isolated than previously understood.
arXiv Detail & Related papers (2024-03-01T21:16:29Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - Exploring and Evaluating Personalized Models for Code Generation [9.25440316608194]
We evaluate transformer model fine-tuning for personalization.
We consider three key approaches: (i) custom fine-tuning, which allows all the model parameters to be tuned.
We compare these fine-tuning strategies for code generation and discuss the potential generalization and cost benefits of each in various deployment scenarios.
arXiv Detail & Related papers (2022-08-29T23:28:46Z) - Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails to deliver high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning.
arXiv Detail & Related papers (2022-02-22T02:33:54Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.