Recommending Metamodel Concepts during Modeling Activities with
Pre-Trained Language Models
- URL: http://arxiv.org/abs/2104.01642v1
- Date: Sun, 4 Apr 2021 16:29:10 GMT
- Title: Recommending Metamodel Concepts during Modeling Activities with
Pre-Trained Language Models
- Authors: Martin Weyssow, Houari Sahraoui, Eugene Syriani
- Abstract summary: We propose an approach to assist a modeler in the design of a metamodel by recommending relevant domain concepts in several modeling scenarios.
Our approach does not require extracting knowledge from the domain or hand-designing completion rules.
We evaluate our approach on a test set containing 166 metamodels, unseen during model training, with more than 5000 test samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of conceptually sound metamodels that embody proper semantics in
relation to the application domain is particularly tedious in Model-Driven
Engineering. As metamodels define complex relationships between domain
concepts, it is crucial for a modeler to define these concepts thoroughly while
being consistent with respect to the application domain. We propose an approach
to assist a modeler in the design of a metamodel by recommending relevant
domain concepts in several modeling scenarios. Our approach does not require
extracting knowledge from the domain or hand-designing completion rules. Instead,
we design a fully data-driven approach using a deep learning model that is able
to abstract domain concepts by learning from both structural and lexical
metamodel properties in a corpus of thousands of independent metamodels. We
evaluate our approach on a test set containing 166 metamodels, unseen during
model training, with more than 5000 test samples. Our preliminary results
show that the trained model is able to provide accurate top-5 lists of
relevant recommendations for concept renaming scenarios. Although promising,
the results are less compelling for the scenario of the iterative construction
of the metamodel, in part because of the conservative strategy we use to
evaluate the recommendations.
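The abstract does not spell out the architecture, so the following is only a minimal sketch of the recommendation idea: serialize the partial metamodel as text, let a generic masked language model fill in the concept being named, and keep the top-5 candidates. The model choice, the flattening format, and the recommend_concepts helper are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: top-5 concept-name recommendations from a generic
# masked LM over a flattened metamodel fragment. Not the paper's actual model.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def recommend_concepts(context: str, top_k: int = 5) -> list[str]:
    """Rank candidate fillers for the masked concept name."""
    return [p["token_str"].strip() for p in fill_mask(context, top_k=top_k)]

# A metamodel fragment flattened to text, keeping lexical and structural cues.
context = "class Library contains class Book; class Book references class [MASK]"
print(recommend_concepts(context))  # e.g. candidates such as "author"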
Related papers
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce UNCURL, an adaptive task-aware pruning technique that reduces the number of experts per MoE layer offline, after training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z) - Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
Model merging is an efficient empowerment technique in the machine learning community.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z) - Aligning Models with Their Realization through Model-based Systems Engineering [0.0]
- Aligning Models with Their Realization through Model-based Systems Engineering [0.0]
We propose a method for aligning models with their realization through the application of model-based systems engineering.
Our approach facilitates a more seamless integration of models and implementation, fostering enhanced Business-IT alignment.
arXiv Detail & Related papers (2024-06-18T06:50:36Z) - Constraint based Modeling according to Reference Design [0.0]
Reference models, in the form of best practices, are an essential element for preserving knowledge as designs for reuse.
We present a generic approach for the formal description of reference models using semantic technologies and their application.
Multiple reference models can be used in the context of system-of-systems designs.
arXiv Detail & Related papers (2024-06-17T07:41:27Z) - Has Your Pretrained Model Improved? A Multi-head Posterior Based
- Has Your Pretrained Model Improved? A Multi-head Posterior Based Approach [25.927323251675386]
We leverage the meta-features associated with each entity as a source of worldly knowledge and employ entity representations from the models.
We propose using the consistency between these representations and the meta-features as a metric for evaluating pre-trained models.
Our method's effectiveness is demonstrated across various domains, including models with relational datasets, large language models and image models.
arXiv Detail & Related papers (2024-01-02T17:08:26Z) - A Framework for Monitoring and Retraining Language Models in Real-World
- A Framework for Monitoring and Retraining Language Models in Real-World Applications [3.566775910781198]
Continuous model monitoring and model retraining are required in many real-world applications.
There are multiple reasons for retraining, including data or concept drift, which may be reflected on the model performance as monitored by an appropriate metric.
We examine the impact of various retraining decision points on crucial factors, such as model performance and resource utilization, in the context of Multilabel Classification models.
arXiv Detail & Related papers (2023-11-16T14:32:18Z) - ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model
- ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse [59.500060790983994]
This paper introduces ZhiJian, a comprehensive and user-friendly toolbox for model reuse, utilizing the PyTorch backend.
ZhiJian presents a novel paradigm that unifies diverse perspectives on model reuse, encompassing target architecture construction with PTM, tuning target model with PTM, and PTM-based inference.
arXiv Detail & Related papers (2023-08-17T19:12:13Z) - PAMI: partition input and aggregate outputs for model interpretation [69.42924964776766]
In this study, a simple yet effective visualization framework called PAMI is proposed based on the observation that deep learning models often aggregate features from local regions for model predictions.
The basic idea is to mask the majority of the input and use the corresponding model output as the relative contribution of the preserved input part to the original model prediction.
Extensive experiments on multiple tasks confirm the proposed method performs better than existing visualization approaches in more precisely finding class-specific input regions.
arXiv Detail & Related papers (2023-02-07T08:48:34Z) - Minimal Value-Equivalent Partial Models for Scalable and Robust Planning
- Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning [56.50123642237106]
Common practice in model-based reinforcement learning is to learn models that model every aspect of the agent's environment.
We argue that such models are not particularly well-suited for performing scalable and robust planning in lifelong reinforcement learning scenarios.
We propose new kinds of models that only model the relevant aspects of the environment, which we call "minimal value-equivalent partial models".
arXiv Detail & Related papers (2023-01-24T16:40:01Z) - Re-parameterizing Your Optimizers rather than Architectures [119.08740698936633]
We propose a novel paradigm of incorporating model-specific prior knowledge into optimizers and using them to train generic (simple) models.
As an implementation, we propose a novel methodology to add prior knowledge by modifying the gradients according to a set of model-specific hyper-parameters.
We focus on a VGG-style plain model and showcase that such a simple model, trained with a re-parameterized optimizer and referred to as Rep-VGG, performs on par with the recent well-designed models.
arXiv Detail & Related papers (2022-05-30T16:55:59Z) - An Ample Approach to Data and Modeling [1.0152838128195467]
- An Ample Approach to Data and Modeling [1.0152838128195467]
We describe a framework for modeling how models can be built that integrates concepts and methods from a wide range of fields.
The reference M* meta model framework is presented, which relies critically on associating whole datasets and respective models in terms of a strict equivalence relation.
Several considerations about how the developed framework can provide insights about data clustering, complexity, collaborative research, deep learning, and creativity are then presented.
arXiv Detail & Related papers (2021-10-05T01:26:09Z)