Mining Frequent Structures in Conceptual Models
- URL: http://arxiv.org/abs/2406.07129v1
- Date: Tue, 11 Jun 2024 10:24:02 GMT
- Title: Mining Frequent Structures in Conceptual Models
- Authors: Mattia Fumagalli, Tiago Prince Sales, Pedro Paulo F. Barcelos, Giovanni Micale, Vadim Zaytsev, Diego Calvanese, Giancarlo Guizzardi
- Abstract summary: We propose a general approach to the problem of discovering frequent structures in conceptual modeling languages.
We use the combination of a frequent subgraph mining algorithm and graph manipulation techniques.
The primary objective is to offer a support facility for language engineers.
- Score: 2.841785306638839
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The problem of using structured methods to represent knowledge is well-known in conceptual modeling and has been studied for many years. It has been proven that adopting modeling patterns represents an effective structural method. Patterns are, indeed, generalizable recurrent structures that can be exploited as solutions to design problems. They aid in understanding and improving the process of creating models. The undeniable value of using patterns in conceptual modeling was demonstrated in several experimental studies. However, discovering patterns in conceptual models is widely recognized as a highly complex task and a systematic solution to pattern identification is currently lacking. In this paper, we propose a general approach to the problem of discovering frequent structures, as they occur in conceptual modeling languages. As proof of concept for our scientific contribution, we provide an implementation of the approach, by focusing on UML class diagrams, in particular OntoUML models. This implementation comprises an exploratory tool, which, through the combination of a frequent subgraph mining algorithm and graph manipulation techniques, can process multiple conceptual models and discover recurrent structures according to multiple criteria. The primary objective is to offer a support facility for language engineers. This can be employed to leverage both good and bad modeling practices, to evolve and maintain the conceptual modeling language, and to promote the reuse of encoded experience in designing better models with the given language.
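The abstract describes a tool that encodes conceptual models as graphs and applies frequent subgraph mining to surface recurrent structures above a support threshold. As a minimal illustration of that idea (not the authors' implementation), models can be encoded as sets of labeled edges and a structure kept when it occurs in at least `min_support` models; the relation names and model data below are hypothetical:

```python
from collections import Counter

def mine_frequent_edges(models, min_support):
    """Count labeled edges (source type, relation, target type) across a
    collection of graph-encoded models and keep those that occur in at
    least `min_support` of them."""
    counts = Counter()
    for edges in models:
        for edge in set(edges):  # count each structure at most once per model
            counts[edge] += 1
    return {edge: c for c, edge in ((c, e) for e, c in counts.items()) if c >= min_support}

# Toy OntoUML-like models, each encoded as labeled edges (hypothetical data).
models = [
    [("Person", "specializes", "Agent"), ("Organization", "specializes", "Agent")],
    [("Person", "specializes", "Agent"), ("Student", "specializes", "Person")],
    [("Company", "specializes", "Agent"), ("Person", "specializes", "Agent")],
]

frequent = mine_frequent_edges(models, min_support=2)
# Only ("Person", "specializes", "Agent") appears in two or more models.
```

Real subgraph miners (e.g. gSpan-style algorithms) extend such single-edge patterns into larger connected subgraphs while pruning infrequent candidates; this sketch only shows the support-counting step.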
Related papers
- A Concept-Centric Approach to Multi-Modality Learning [3.828996378105142]
We introduce a new multi-modality learning framework to create a more efficient AI system.
Our framework achieves on par with benchmark models while demonstrating more efficient learning curves.
arXiv Detail & Related papers (2024-12-18T13:40:21Z)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [80.32412260877628]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- Has Your Pretrained Model Improved? A Multi-head Posterior Based Approach [25.927323251675386]
We leverage the meta-features associated with each entity as a source of worldly knowledge and employ entity representations from the models.
We propose using the consistency between these representations and the meta-features as a metric for evaluating pre-trained models.
Our method's effectiveness is demonstrated across various domains, including models with relational datasets, large language models and image models.
arXiv Detail & Related papers (2024-01-02T17:08:26Z)
- Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z)
- A Meta-Learning Approach to Population-Based Modelling of Structures [0.0]
A major problem of machine-learning approaches in structural dynamics is the frequent lack of structural data.
Inspired by the recently-emerging field of population-based structural health monitoring, this work attempts to create models that are able to transfer knowledge within populations of structures.
The models trained using meta-learning approaches are able to outperform conventional machine learning methods when making inferences about structures in the population.
arXiv Detail & Related papers (2023-02-15T23:01:59Z)
- Language Model Cascades [72.18809575261498]
Repeated interactions at test-time with a single model, or the composition of multiple models together, further expands capabilities.
Cases with control flow and dynamic structure require techniques from probabilistic programming.
We formalize several existing techniques from this perspective, including scratchpads / chain of thought, verifiers, STaR, selection-inference, and tool use.
arXiv Detail & Related papers (2022-07-21T07:35:18Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Artefact Retrieval: Overview of NLP Models with Knowledge Base Access [18.098224374478598]
This paper systematically describes the typology of artefacts (items retrieved from a knowledge base), retrieval mechanisms and the way these artefacts are fused into the model.
Most of the focus is given to language models, though we also show how question answering, fact-checking, and dialogue models fit into this system.
arXiv Detail & Related papers (2022-01-24T13:15:33Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Universal Representation Learning of Knowledge Bases by Jointly Embedding Instances and Ontological Concepts [39.99087114075884]
We propose a novel two-view KG embedding model, JOIE, with the goal to produce better knowledge embedding.
JOIE employs cross-view and intra-view modeling that learn on multiple facets of the knowledge base.
Our model is trained on large-scale knowledge bases that consist of massive instances and their corresponding ontological concepts connected via a (small) set of cross-view links.
arXiv Detail & Related papers (2021-03-15T03:24:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.