Enterprise Architecture Model Transformation Engine
- URL: http://arxiv.org/abs/2108.13169v1
- Date: Sun, 15 Aug 2021 11:10:42 GMT
- Title: Enterprise Architecture Model Transformation Engine
- Authors: Erik Heiland, Peter Hillmann, Andreas Karcher
- Abstract summary: This paper presents a transformation engine to convert enterprise architecture models between several languages.
The transformation process is defined by various pattern matching techniques using a rule-based description language.
It uses set theory and first-order logic as the basis for an intuitive description.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With increasing linkage within value chains, the IT systems of different
companies are also being connected with each other. This enables the
integration of services within the Industry 4.0 movement in order to improve
the quality and performance of processes. Enterprise architecture models
form the basis for this through better business-IT alignment. However, the
heterogeneity of modeling frameworks and description languages makes such a
combination considerably difficult, particularly due to differences in syntax,
semantics, and relations. Therefore, this paper presents a transformation engine
to convert enterprise architecture models between several languages. We
developed the first generic translation approach that is free of a specific
meta-model and is flexibly adaptable to arbitrary modeling languages. The
transformation process is defined by various pattern matching techniques using
a rule-based description language, which uses set theory and first-order logic
as the basis for an intuitive description. The concept is practically evaluated
using an example from a large German IT service provider. Nevertheless, the
approach is applicable across a wide range of enterprise architecture
frameworks.
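The paper does not publish its rule language, so the following is only a minimal sketch of the rule-based matching idea the abstract describes: a model is treated as a set of typed elements, and each rule pairs a first-order condition over source elements with a constructor for the corresponding target element. All names here (`Element`, `Rule`, the ArchiMate-to-BPMN mapping) are illustrative assumptions, not the authors' actual notation.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass(frozen=True)
class Element:
    """One node of an enterprise architecture model (hypothetical shape)."""
    ident: str
    etype: str          # element type in the respective notation
    attrs: tuple = ()

@dataclass
class Rule:
    # A rule couples a first-order condition over source elements
    # with a constructor for the corresponding target-language element.
    matches: Callable[[Element], bool]
    produce: Callable[[Element], Element]

def transform(model: Set[Element], rules: list) -> Set[Element]:
    """Map every source element for which some rule's condition holds
    (an existential first-order match) to its target-language form;
    elements with no applicable rule are dropped."""
    target = set()
    for elem in model:
        for rule in rules:
            if rule.matches(elem):
                target.add(rule.produce(elem))
                break   # first applicable rule wins
    return target

# Hypothetical mapping: an ArchiMate "BusinessProcess" becomes a BPMN "Task".
rules = [
    Rule(matches=lambda e: e.etype == "BusinessProcess",
         produce=lambda e: Element(e.ident, "Task", e.attrs)),
]
source = {Element("p1", "BusinessProcess"),
          Element("a1", "ApplicationComponent")}
print(transform(source, rules))
```

Keeping the model as a plain set and the rule condition as a predicate mirrors the set-theoretic, first-order framing the abstract mentions; a real engine would additionally match patterns over relations between elements, not just single elements.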
Related papers
- UQE: A Query Engine for Unstructured Databases [71.49289088592842]
We investigate the potential of Large Language Models to enable unstructured data analytics.
We propose a new Universal Query Engine (UQE) that directly interrogates and draws insights from unstructured data collections.
arXiv Detail & Related papers (2024-06-23T06:58:55Z)
- SymbolicAI: A framework for logic-based approaches combining generative models and solvers [9.841285581456722]
We introduce SymbolicAI, a versatile and modular framework employing a logic-based approach to concept learning and flow management in generative processes.
We treat large language models (LLMs) as semantic solvers that execute tasks based on both natural and formal language instructions.
arXiv Detail & Related papers (2024-02-01T18:50:50Z)
- Towards More Unified In-context Visual Understanding [74.55332581979292]
We present a new ICL framework for visual understanding with multi-modal output enabled.
First, we quantize and embed both text and visual prompt into a unified representational space.
Then a decoder-only sparse transformer architecture is employed to perform generative modeling on them.
arXiv Detail & Related papers (2023-12-05T06:02:21Z)
- Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational AutoEncoders [5.037881619912574]
In this paper, we investigate latent space separation methods for structural syntactic injection in Transformer-based VAEs.
Specifically, we explore how syntactic structures can be leveraged in the encoding stage through the integration of graph-based and sequential models.
Our empirical evaluation, carried out on natural language sentences and mathematical expressions, reveals that the proposed end-to-end VAE architecture can result in a better overall organisation of the latent space.
arXiv Detail & Related papers (2023-11-14T22:47:23Z)
- Enterprise Model Library for Business-IT-Alignment [0.0]
This work presents the reference architecture of a repository for models intended for reuse.
It includes the design of the data structure for filing, the processes for administration, and the possibilities for usage.
arXiv Detail & Related papers (2022-11-21T11:36:54Z)
- Autoregressive Structured Prediction with Language Models [73.11519625765301]
We describe an approach to model structures as sequences of actions in an autoregressive manner with PLMs.
Our approach achieves the new state-of-the-art on all the structured prediction tasks we looked at.
arXiv Detail & Related papers (2022-10-26T13:27:26Z)
- Slimmable Domain Adaptation [112.19652651687402]
We introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank.
Our framework surpasses other competing approaches by a very large margin on multiple benchmarks.
arXiv Detail & Related papers (2022-06-14T06:28:04Z)
- GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures [57.46093180685175]
We demonstrate a set of modifications to the structure of a Transformer layer, producing a more efficient architecture.
We add a convolutional module to complement the self-attention module, decoupling the learning of local and global interactions.
We apply the resulting architecture to language representation learning and demonstrate its superior performance compared to BERT models of different scales.
arXiv Detail & Related papers (2021-06-10T15:41:53Z)
- E.T.: Entity-Transformers. Coreference augmented Neural Language Model for richer mention representations via Entity-Transformer blocks [3.42658286826597]
We present an extension over the Transformer-block architecture used in neural language models, specifically in GPT2.
Our model, GPT2E, extends the Transformer layers architecture of GPT2 to Entity-Transformers, an architecture designed to handle coreference information when present.
arXiv Detail & Related papers (2020-11-10T22:28:00Z)
- Neural Entity Linking: A Survey of Models Based on Deep Learning [82.43751915717225]
This survey presents a comprehensive description of recent neural entity linking (EL) systems developed since 2015.
Its goal is to systematize the design features of neural entity linking systems and compare their performance to notable classic methods on common benchmarks.
The survey touches on applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models.
arXiv Detail & Related papers (2020-05-31T18:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.