Towards the interoperability of low-code platforms
- URL: http://arxiv.org/abs/2412.05075v1
- Date: Fri, 06 Dec 2024 14:33:34 GMT
- Title: Towards the interoperability of low-code platforms
- Authors: Iván Alfonso, Aaron Conrardy, Jordi Cabot
- Abstract summary: Low-code platforms (LCPs) are becoming popular across various industries.
However, barriers to adoption remain; among them, vendor lock-in is a major concern, especially given the lack of interoperability between these platforms.
This work proposes an approach to improve the interoperability of LCPs by (semi)automatically migrating models specified in one platform to another.
- Score: 1.7450893625541586
- Abstract: With the promise of accelerating software development, low-code platforms (LCPs) are becoming popular across various industries. Nevertheless, there are still barriers hindering their adoption. Among them, vendor lock-in is a major concern, especially considering the lack of interoperability between these platforms. Typically, after modeling an application in one LCP, migrating to another requires starting from scratch, remodeling everything (the data model, the graphical user interface, workflows, etc.) in the new platform. To overcome this situation, this work proposes an approach to improve the interoperability of LCPs by (semi)automatically migrating models specified in one platform to another. The concrete migration path depends on the capabilities of the source and target tools. We first analyze popular LCPs, characterize their import and export alternatives, and define transformations between those data formats when available. This is then complemented with an LLM-based solution, where the image recognition features of large language models are employed to migrate models based on a simple image export of the model at hand. The full pipelines are implemented on top of the BESSER modeling framework, which acts as a pivot representation between the tools.
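To make the pipeline concrete, here is a minimal sketch of the pivot-based migration idea. All class and function names are hypothetical illustrations, not the actual BESSER API:

```python
# Hypothetical sketch of a pivot-based migration pipeline; names are
# illustrative and do not reflect the actual BESSER framework API.
import json
from dataclasses import dataclass, field


@dataclass
class PivotModel:
    """Platform-neutral representation of a low-code application."""
    entities: list = field(default_factory=list)   # data model
    screens: list = field(default_factory=list)    # graphical user interface
    workflows: list = field(default_factory=list)  # process logic


def import_from_source(artifact: str, source_lcp: str) -> PivotModel:
    """Parse the source platform's structured export into the pivot."""
    raise NotImplementedError(f"no importer registered for {source_lcp}")


def import_from_image(image_path: str) -> PivotModel:
    """Fallback path: use a vision-capable LLM to recover the model from
    a simple image export. Stubbed here with a placeholder result."""
    return PivotModel(entities=[f"<recovered from {image_path}>"])


def export_to_target(model: PivotModel, target_lcp: str) -> str:
    """Serialize the pivot into the target platform's import format
    (stubbed as JSON; real exporters are platform-specific)."""
    return json.dumps({"target": target_lcp, "entities": model.entities,
                       "screens": model.screens, "workflows": model.workflows})


def migrate(artifact: str, source_lcp: str, target_lcp: str) -> str:
    """Prefer the structured path; fall back to image-based migration."""
    try:
        pivot = import_from_source(artifact, source_lcp)
    except NotImplementedError:
        pivot = import_from_image(artifact)
    return export_to_target(pivot, target_lcp)
```

A pivot representation keeps the number of transformations linear in the number of platforms (one importer and one exporter each) rather than quadratic (one converter per platform pair).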
Related papers
- Matchmaker: Self-Improving Large Language Model Programs for Schema Matching [60.23571456538149]
We propose a compositional language model program for schema matching, comprising candidate generation, refinement, and confidence scoring.
Matchmaker self-improves in a zero-shot manner without the need for labeled demonstrations.
Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches.
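A minimal sketch of this three-stage program (the `ask_llm` helper is a hypothetical stand-in for any chat-completion client, and the prompts are simplified; this is not the paper's actual code):

```python
# Illustrative generate -> refine -> score pipeline for schema matching.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")


def match_attribute(source_attr: str, target_schema: list[str]) -> tuple[str, float]:
    # 1. Candidate generation: propose plausible target attributes.
    candidates = ask_llm(
        f"Which attributes in {target_schema} could match '{source_attr}'? "
        "Answer one per line."
    ).splitlines()
    # 2. Refinement: have the model critique and prune its own candidates.
    best = ask_llm(
        f"Of {candidates}, which single attribute best matches "
        f"'{source_attr}'? Answer with the attribute only."
    ).strip()
    # 3. Confidence scoring: a self-assessed score lets low-confidence
    #    matches be deferred to a human, supporting zero-shot operation.
    score = float(ask_llm(
        f"How confident are you that '{source_attr}' matches '{best}'? "
        "Answer with a number between 0 and 1."
    ))
    return best, score
```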
arXiv Detail & Related papers (2024-10-31T16:34:03Z) - The Compressor-Retriever Architecture for Language Model OS [20.56093501980724]
This paper explores the concept of using a language model as the core component of an operating system (OS).
A key challenge in realizing such an LM OS is managing the life-long context and ensuring statefulness across sessions.
We introduce compressor-retriever, a model-agnostic architecture designed for life-long context management.
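One plausible reading of the two components, as a toy sketch (the names, compression, and scoring below are illustrative, not the paper's architecture):

```python
# Toy compressor-retriever loop for long-lived context: old turns are
# compressed into summaries, and only the summaries most relevant to
# the current query are re-injected into the model's context window.

def compress(turn: str) -> str:
    return turn[:80]  # stand-in for a learned or LLM-based summarizer

def relevance(query: str, summary: str) -> int:
    return len(set(query.split()) & set(summary.split()))  # toy overlap score

class ContextStore:
    def __init__(self, top_k: int = 3):
        self.summaries: list[str] = []
        self.top_k = top_k

    def add(self, turn: str) -> None:
        self.summaries.append(compress(turn))

    def retrieve(self, query: str) -> list[str]:
        ranked = sorted(self.summaries,
                        key=lambda s: relevance(query, s), reverse=True)
        return ranked[: self.top_k]
```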
arXiv Detail & Related papers (2024-09-02T23:28:15Z) - Unlocking the Potential of Model Merging for Low-Resource Languages [66.7716891808697]
Adapting large language models to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT).
We propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training.
Experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data.
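The core operation can be as simple as weight-space interpolation between two same-architecture checkpoints; a minimal sketch below (the paper may use a more elaborate merging scheme, and the file names are hypothetical):

```python
# Weight-space model merging: linearly interpolate the parameters of
# two same-architecture checkpoints (e.g. a language expert and a task
# expert), with no additional training involved.
import torch


def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Return alpha * sd_a + (1 - alpha) * sd_b, tensor by tensor."""
    assert sd_a.keys() == sd_b.keys(), "models must share an architecture"
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}


# Usage (assuming two compatible Llama-2-7B fine-tunes saved as state dicts):
# merged = merge_state_dicts(torch.load("lang_expert.pt"),
#                            torch.load("task_expert.pt"), alpha=0.5)
# model.load_state_dict(merged)
```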
arXiv Detail & Related papers (2024-07-04T15:14:17Z) - Transformer Architecture for NetsDB [0.0]
We create an end-to-end implementation of a transformer for deep learning model serving in NetsDB.
We load the model's weights for distributed processing, deployment, and efficient inferencing.
arXiv Detail & Related papers (2024-05-08T04:38:36Z) - From Image to UML: First Results of Image Based UML Diagram Generation Using LLMs [1.961305559606562]
In software engineering processes, systems are first specified using a modeling language.
Large Language Models (LLMs) are used to generate the formal representation of (UML) models from a given drawing.
More specifically, we have evaluated the capabilities of different LLMs to convert images of class diagrams into the actual models represented in the images.
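A hedged sketch of the image-to-model step; `vision_llm` is a hypothetical stand-in for any multimodal chat API, and PlantUML is used here as one possible textual target rather than the paper's exact output format:

```python
# Send a class-diagram image to a vision-capable LLM and ask for an
# equivalent textual model specification.
import base64


def vision_llm(prompt: str, image_b64: str) -> str:
    raise NotImplementedError("plug in a multimodal LLM client here")


def diagram_image_to_model(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return vision_llm(
        "This image shows a UML class diagram. Return an equivalent "
        "PlantUML specification: every class, attribute (with type), "
        "and association, and nothing else.",
        image_b64,
    )
```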
arXiv Detail & Related papers (2024-04-17T13:33:11Z) - Emerging Platforms Meet Emerging LLMs: A Year-Long Journey of Top-Down Development [20.873143073842705]
We introduce TapML, a top-down approach and tooling designed to streamline the deployment of machine learning systems on diverse platforms.
Unlike traditional bottom-up methods, TapML automates unit testing and adopts a migration-based strategy for gradually offloading model computations.
TapML was developed and applied through a year-long, real-world effort that successfully deployed significant emerging models on emerging platforms.
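The migration-based strategy can be illustrated with a toy dispatcher that moves one operator at a time onto the new backend, and only after automated unit tests pass (illustrative names, not TapML's actual API):

```python
# Migration-based offloading with automated unit tests: every operator
# starts on a trusted reference backend and is switched to the target
# backend only once its outputs match the reference.
import numpy as np

REFERENCE = {"relu": lambda x: np.maximum(x, 0.0)}
TARGET = {"relu": lambda x: np.where(x > 0, x, 0.0)}  # pretend new backend
DISPATCH = dict(REFERENCE)  # start fully on the reference backend


def offload(op: str, trials: int = 10, tol: float = 1e-6) -> bool:
    """Migrate one op to the target backend if it passes auto unit tests."""
    for _ in range(trials):
        x = np.random.randn(4, 4)
        if not np.allclose(REFERENCE[op](x), TARGET[op](x), atol=tol):
            return False  # keep the reference implementation
    DISPATCH[op] = TARGET[op]
    return True
```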
arXiv Detail & Related papers (2024-04-14T06:09:35Z) - Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation.
arXiv Detail & Related papers (2023-10-05T09:09:44Z) - Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations [53.76682562935373]
We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools.
InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
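A toy version of the "LLM as brain, recommenders as tools" loop (the `ask_llm` helper, the tool implementations, and the dispatch scheme are hypothetical placeholders, not the actual InteRecAgent design):

```python
# LLM brain + recommender tools pipeline.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")


TOOLS = {
    "retrieve": lambda query: ["item_1", "item_2", "item_3"],  # candidate retrieval
    "rank": lambda items: sorted(items),                       # candidate ranking
}


def recommend(user_utterance: str) -> str:
    # 1. The LLM "brain" turns free-form chat into a structured query.
    query = ask_llm(f"Extract a short search query from: {user_utterance!r}")
    # 2. Recommender models act as tools: retrieve, then rank, candidates.
    ranked = TOOLS["rank"](TOOLS["retrieve"](query))
    # 3. The LLM verbalizes the tool output as a conversational reply.
    return ask_llm(f"Present these recommendations conversationally: {ranked}")
```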
arXiv Detail & Related papers (2023-08-31T07:36:44Z) - Incremental Model Transformations with Triple Graph Grammars for Multi-version Models [1.6371451481715191]
We propose a technique for handling the transformation of multiple versions of a source model into corresponding versions of a target model.
Our approach is based on the well-known formalism of triple graph grammars and the encoding of model version histories called multi-version models.
arXiv Detail & Related papers (2023-07-05T08:26:18Z) - Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
arXiv Detail & Related papers (2022-06-16T16:46:32Z) - A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
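One standard way to combine Gaussian (Laplace-approximated) client posteriors on the server is a product of Gaussians: sum the precisions and precision-weight the means. Whether this matches the paper's exact update rule is an assumption; a diagonal-covariance sketch:

```python
# Product-of-Gaussians aggregation for diagonal Laplace posteriors.
# Each client i contributes N(mean_i, diag(prec_i)^-1); elementwise:
#   prec_global = sum_i prec_i
#   mean_global = (sum_i prec_i * mean_i) / prec_global
import numpy as np


def aggregate(means: list[np.ndarray], precisions: list[np.ndarray]):
    """Combine per-client Gaussian posteriors into a global Gaussian."""
    prec_global = np.sum(precisions, axis=0)
    mean_global = sum(p * m for p, m in zip(precisions, means)) / prec_global
    return mean_global, prec_global
```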
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.