Towards the interoperability of low-code platforms
- URL: http://arxiv.org/abs/2412.05075v1
- Date: Fri, 06 Dec 2024 14:33:34 GMT
- Title: Towards the interoperability of low-code platforms
- Authors: Iván Alfonso, Aaron Conrardy, Jordi Cabot
- Abstract summary: Low-code platforms (LCPs) are becoming popular across various industries. Among the barriers hindering their adoption, vendor lock-in is a major concern, especially given the lack of interoperability between these platforms. This work proposes an approach to improve the interoperability of LCPs by (semi)automatically migrating models specified in one platform to another.
- Score: 1.7450893625541586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the promise of accelerating software development, low-code platforms (LCPs) are becoming popular across various industries. Nevertheless, there are still barriers hindering their adoption. Among them, vendor lock-in is a major concern, especially considering the lack of interoperability between these platforms. Typically, after modeling an application in one LCP, migrating to another requires remodeling everything from scratch (the data model, the graphical user interface, workflows, etc.) in the new platform. To overcome this situation, this work proposes an approach to improve the interoperability of LCPs by (semi)automatically migrating models specified in one platform to another. The concrete migration path depends on the capabilities of the source and target tools. We first analyze popular LCPs, characterize their import and export alternatives, and define transformations between those data formats when available. This is then complemented with an LLM-based solution, where the image recognition features of large language models are employed to migrate models based on a simple image export of the model at hand. The full pipelines are implemented on top of the BESSER modeling framework, which acts as a pivot representation between the tools.
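The pivot idea is straightforward to sketch: rather than writing a direct transformation between every pair of platforms, each platform only needs an importer into and an exporter out of a shared neutral representation. Below is a minimal Python sketch of that shape; the `PivotModel` classes and the importer/exporter functions are illustrative stand-ins, not BESSER's actual API.

```python
# Minimal sketch of pivot-based migration. The pivot classes and the
# import/export functions are illustrative; BESSER's real API differs.
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str
    type: str

@dataclass
class Entity:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

@dataclass
class PivotModel:
    """Platform-neutral representation (the role BESSER plays here)."""
    entities: List[Entity] = field(default_factory=list)

def import_from_source(path: str) -> PivotModel:
    """Hypothetical importer for a JSON export of the source LCP."""
    with open(path) as f:
        raw = json.load(f)
    return PivotModel(entities=[
        Entity(e["name"], [Attribute(a["name"], a["type"])
                           for a in e.get("attributes", [])])
        for e in raw.get("entities", [])])

def export_to_target(model: PivotModel) -> str:
    """Hypothetical exporter into the target LCP's import format."""
    return json.dumps({"entities": [
        {"name": e.name,
         "attributes": [{"name": a.name, "type": a.type} for a in e.attributes]}
        for e in model.entities]})

def migrate(path: str) -> str:
    # Source export -> pivot -> target import: each platform needs only
    # one importer and one exporter instead of pairwise transformations.
    return export_to_target(import_from_source(path))
```

With n platforms, the pivot reduces the transformations needed from O(n²) pairwise converters to 2n importers and exporters.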
Related papers
- Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo [90.78001821963008]
A wide range of LM applications require generating text that conforms to syntactic or semantic constraints.
We develop an architecture for controlled LM generation based on sequential Monte Carlo (SMC).
Our system builds on the framework of Lew et al. (2023) and integrates with its probabilistic programming language for language models.
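As a rough illustration of the SMC idea (not the paper's system), the sketch below keeps a population of partial sequences, weights each by a constraint, and resamples; a toy proposal distribution stands in for a real LM.

```python
# Toy sketch of SMC-style constrained generation: particles are partial
# sequences, weighted by constraint satisfaction and then resampled.
# random.choice over a tiny vocabulary stands in for a real LM.
import random

VOCAB = list("abc ")

def lm_propose(prefix: str) -> str:
    return random.choice(VOCAB)            # stand-in for LM sampling

def constraint_weight(prefix: str) -> float:
    return 0.0 if "cc" in prefix else 1.0  # example constraint: forbid "cc"

def smc_generate(n_particles: int = 8, length: int = 10) -> list:
    particles = [""] * n_particles
    for _ in range(length):
        particles = [p + lm_propose(p) for p in particles]
        weights = [constraint_weight(p) for p in particles]
        if sum(weights) == 0:              # degenerate case: keep everything
            weights = [1.0] * n_particles
        # Resampling focuses compute on constraint-satisfying continuations.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return particles

print(smc_generate())
```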
arXiv Detail & Related papers (2025-04-17T17:49:40Z)
- MCP Bridge: A Lightweight, LLM-Agnostic RESTful Proxy for Model Context Protocol Servers [0.5266869303483376]
MCP Bridge is a lightweight proxy that connects to multiple MCP servers and exposes their capabilities through a unified API.
The system implements a risk-based execution model with three security levels (standard execution, confirmation, and Docker isolation) while maintaining backward compatibility with standard MCP clients.
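A minimal sketch of such a risk-tiered dispatcher is shown below; the level names and behaviors are paraphrased from the summary, and the function is illustrative rather than MCP Bridge's actual interface.

```python
# Sketch of a three-level, risk-based execution model. Names and
# behaviors are illustrative, not MCP Bridge's actual API.
from enum import Enum

class RiskLevel(Enum):
    STANDARD = 1      # run directly
    CONFIRMATION = 2  # require explicit user confirmation first
    DOCKER = 3        # run inside an isolated container

def execute_tool(name: str, level: RiskLevel, confirmed: bool = False) -> str:
    if level is RiskLevel.STANDARD:
        return f"running {name} directly"
    if level is RiskLevel.CONFIRMATION:
        if not confirmed:
            raise PermissionError(f"{name} requires user confirmation")
        return f"running {name} after confirmation"
    return f"running {name} in a Docker sandbox"

print(execute_tool("list_files", RiskLevel.STANDARD))
print(execute_tool("delete_files", RiskLevel.CONFIRMATION, confirmed=True))
```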
arXiv Detail & Related papers (2025-04-11T22:19:48Z)
- MLKV: Efficiently Scaling up Large Embedding Model Training with Disk-based Key-Value Storage [22.848456481878568]
This paper presents MLKV, an efficient, reusable data storage framework designed to address the scalability challenges in embedding model training.
In experiments on open-source workloads, MLKV outperforms offloading strategies built on top of industrial-strength key-value stores by 1.6-12.6x.
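The core trick such systems rely on, keeping the embedding table on disk and paging rows in on demand, can be sketched with Python's standard `shelve` module (illustrative only, and assuming NumPy is available; MLKV's actual storage engine and interface differ).

```python
# Sketch of offloading embedding rows to a disk-based key-value store,
# in the spirit of MLKV. Illustrative only; MLKV's real interface differs.
import shelve
import numpy as np  # assumes NumPy is installed

class DiskEmbeddingStore:
    def __init__(self, path: str, dim: int):
        self.db = shelve.open(path)
        self.dim = dim

    def lookup(self, idx: int) -> np.ndarray:
        key = str(idx)
        if key not in self.db:
            # Lazily initialize rows so the full table never lives in RAM.
            self.db[key] = np.zeros(self.dim, dtype=np.float32)
        return self.db[key]

    def update(self, idx: int, grad: np.ndarray, lr: float = 0.1):
        self.db[str(idx)] = self.lookup(idx) - lr * grad

store = DiskEmbeddingStore("embeddings.db", dim=16)
store.update(42, np.ones(16, dtype=np.float32))
print(store.lookup(42)[:4])
```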
arXiv Detail & Related papers (2025-04-02T08:57:01Z)
- Matchmaker: Self-Improving Large Language Model Programs for Schema Matching [60.23571456538149]
We propose a compositional language model program for schema matching, comprising candidate generation, refinement, and confidence scoring.
Matchmaker self-improves in a zero-shot manner without the need for labeled demonstrations.
Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches.
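The compositional structure can be sketched as three chained LLM calls; `llm()` below is a hypothetical placeholder for whatever model API is used, and the prompts are illustrative rather than Matchmaker's own.

```python
# Sketch of a candidate-generate / refine / score pipeline in the style
# described above. llm() is a hypothetical stand-in for a real LLM call.
def llm(prompt: str) -> str:
    ...  # placeholder for an actual LLM API call

def candidate_generation(source_attr: str, target_schema: str):
    return llm(f"List attributes in {target_schema} that could match '{source_attr}'.")

def refine(source_attr: str, candidates) -> str:
    return llm(f"Pick the best match for '{source_attr}' from: {candidates}")

def confidence_score(source_attr: str, best) -> str:
    return llm(f"Rate 0-1 your confidence that '{source_attr}' maps to '{best}'.")

def match(source_attr: str, target_schema: str, threshold: float = 0.7):
    candidates = candidate_generation(source_attr, target_schema)
    best = refine(source_attr, candidates)
    score = float(confidence_score(source_attr, best) or 0.0)
    # Low-confidence matches are dropped rather than guessed.
    return best if score >= threshold else None
```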
arXiv Detail & Related papers (2024-10-31T16:34:03Z)
- The Compressor-Retriever Architecture for Language Model OS [20.56093501980724]
This paper explores the concept of using a language model as the core component of an operating system (OS).
A key challenge in realizing such an LM OS is managing the life-long context and ensuring statefulness across sessions.
We introduce compressor-retriever, a model-agnostic architecture designed for life-long context management.
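A bare-bones version of that loop might look like the following, where truncation stands in for a learned compressor and word overlap stands in for a learned retriever; both are placeholders, not the paper's components.

```python
# Sketch of a compressor-retriever loop for life-long context: old turns
# are compressed into an archive, and only the most relevant summaries
# are retrieved back into the active prompt. All components are stand-ins.
class ContextManager:
    def __init__(self, window: int = 4):
        self.window = window
        self.archive: list[str] = []   # compressed history, grows for life

    def compress(self, turn: str) -> str:
        return turn[:40]               # stand-in for a learned compressor

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Stand-in retriever: rank archived summaries by word overlap.
        score = lambda s: len(set(s.split()) & set(query.split()))
        return sorted(self.archive, key=score, reverse=True)[:k]

    def step(self, history: list[str], query: str) -> list[str]:
        # Anything older than the active window is compressed and archived.
        for old in history[:-self.window]:
            self.archive.append(self.compress(old))
        return self.retrieve(query) + history[-self.window:] + [query]
```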
arXiv Detail & Related papers (2024-09-02T23:28:15Z)
- Unlocking the Potential of Model Merging for Low-Resource Languages [66.7716891808697]
Adapting large language models to new languages typically involves continual pre-training (CT) followed by supervised fine-tuning (SFT).
We propose model merging as an alternative for low-resource languages, combining models with distinct capabilities into a single model without additional training.
Experiments based on Llama-2-7B demonstrate that model merging effectively endows LLMs for low-resource languages with task-solving abilities, outperforming CT-then-SFT in scenarios with extremely scarce data.
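One of the simplest members of the model-merging family, linear parameter interpolation, fits in a few lines (a sketch assuming two same-architecture PyTorch checkpoints; the paper evaluates more sophisticated merging schemes as well).

```python
# Sketch of merging two checkpoints by weighted parameter averaging,
# one simple instance of the model-merging family studied above.
import torch  # assumes PyTorch is installed

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Linear interpolation of parameters from two same-architecture models."""
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}

# Hypothetical usage: combine a language-capable model with a task-capable
# one, without any additional training.
# merged = merge_state_dicts(lang_lm.state_dict(), task_lm.state_dict(), 0.6)
# new_model.load_state_dict(merged)
```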
arXiv Detail & Related papers (2024-07-04T15:14:17Z)
- From Image to UML: First Results of Image Based UML Diagram Generation Using LLMs [1.961305559606562]
In software engineering processes, systems are first specified using a modeling language.
Large language models (LLMs) are used to generate the formal representation of UML models from a given drawing.
More specifically, we have evaluated the capabilities of different LLMs to convert images of class diagrams into the actual models represented in the images.
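The pipeline shape is simple to sketch: encode the diagram image, prompt a vision-capable LLM for structured output, and parse it. `vision_llm()` below is a hypothetical wrapper, not a specific vendor API, and the prompt is illustrative.

```python
# Sketch of the image-to-model idea: send a class-diagram image to a
# vision-capable LLM and ask for a structured representation back.
# vision_llm() is a hypothetical stand-in for a real multimodal API call.
import base64
import json

def vision_llm(image_b64: str, prompt: str) -> str:
    ...  # placeholder for an actual multimodal LLM call

def diagram_to_model(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    prompt = ("Extract the UML class diagram in this image as JSON with "
              "'classes', each having 'name', 'attributes', and 'associations'.")
    return json.loads(vision_llm(image_b64, prompt) or "{}")
```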
arXiv Detail & Related papers (2024-04-17T13:33:11Z)
- Emerging Platforms Meet Emerging LLMs: A Year-Long Journey of Top-Down Development [20.873143073842705]
We introduce TapML, a top-down approach and tooling designed to streamline the deployment of machine learning systems on diverse platforms.
Unlike traditional bottom-up methods, TapML automates unit testing and adopts a migration-based strategy for gradually offloading model computations.
TapML was developed and applied through a year-long, real-world effort that successfully deployed significant emerging models and platforms.
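The migration-based strategy can be caricatured as gating each operator offload on a unit test against a trusted reference backend; the sketch below is illustrative, not TapML's actual tooling.

```python
# Sketch of migration-based offloading: an operator moves from a trusted
# reference backend to the target backend only after a unit test confirms
# matching outputs. Both backends here are stand-ins.
import numpy as np

def ref_relu(x):     # reference (e.g., CPU) implementation
    return np.maximum(x, 0)

def target_relu(x):  # candidate implementation on the new platform
    return np.maximum(x, 0)

def migrate_op(ref_fn, target_fn, sample):
    # Gate the migration on a numerical unit test against the reference.
    if np.allclose(ref_fn(sample), target_fn(sample), atol=1e-5):
        return target_fn  # offload: use the target backend from now on
    return ref_fn         # keep computing on the reference backend

relu = migrate_op(ref_relu, target_relu, np.random.randn(8))
```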
arXiv Detail & Related papers (2024-04-14T06:09:35Z)
- Adapting Large Language Models for Content Moderation: Pitfalls in Data Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we introduce how to fine-tune an LLM that can be privately deployed for content moderation.
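Much of the data engineering for such a fine-tune reduces to casting moderation labels as instruction-completion pairs; a minimal, purely illustrative shaping step (labels and prompt wording are assumptions, not the paper's):

```python
# Sketch of shaping moderation data into supervised fine-tuning pairs.
# The label set and prompt wording are illustrative only.
def to_sft_example(text: str, label: str) -> dict:
    return {
        "prompt": f"Classify the following content as 'allowed' or 'violating':\n{text}",
        "completion": label,
    }

dataset = [to_sft_example("spam spam spam", "violating"),
           to_sft_example("see you tomorrow!", "allowed")]
```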
arXiv Detail & Related papers (2023-10-05T09:09:44Z)
- Switchable Representation Learning Framework with Self-compatibility [50.48336074436792]
We propose a Switchable representation learning Framework with Self-Compatibility (SFSC).
SFSC generates a series of compatible sub-models with different capacities through one training process.
SFSC achieves state-of-the-art performance on the evaluated datasets.
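One way to picture compatible sub-models of different capacities is a shared weight matrix whose smaller variants use a column slice while keeping the output dimension fixed, so every variant embeds into the same space. This is a rough stand-in for the idea, not SFSC's actual construction.

```python
# Sketch of "switchable" sub-models sharing one weight matrix: smaller
# capacities use a slice of the inputs, but the output dimension is fixed,
# so all variants map into the same embedding space.
import torch  # assumes PyTorch is installed

class SwitchableEncoder(torch.nn.Module):
    def __init__(self, in_dim: int = 128, out_dim: int = 64):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)

    def forward(self, x, capacity: float = 1.0):
        k = int(self.weight.shape[1] * capacity)  # use only the first k inputs
        return x[:, :k] @ self.weight[:, :k].T    # output dim stays fixed

enc = SwitchableEncoder()
x = torch.randn(2, 128)
full, small = enc(x), enc(x, capacity=0.5)  # both live in the same 64-d space
```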
arXiv Detail & Related papers (2022-06-16T16:46:32Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
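Aggregating Gaussian (Laplace-approximated) client posteriors on the server follows the standard product-of-Gaussians rule: precisions add, and means are precision-weighted. A per-parameter sketch of that rule, not the paper's full algorithm:

```python
# Sketch of server-side aggregation of client Gaussian posteriors:
# precisions add, and the global mean is the precision-weighted average.
# This is the textbook product-of-Gaussians rule, shown per-parameter.
import numpy as np

def aggregate(means, precisions):
    """means, precisions: lists of per-client parameter vectors."""
    total_prec = sum(precisions)
    global_mean = sum(p * m for p, m in zip(precisions, means)) / total_prec
    return global_mean, total_prec

means = [np.array([1.0, 2.0]), np.array([3.0, 2.5])]
precs = [np.array([4.0, 1.0]), np.array([1.0, 1.0])]
mu, prec = aggregate(means, precs)  # mu is pulled toward high-precision clients
```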
arXiv Detail & Related papers (2021-02-03T08:36:58Z)