EATXT: A textual concrete syntax for EAST-ADL
- URL: http://arxiv.org/abs/2407.09895v1
- Date: Sat, 13 Jul 2024 14:05:21 GMT
- Title: EATXT: A textual concrete syntax for EAST-ADL
- Authors: Weixing Zhang, Jörg Holtmann, Daniel Strüber, Jan-Philipp Steghöfer,
- Abstract summary: This paper introduces EATXT, a textual concrete syntax and editor for automotive architecture modeling with EAST-ADL.
The EATXT editor is based on Xtext and provides basic and advanced features, such as improved content-assist and serialization.
We present the editor features and architecture, the implementation approach, and previous use of EATXT in research.
- Score: 5.34855193340848
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Blended modeling is an approach that enables users to interact with a model via multiple notations. In this context, there is a growing need for open-source industry-grade exemplars of languages with available language engineering artifacts, in particular, editors and notations for supporting the creation of models based on a single metamodel in different representations (e.g., textual, graphical, and tabular ones). These exemplars can support the development of advanced solutions to address the practical challenges posed by blended modeling requirements. As one such exemplar, this paper introduces EATXT, a textual concrete syntax for automotive architecture modeling with EAST-ADL, developed in cooperation with an industry partner in the automotive domain. The EATXT editor is based on Xtext and provides basic and advanced features, such as an improved content-assist and serialization specifically addressing blended modeling requirements. We present the editor features and architecture, the implementation approach, and previous use of EATXT in research. The EATXT editor is publicly available, rendering it a valuable resource for language developers.
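To give a concrete impression of what such a textual concrete syntax can look like, the following is a minimal, hypothetical Xtext grammar sketch in the spirit of EATXT. The rule and keyword names (SystemModel, FunctionType, FunctionPort) are simplified illustrations of EAST-ADL concepts, not excerpts from the actual EATXT grammar, which is derived from the complete EAST-ADL metamodel.

    // Hypothetical, simplified grammar sketch; not the actual EATXT grammar.
    grammar org.example.eatxt.EatxtSketch with org.eclipse.xtext.common.Terminals

    generate eatxtSketch "http://www.example.org/eatxt/sketch"

    // Top-level rule: a named model containing function type definitions.
    SystemModel:
        'systemModel' name=ID '{'
            functionTypes+=FunctionType*
        '}';

    // Simplified stand-in for EAST-ADL's FunctionType, owning typed ports.
    FunctionType:
        'functionType' name=ID '{'
            ports+=FunctionPort*
        '}';

    // A directed, typed port on a function type.
    FunctionPort:
        direction=('in' | 'out') 'port' name=ID ':' type=ID;

From a grammar like this, Xtext generates a parser, a serializer, and an Eclipse-based editor with default content assist; per the abstract, the EATXT editor customizes such generated defaults, notably the content-assist and serialization facilities, to address blended modeling requirements.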
Related papers
- Raw Text is All you Need: Knowledge-intensive Multi-turn Instruction Tuning for Large Language Model [25.459787361454353]
We present a novel framework named R2S that leverages the CoD-Chain of Dialogue logic to guide large language models (LLMs) in generating knowledge-intensive multi-turn dialogues for instruction tuning.
By integrating raw documents from both open-source datasets and domain-specific web-crawled documents into a benchmark K-BENCH, we cover diverse areas such as Wikipedia (English), Science (Chinese), and Artifacts (Chinese).
arXiv Detail & Related papers (2024-07-03T12:04:10Z)
- MetaDesigner: Advancing Artistic Typography through AI-Driven, User-Centric, and Multilingual WordArt Synthesis [65.78359025027457]
MetaDesigner revolutionizes artistic typography by leveraging the strengths of Large Language Models (LLMs) to drive a design paradigm centered around user engagement.
A comprehensive feedback mechanism harnesses insights from multimodal models and user evaluations to refine and enhance the design process iteratively.
Empirical validations highlight MetaDesigner's capability to effectively serve diverse WordArt applications, consistently producing aesthetically appealing and context-sensitive results.
arXiv Detail & Related papers (2024-06-28T11:58:26Z)
- AnyTrans: Translate AnyText in the Image with Large Scale Models [88.5887934499388]
This paper introduces AnyTrans, an all-encompassing framework for the Translate AnyText in the Image (TATI) task.
Our framework incorporates contextual cues from both textual and visual elements during translation.
We have meticulously compiled a test dataset called MTIT6, which consists of multilingual text image translation data from six language pairs.
arXiv Detail & Related papers (2024-06-17T11:37:48Z)
- Unitxt: Flexible, Shareable and Reusable Data Preparation and Evaluation for Generative AI [15.220987187105607]
Unitxt is an innovative library for customizable textual data preparation and evaluation tailored to generative language models.
Unitxt integrates with common libraries like HuggingFace and LM-eval-harness, enabling easy customization and sharing between practitioners.
Beyond being a tool, Unitxt is a community-driven platform, empowering users to build, share, and advance their pipelines.
arXiv Detail & Related papers (2024-01-25T08:57:33Z)
- Technical Report: Unresolved Challenges and Potential Features in EATXT [0.0]
This document is a technical report that describes potential advanced features that could be added to EATXT.
The purpose of this report is to share our understanding of the relevant technical challenges and to assist potentially interested peers.
arXiv Detail & Related papers (2023-12-15T22:45:17Z)
- Towards Automated Support for the Co-Evolution of Meta-Models and Grammars [0.0]
We focus on a model-driven engineering (MDE) approach based on meta-models to develop textual languages.
In this thesis, we propose an approach that can support the co-evolution of meta-models and grammars.
arXiv Detail & Related papers (2023-12-10T23:34:07Z)
- TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering [118.30923824681642]
TextDiffuser-2 aims to unleash the power of language models for text rendering.
We utilize the language model within the diffusion model to encode the position and texts at the line level.
We conduct extensive experiments and incorporate user studies involving human participants as well as GPT-4V.
arXiv Detail & Related papers (2023-11-28T04:02:40Z)
- Conceptual Model Interpreter for Large Language Models [0.0]
This paper applies code generation and interpretation to conceptual models.
The concept and prototype of a conceptual model interpreter is explored.
The results indicate the possibility of modeling iteratively in a conversational fashion.
arXiv Detail & Related papers (2023-11-11T09:41:37Z)
- TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild [102.93338424976959]
We introduce TextBind, an almost annotation-free framework for empowering large language models with multi-turn interleaved multimodal instruction-following capabilities.
Our approach requires only image-caption pairs and generates multi-turn multimodal instruction-response conversations from a language model.
To accommodate interleaved image-text inputs and outputs, we devise MIM, a language model-centric architecture that seamlessly integrates image encoder and decoder models.
arXiv Detail & Related papers (2023-09-14T15:34:01Z)
- Generating Images with Multimodal Language Models [78.6660334861137]
We propose a method to fuse frozen text-only large language models with pre-trained image encoder and decoder models.
Our model demonstrates a wide suite of multimodal capabilities: image retrieval, novel image generation, and multimodal dialogue.
arXiv Detail & Related papers (2023-05-26T19:22:03Z)
- VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs [103.99315770490163]
We present a framework for text generation from multimodal inputs consisting of video plus text, speech, or audio.
Experiments demonstrate that our approach based on a single architecture outperforms the state-of-the-art on three video-based text-generation tasks.
arXiv Detail & Related papers (2021-01-28T15:22:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.