Towards Automated Support for the Co-Evolution of Meta-Models and
Grammars
- URL: http://arxiv.org/abs/2312.07582v1
- Date: Sun, 10 Dec 2023 23:34:07 GMT
- Title: Towards Automated Support for the Co-Evolution of Meta-Models and
Grammars
- Authors: Weixing Zhang
- Abstract summary: We focus on a model-driven engineering (MDE) approach based on meta-models to develop textual languages.
In this thesis, we propose an approach that can support the co-evolution of meta-models and grammars.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Blended modeling is an emerging paradigm involving seamless interaction
between multiple notations for the same underlying modeling language. We focus
on a model-driven engineering (MDE) approach based on meta-models to develop
textual languages to improve the blended modeling capabilities of modeling
tools. In this thesis, we propose an approach that can support the co-evolution
of meta-models and grammars as language engineers develop textual languages in
a meta-model-based MDE setting. First, we comprehensively report on the
challenges and limitations of modeling tools that support blended modeling, as
well as opportunities to improve them. Second, we demonstrate how language
engineers can extend Xtext's generator capabilities according to their needs.
Third, we propose a semi-automatic method to transform a language with a
generated grammar into a Python-style language. Finally, we provide a solution
(i.e., GrammarOptimizer) that can support rapid prototyping of languages in
different styles and the co-evolution of meta-models and grammars of evolving
languages.
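The thesis's GrammarOptimizer applies optimization rules to a grammar generated from a meta-model, e.g. to move it toward a Python-style syntax. The details of its rule language are not given here, so the following is only a minimal sketch, assuming a hypothetical rule format of regex pattern/replacement pairs applied to the grammar text; the rule shown (dropping braces in favor of a colon) is an invented illustration, not one of the tool's actual rules.

```python
import re

def optimize_grammar(grammar: str, rules: list) -> str:
    """Apply each (pattern, replacement) optimization rule in order
    to the text of a generated grammar."""
    for pattern, replacement in rules:
        grammar = re.sub(pattern, replacement, grammar)
    return grammar

# Hypothetical rules nudging a generated, brace-based grammar
# toward a lighter, Python-style syntax.
rules = [
    (r"\s*\{", ":"),   # replace an opening brace with a colon
    (r"\s*\}", ""),    # drop the matching closing brace
]

generated = "StateMachine { states+=State* }"
print(optimize_grammar(generated, rules))  # → StateMachine: states+=State*
```

When the meta-model evolves and the grammar is regenerated, re-running the same rule set over the new grammar is what makes rapid re-prototyping of the concrete syntax cheap.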
Related papers
- The Sociolinguistic Foundations of Language Modeling [34.02231580843069]
We argue that large language models are inherently models of varieties of language.
We discuss how this perspective can help address five basic challenges in language modeling.
arXiv Detail & Related papers (2024-07-12T13:12:55Z) - A Framework to Model ML Engineering Processes [1.9744907811058787]
Development of Machine Learning (ML) based systems is complex and requires multidisciplinary teams with diverse skill sets.
Current process modeling languages are not suitable for describing the development of such systems.
We introduce a framework for modeling ML-based software development processes, built around a domain-specific language.
arXiv Detail & Related papers (2024-04-29T09:17:36Z) - CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models [59.91221728187576]
This paper introduces the CMU Linguistic Annotation Backend (CMULAB), an open-source framework that simplifies model deployment and continuous human-in-the-loop fine-tuning of NLP models.
CMULAB enables users to leverage the power of multilingual models to quickly adapt and extend existing tools for speech recognition, OCR, translation, and syntactic analysis to new languages.
arXiv Detail & Related papers (2024-04-03T02:21:46Z) - Lemur: Harmonizing Natural Language and Code for Language Agents [105.43564788499901]
We introduce Lemur and Lemur-Chat, open-source language models optimized for both natural language and coding capabilities.
Our models achieve state-of-the-art averaged performance across diverse text and coding benchmarks.
The harmonization between natural and programming languages enables Lemur-Chat to significantly narrow the gap with proprietary models on agent abilities.
arXiv Detail & Related papers (2023-10-10T17:57:45Z) - TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild [102.93338424976959]
We introduce TextBind, an almost annotation-free framework for empowering large language models with multi-turn interleaved instruction-following capabilities.
Our approach requires only image-caption pairs and generates multi-turn multimodal instruction-response conversations from a language model.
To accommodate interleaved image-text inputs and outputs, we devise MIM, a language model-centric architecture that seamlessly integrates image encoder and decoder models.
arXiv Detail & Related papers (2023-09-14T15:34:01Z) - A Rapid Prototyping Language Workbench for Textual DSLs based on Xtext:
Vision and Progress [0.8534278963977691]
We present our vision for a language workbench that integrates GrammarOptimizer's grammar optimization rules to support rapid prototyping and evolution of languages.
It provides a visual configuration of optimization rules and a real-time preview of the effects of grammar optimization.
Our paper discusses the potential and applications of this language workbench, as well as how it fills the gaps in existing language workbenches.
arXiv Detail & Related papers (2023-09-08T14:17:00Z) - PaLM-E: An Embodied Multimodal Language Model [101.29116156731762]
We propose embodied language models to incorporate real-world continuous sensor modalities into language models.
We train these encodings end-to-end, in conjunction with a pre-trained large language model, for multiple embodied tasks.
Our largest model, PaLM-E-562B with 562B parameters, is a visual-language generalist with state-of-the-art performance on OK-VQA.
arXiv Detail & Related papers (2023-03-06T18:58:06Z) - Language Model Cascades [72.18809575261498]
Repeated test-time interactions with a single model, or the composition of multiple models together, further expand capabilities.
Cases with control flow and dynamic structure require techniques from probabilistic programming.
We formalize several existing techniques from this perspective, including scratchpads / chain of thought, verifiers, STaR, selection-inference, and tool use.
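The composition the cascades paper describes can be illustrated with a propose-then-verify loop: a scratchpad (chain-of-thought) stage produces reasoning plus an answer, and a verifier stage accepts or rejects it. The two stage functions below are invented stand-ins, not the paper's models; in practice each would be a call to a real language model.

```python
def scratchpad_model(question: str) -> tuple:
    # Hypothetical stub standing in for a chain-of-thought LM call:
    # returns (reasoning, answer).
    reasoning = f"Think step by step about: {question}"
    answer = "42"
    return reasoning, answer

def verifier_model(question: str, reasoning: str, answer: str) -> bool:
    # Hypothetical stub standing in for a verifier LM call:
    # here it simply accepts any non-empty answer.
    return bool(answer)

def cascade(question: str, max_attempts: int = 3):
    """Compose the two stages: resample until the verifier accepts."""
    for _ in range(max_attempts):
        reasoning, answer = scratchpad_model(question)
        if verifier_model(question, reasoning, answer):
            return answer
    return None  # no accepted sample within the budget

print(cascade("What is 6 * 7?"))  # → 42
```

The control flow (resampling until acceptance) is exactly the kind of dynamic structure the paper treats with probabilistic-programming techniques.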
arXiv Detail & Related papers (2022-07-21T07:35:18Z) - Language Models are General-Purpose Interfaces [109.45478241369655]
We propose to use language models as a general-purpose interface to various foundation models.
A collection of pretrained encoders perceives diverse modalities (such as vision and language).
We propose a semi-causal language modeling objective to jointly pretrain the interface and the modular encoders.
arXiv Detail & Related papers (2022-06-13T17:34:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.