A Rapid Prototyping Language Workbench for Textual DSLs based on Xtext:
Vision and Progress
- URL: http://arxiv.org/abs/2309.04347v1
- Date: Fri, 8 Sep 2023 14:17:00 GMT
- Title: A Rapid Prototyping Language Workbench for Textual DSLs based on Xtext:
Vision and Progress
- Authors: Weixing Zhang, Jan-Philipp Steghöfer, Regina Hebig, Daniel Strüber
- Abstract summary: We present our vision for a language workbench that integrates GrammarOptimizer's grammar optimization rules to support rapid prototyping and evolution of languages.
It provides a visual configuration of optimization rules and a real-time preview of the effects of grammar optimization.
Our paper discusses the potential and applications of this language workbench, as well as how it fills the gaps in existing language workbenches.
- Score: 0.8534278963977691
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Metamodel-based DSL development in language workbenches like Xtext allows
language engineers to focus more on metamodels and domain concepts rather than
grammar details. However, the grammar generated from metamodels often requires
manual modification, which can be tedious and time-consuming. Especially when
it comes to rapid prototyping and language evolution, the grammar is
generated repeatedly, which means that language engineers need to repeat such
manual modifications back and forth. Previous work introduced GrammarOptimizer,
which automatically improves the generated grammar using optimization rules.
However, the optimization rules need to be configured manually, which lacks
user-friendliness and convenience. In this paper, we present our vision for and
current progress towards a language workbench that integrates
GrammarOptimizer's grammar optimization rules to support rapid prototyping and
evolution of metamodel-based languages. It provides a visual configuration of
optimization rules and a real-time preview of the effects of grammar
optimization to address the limitations of GrammarOptimizer. Furthermore, it
supports the inference of a grammar based on examples from model instances and
offers a selection of language styles. These features aim to enhance the
automation level of metamodel-based DSL development with Xtext and assist
language engineers in iterative development and rapid prototyping. Our paper
discusses the potential and applications of this language workbench, as well as
how it fills the gaps in existing language workbenches.
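To make the idea of grammar optimization rules concrete, here is a minimal, hypothetical sketch in Java: a single rule that strips a redundant keyword from a rule of a generated Xtext-style grammar. The class name, method, and grammar snippet are illustrative assumptions and do not reflect GrammarOptimizer's actual API or rule catalog.

```java
// Conceptual sketch of one grammar optimization rule, assuming a hypothetical
// "remove redundant keyword" transformation on a generated Xtext-style rule.
// This is NOT GrammarOptimizer's real API; names and snippets are illustrative.
public class RemoveRedundantKeywordSketch {

    // Removes the quoted keyword token from the body of a grammar rule,
    // yielding a more concise concrete syntax.
    static String applyRemoveKeyword(String grammarRule, String keyword) {
        return grammarRule.replace("'" + keyword + "' ", "");
    }

    public static void main(String[] args) {
        // A grammar rule as a metamodel-to-grammar generator might emit it.
        String generated =
            "Entity returns Entity: 'Entity' name=ID '{' features+=Feature* '}';";

        // The same rule after the (hypothetical) optimization rule is applied.
        String optimized = applyRemoveKeyword(generated, "Entity");

        System.out.println("before: " + generated);
        System.out.println("after:  " + optimized);
    }
}
```

In the envisioned workbench, such rules would be selected and parameterized visually, with the "after" view shown as a real-time preview instead of being printed to the console.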
Related papers
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks based on their control flow and data flow to fill the gap between programming languages and natural language.
Various experiments and ablations are conducted on four datasets covering both the C++ and Python languages to validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
arXiv Detail & Related papers (2024-05-03T02:48:55Z) - Supporting Meta-model-based Language Evolution and Rapid Prototyping
with Automated Grammar Optimization [0.7812210699650152]
We present GrammarOptimizer, an approach for optimizing generated grammars in the context of meta-model-based language evolution.
Grammar optimization rules were extracted from a comparison of generated and existing, expert-created grammars.
arXiv Detail & Related papers (2024-01-30T18:03:45Z) - Accelerating Multilingual Language Model for Excessively Tokenized Languages [3.5570874721859016]
Tokenizers in large language models (LLMs) often fragment text into character- or Unicode-level tokens in non-Roman alphabetic languages.
We introduce a simple yet effective framework to accelerate text generation in such languages.
arXiv Detail & Related papers (2024-01-19T12:26:57Z) - Towards Automated Support for the Co-Evolution of Meta-Models and
Grammars [0.0]
We focus on a model-driven engineering (MDE) approach based on meta-models to develop textual languages.
In this thesis, we propose an approach that can support the co-evolution of meta-models and grammars.
arXiv Detail & Related papers (2023-12-10T23:34:07Z) - Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization [103.70896967077294]
This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model.
Our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model.
Experimental results on various tasks demonstrate that the language agents improve over time.
arXiv Detail & Related papers (2023-08-04T06:14:23Z) - Bootstrapping Vision-Language Learning with Decoupled Language
Pre-training [46.570154746311935]
We present a novel methodology aimed at optimizing the application of frozen large language models (LLMs) for resource-intensive vision-language pre-training.
Our approach diverges by concentrating on the language component, specifically identifying the optimal prompts to align with visual features.
Our framework is modality-agnostic and flexible in terms of architectural design, as validated by its successful application in a video learning task.
arXiv Detail & Related papers (2023-07-13T21:08:15Z) - Soft Language Clustering for Multilingual Model Pre-training [57.18058739931463]
We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
Our XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
arXiv Detail & Related papers (2023-06-13T08:08:08Z) - Vision-Language Pre-Training for Boosting Scene Text Detectors [57.08046351495244]
We specifically adapt vision-language joint learning for scene text detection.
We propose to learn contextualized, joint representations through vision-language pre-training.
The pre-trained model is able to produce more informative representations with richer semantics.
arXiv Detail & Related papers (2022-04-29T03:53:54Z) - Improving Text Auto-Completion with Next Phrase Prediction [9.385387026783103]
Our strategy includes a novel self-supervised training objective called Next Phrase Prediction (NPP).
Preliminary experiments have shown that our approach is able to outperform the baselines in auto-completion for email and academic writing domains.
arXiv Detail & Related papers (2021-09-15T04:26:15Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z) - Grounded Compositional Outputs for Adaptive Language Modeling [59.02706635250856]
A language model's vocabulary, typically selected before training and permanently fixed later, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary.
arXiv Detail & Related papers (2020-09-24T07:21:14Z)