Establishing tool support for a concept DSL
- URL: http://arxiv.org/abs/2503.05849v1
- Date: Fri, 07 Mar 2025 09:18:31 GMT
- Title: Establishing tool support for a concept DSL
- Authors: Nikolaj Kühne Jakobsen
- Abstract summary: This thesis describes Conceptual, a DSL for modeling the behavior of software systems using self-contained and highly reusable units of functionality known as concepts.
A proposed strategy for mapping its constructs into the Alloy modeling language is implemented with a simple compiler, allowing developers to access and utilize Alloy's existing analysis tools for program reasoning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The quality of software products tends to correlate with the quality of the abstractions adopted early in the design process. Acknowledging this tendency has led to the development of various tools and methodologies for modeling systems thoroughly before implementing them. However, creating effective abstract models of domain problems is difficult, especially if the models are also expected to exhibit qualities such as intuitiveness, being seamlessly integrable with other models, or being easily translatable into code. This thesis describes Conceptual, a DSL for modeling the behavior of software systems using self-contained and highly reusable units of functionality known as concepts. The language's syntax and semantics are formalized based on previous work. Additionally, the thesis proposes a strategy for mapping language constructs from Conceptual into the Alloy modeling language. The suggested strategy is then implemented with a simple compiler, allowing developers to access and utilize Alloy's existing analysis tools for program reasoning. The utility and expressiveness of Conceptual are demonstrated qualitatively through several practical case studies. Using the implemented compiler, a few erroneous specifications from the literature are identified. Moreover, the thesis establishes preliminary tool support in the Visual Studio Code IDE.
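To make the translation idea concrete, the sketch below is a minimal, hypothetical Python illustration of how a compiler might map a concept's state declarations onto an Alloy signature. The concept syntax, the `compile_to_alloy` helper, and the single mapping rule are all invented for illustration; the thesis defines its own grammar and a more complete translation strategy.

```python
import re

# A toy concept declaration in an invented, Conceptual-like surface syntax.
# The thesis's real grammar is richer; this is illustration only.
CONCEPT_SRC = """
concept reservation
  state
    available: set Resource
    reserved: User -> set Resource
"""

def compile_to_alloy(src: str) -> str:
    """Map the toy declaration onto a single Alloy signature (sketch only)."""
    name = re.search(r"concept\s+(\w+)", src).group(1)
    fields = re.findall(r"(\w+):\s*(.+)", src)  # one field per state component
    body = ",\n".join(f"  {field}: {typ.strip()}" for field, typ in fields)
    return f"sig {name.capitalize()} {{\n{body}\n}}"

print(compile_to_alloy(CONCEPT_SRC))
# sig Reservation {
#   available: set Resource,
#   reserved: User -> set Resource
# }
```

Once in Alloy form, a specification like this can be handed to Alloy's analyzer for bounded checking, which is the kind of reuse of existing analysis tooling the abstract describes.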
Related papers
- Language hooks: a modular framework for augmenting LLM reasoning that decouples tool usage from the model and its prompt [7.096646842716599]
We introduce language hooks, a novel framework for augmenting language models with new capabilities.
We benchmark our method against state-of-the-art baselines and find that it outperforms task-aware approaches.
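The summary above does not spell out the hooks API, so the following Python sketch is only a hypothetical illustration of the decoupling idea: tool usage is attached to the generation loop as callbacks instead of being encoded in the model or its prompt. The `CALC(...)` trigger, `calculator_hook`, and `generate_with_hooks` are invented names, not the paper's interface.

```python
import re
from typing import Callable, Optional

# A hook inspects generated text and may return a tool result to splice back in.
Hook = Callable[[str], Optional[str]]

def calculator_hook(text: str) -> Optional[str]:
    """Fire when the text ends with a span like CALC(6*7) and return the result."""
    match = re.search(r"CALC\(([\d+\-*/(). ]+)\)$", text)
    return str(eval(match.group(1))) if match else None  # toy eval; unsafe in practice

def generate_with_hooks(step: Callable[[str], str], prompt: str,
                        hooks: list[Hook], max_steps: int = 8) -> str:
    """A generation loop in which hooks, not the prompt, own tool usage."""
    text = prompt
    for _ in range(max_steps):
        text += step(text)                # one model decoding step (stubbed here)
        for hook in hooks:
            result = hook(text)
            if result is not None:
                text += f" => {result}"   # tool output re-enters the context
    return text

# Stub "model" that always requests a calculation:
print(generate_with_hooks(lambda t: " CALC(6*7)", "", [calculator_hook], max_steps=1))
# -> " CALC(6*7) => 42"
```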
arXiv Detail & Related papers (2024-12-08T15:16:17Z)
- Mining Frequent Structures in Conceptual Models [2.625701175074974]
We propose a general approach to the problem of discovering frequent structures in conceptual models.
We implement our approach by focusing on two widely-used conceptual modeling languages.
The resulting tool can be used to identify both effective and ineffective modeling practices.
arXiv Detail & Related papers (2024-06-11T10:24:02Z)
- A Framework to Model ML Engineering Processes [1.9744907811058787]
Development of Machine Learning (ML) based systems is complex and requires multidisciplinary teams with diverse skill sets.
Current process modeling languages are not suitable for describing the development of such systems.
We introduce a framework for modeling ML-based software development processes, built around a domain-specific language.
arXiv Detail & Related papers (2024-04-29T09:17:36Z)
- CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models [59.91221728187576]
This paper introduces the CMU Linguistic Annotation Backend (CMULAB), an open-source framework that simplifies model deployment and continuous human-in-the-loop fine-tuning of NLP models.
CMULAB enables users to leverage the power of multilingual models to quickly adapt and extend existing tools for speech recognition, OCR, translation, and syntactic analysis to new languages.
arXiv Detail & Related papers (2024-04-03T02:21:46Z)
- Right or Wrong -- Understanding How Novice Users Write Software Models [0.6445605125467574]
This paper presents an empirical study of over 97,000 models written by novice users trying to learn Alloy.
We investigate how users write both correct and incorrect models in order to produce a comprehensive benchmark for future use.
arXiv Detail & Related papers (2024-02-09T18:56:57Z)
- RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models [57.12888828853409]
RAVEN is a model that combines retrieval-augmented masked language modeling and prefix language modeling.
Fusion-in-Context Learning enables the model to leverage more in-context examples without requiring additional training.
Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning.
arXiv Detail & Related papers (2023-08-15T17:59:18Z)
- Benchmarking Language Models for Code Syntax Understanding [79.11525961219591]
Pre-trained language models have demonstrated impressive performance in both natural language processing and program understanding.
In this work, we perform the first thorough benchmarking of the state-of-the-art pre-trained models for identifying the syntactic structures of programs.
Our findings point out key limitations of existing pre-training methods for programming languages, and suggest the importance of modeling code syntactic structures.
arXiv Detail & Related papers (2022-10-26T04:47:18Z)
- Language Models are General-Purpose Interfaces [109.45478241369655]
We propose to use language models as a general-purpose interface to various foundation models.
A collection of pretrained encoders perceives diverse modalities, such as vision and language.
We propose a semi-causal language modeling objective to jointly pretrain the interface and the modular encoders.
arXiv Detail & Related papers (2022-06-13T17:34:22Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
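As a generic reading of that recipe (not the paper's code), the hypothetical PyTorch sketch below concatenates goal and observation embeddings into one sequence, runs it through a backbone initialized from pretrained LM weights, and decodes action logits. The model choice (`gpt2`), the 768-dimensional embeddings, and the `LMInitializedPolicy` name are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

class LMInitializedPolicy(nn.Module):
    """Policy whose sequence backbone starts from pretrained LM weights."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")  # pretrained initialization
        hidden = self.backbone.config.n_embd               # 768 for gpt2
        self.action_head = nn.Linear(hidden, n_actions)

    def forward(self, goal_emb: torch.Tensor, obs_emb: torch.Tensor) -> torch.Tensor:
        # Goals and observations enter as one concatenated embedding sequence.
        seq = torch.cat([goal_emb, obs_emb], dim=1)        # (batch, goal_len + obs_len, 768)
        hidden_states = self.backbone(inputs_embeds=seq).last_hidden_state
        return self.action_head(hidden_states[:, -1])      # action logits from the last step

policy = LMInitializedPolicy(n_actions=8)
logits = policy(torch.randn(1, 4, 768), torch.randn(1, 6, 768))  # -> shape (1, 8)
```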
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Designing Machine Learning Toolboxes: Concepts, Principles and Patterns [0.0]
We provide an overview of key patterns in the design of AI modeling toolboxes.
Our analysis can not only explain the design of existing toolboxes, but also guide the development of new ones.
arXiv Detail & Related papers (2021-01-13T08:55:15Z)