Code Swarm: A Code Generation Tool Based on the Automatic Derivation of
Transformation Rule Set
- URL: http://arxiv.org/abs/2312.01524v1
- Date: Sun, 3 Dec 2023 22:47:42 GMT
- Title: Code Swarm: A Code Generation Tool Based on the Automatic Derivation of
Transformation Rule Set
- Authors: Hina Mahmood, Atif Aftab Jilani, Abdul Rauf
- Abstract summary: We introduce a novel tool named Code Swarm, abbreviated as CodS, that automatically generates implementation code from system design models.
Our results indicate that the code generated by CodS is correct and consistent with the input design models.
- Score: 0.7182245711235297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic generation of software code from system design models has
remained an actively explored research area for the past several years. A number of tools
are currently available to facilitate and automate the task of generating code
from software models. To the best of our knowledge, existing software tools
rely on an explicitly defined transformation rule set to perform the
model-to-code transformation process. In this paper, we introduce a novel tool
named Code Swarm, abbreviated as CodS, that automatically generates
implementation code from system design models by utilizing a swarm-based
approach. Specifically, CodS is capable of generating Java code from the class
and state models of the software system by making use of the previously solved
model-to-code transformation examples. Our tool enables designers to
specify behavioural actions in the input models using the Action Specification
Language (ASL). We use an industrial case study of the Elevator Control System
(ECS) to perform the experimental validation of our tool. Our results indicate
that the code generated by CodS is correct and consistent with the input design
models. CodS performs the process of automatic code generation without taking
an explicit transformation rule set or language metamodel information as
input, which distinguishes it from all the existing automatic code generation
tools.
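The abstract does not include sample output, but as an illustration of the kind of target the tool produces, a state-model transition for the ECS case study might map to Java along these lines. All names here (Elevator, DoorState, requestFloor) are hypothetical placeholders, not actual CodS output, and the guard logic stands in for what an ASL action block would specify.

```java
// Hypothetical sketch of Java code derived from a class model plus a
// state model for an elevator; names and structure are illustrative only.
public class Elevator {
    // States taken from an assumed elevator door state model.
    public enum DoorState { OPEN, CLOSED, MOVING }

    private DoorState state = DoorState.CLOSED;
    private int currentFloor = 0;

    public DoorState getState() { return state; }
    public int getCurrentFloor() { return currentFloor; }

    // Transition for an assumed "requestFloor" event; the guard
    // (doors must be closed) is the kind of behavioural action a
    // designer would express in ASL in the input model.
    public void requestFloor(int floor) {
        if (state == DoorState.CLOSED && floor != currentFloor) {
            state = DoorState.MOVING;
            currentFloor = floor;   // travel abstracted away
            state = DoorState.CLOSED;
        }
    }

    // Transition for an assumed "openDoors" event.
    public void openDoors() {
        if (state == DoorState.CLOSED) {
            state = DoorState.OPEN;
        }
    }

    public static void main(String[] args) {
        Elevator e = new Elevator();
        e.requestFloor(3);
        e.openDoors();
        System.out.println(e.getCurrentFloor() + " " + e.getState());
    }
}
```

The point of the sketch is only that each state in the model becomes an enum constant and each event becomes a method whose body enforces the model's guards; the paper's contribution is deriving such mappings from solved examples rather than from a hand-written rule set.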
Related papers
- Development of an automatic modification system for generated programs using ChatGPT [0.12233362977312943]
OpenAI's ChatGPT excels at natural language processing tasks and can also generate source code.
We developed a system that tests the code generated by ChatGPT, automatically corrects it if it is inappropriate, and presents the appropriate code to the user.
arXiv Detail & Related papers (2024-07-10T08:54:23Z)
- Granite Code Models: A Family of Open Foundation Models for Code Intelligence [37.946802472358996]
Large Language Models (LLMs) trained on code are revolutionizing the software development process.
LLMs are being integrated into software development environments to improve the productivity of human programmers.
We introduce the Granite series of decoder-only code models for code generative tasks.
arXiv Detail & Related papers (2024-05-07T13:50:40Z) - Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach [69.38352966504401]
We investigate the legal and ethical issues of current neural code completion models.
We tailor a membership inference approach (termed CodeMI) that was originally crafted for classification tasks.
We evaluate the effectiveness of this adapted approach across a diverse array of neural code completion models.
arXiv Detail & Related papers (2024-04-22T15:54:53Z) - Synergy of Large Language Model and Model Driven Engineering for Automated Development of Centralized Vehicular Systems [2.887732304499794]
We present a prototype of a tool leveraging the synergy of model-driven engineering (MDE) and Large Language Models (LLMs).
The generated code is evaluated in a simulated environment using the CARLA simulator connected to an example centralized vehicle architecture, in an emergency brake scenario.
arXiv Detail & Related papers (2024-04-08T13:28:11Z) - Automated Code Editing with Search-Generate-Modify [24.96672652375192]
This paper proposes a hybrid approach to better synthesize code edits by leveraging the power of code search, generation, and modification.
SARGAM is a novel tool designed to mimic a real developer's code editing behavior.
arXiv Detail & Related papers (2023-06-10T17:11:21Z) - CodeTF: One-stop Transformer Library for State-of-the-art Code LLM [72.1638273937025]
We present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence.
Our library supports a collection of pretrained Code LLM models and popular code benchmarks.
We hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering.
arXiv Detail & Related papers (2023-05-31T05:24:48Z) - Code Execution with Pre-trained Language Models [88.04688617516827]
Most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures.
We develop a mutation-based data augmentation technique to create a large-scale and realistic Python dataset and task for code execution.
We then present CodeExecutor, a Transformer model that leverages code execution pre-training and curriculum learning to enhance its semantic comprehension.
arXiv Detail & Related papers (2023-05-08T10:00:05Z) - CodeRL: Mastering Code Generation through Pretrained Models and Deep
Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z) - Contrastive Learning for Source Code with Structural and Functional
Properties [66.10710134948478]
We present BOOST, a novel self-supervised model to focus pre-training based on the characteristics of source code.
We employ automated, structure-guided code transformation algorithms that generate functionally equivalent code that looks drastically different from the original one.
We train our model in a way that brings the functionally equivalent code closer and distinct code further through a contrastive learning objective.
arXiv Detail & Related papers (2021-10-08T02:56:43Z) - CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for
Code Understanding and Generation [36.47905744758698]
We present CodeT5, a unified pre-trained encoder-decoder Transformer model that better leverages the code semantics conveyed from the developer-assigned identifiers.
Our model employs a unified framework to seamlessly support both code understanding and generation tasks and allows for multi-task learning.
arXiv Detail & Related papers (2021-09-02T12:21:06Z) - CodeBERT: A Pre-Trained Model for Programming and Natural Languages [117.34242908773061]
CodeBERT is a pre-trained model for programming language (PL) and natural language (NL).
We develop CodeBERT with Transformer-based neural architecture.
We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters.
arXiv Detail & Related papers (2020-02-19T13:09:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.