Dynamic Code Orchestration: Harnessing the Power of Large Language Models for Adaptive Script Execution
- URL: http://arxiv.org/abs/2408.11060v1
- Date: Wed, 7 Aug 2024 17:11:31 GMT
- Title: Dynamic Code Orchestration: Harnessing the Power of Large Language Models for Adaptive Script Execution
- Authors: Justin Del Vecchio, Andrew Perreault, Eliana Furmanek
- Abstract summary: Research examines dynamic code execution of written language directives within the context of a running application.
The research clearly shows how written language directives, backed by a large language model, offer radically new programming and operating system paradigms.
- Score: 0.5735035463793009
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer programming initially required humans to directly translate their goals into machine code. These goals could have easily been expressed as a written (or human) language directive. Computers, however, had no capacity to satisfactorily interpret written language. Large language models provide exactly this capability: automatic generation of computer programs or even assembly code from written language directives. This research examines dynamic code execution of written language directives within the context of a running application. It implements a text editor whose business logic is purely backed by large language model prompts. That is, the program's execution uses prompts and written language directives to dynamically generate application logic at the point in time it is needed. The research clearly shows how written language directives, backed by a large language model, offer radically new programming and operating system paradigms. For example, empowerment of users to directly implement requirements via written language directives, thus supplanting the need for a team of programmers, a release schedule, and the like. Or, new security mechanisms where static executables, always a target for reverse engineering or fuzzing, no longer exist. They are replaced by ephemeral executables that may continually change, be completely removed, and are easily updated.
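To make the execution model concrete, here is a minimal sketch of the kind of prompt-backed business logic the abstract describes. It assumes a generic `llm` completion call; the prompt wording, the `transform` function name, and the editor command set are illustrative assumptions, not the authors' implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder for a large language model completion call."""
    raise NotImplementedError("wire up a real model here")

def run_directive(directive: str, buffer: str) -> str:
    """Generate the editor's business logic at the moment it is needed."""
    prompt = (
        "Write a Python function transform(text: str) -> str that "
        f"performs this editing task: {directive}. Return only code."
    )
    code = llm(prompt)
    namespace: dict = {}
    exec(code, namespace)      # ephemeral logic: exists only for this call
    return namespace["transform"](buffer)

# Usage: every feature is a written language directive, not shipped code.
# new_text = run_directive("uppercase every heading line", text)
```

Because the generated function lives only in a transient namespace, there is no static executable to reverse engineer or fuzz, which is the security property the abstract highlights.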
Related papers
- Assured Automatic Programming via Large Language Models [8.006578501857447]
We aim to discover the programmer's intent while generating code that conforms to the intent, along with a proof of this conformity.
Our objective is to achieve consistency between the program, the specification, and the test by refining our understanding of the user intent.
We demonstrate how the unambiguous intent discovered through our approach increases the percentage of verifiable auto-generated programs.
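The loop below is a hedged sketch of the generate-and-check refinement this summary describes; `llm` and `passes_tests` are assumed stand-ins, and the paper's actual proof-of-conformity machinery is not reproduced.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a code-generating model")

def passes_tests(program: str, tests: str) -> bool:
    raise NotImplementedError("stand-in: run the tests against the program")

def refine(intent: str, max_rounds: int = 3):
    """Iterate until program and tests agree, i.e., the intent is unambiguous."""
    for _ in range(max_rounds):
        program = llm(f"Write a Python function implementing: {intent}")
        tests = llm(f"Write tests capturing this intent: {intent}")
        if passes_tests(program, tests):
            return program, tests      # consistent triple: intent, code, tests
        intent = llm(f"Restate this intent to resolve ambiguity: {intent}")
    return None
```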
arXiv Detail & Related papers (2024-10-24T07:29:15Z) - AIOS Compiler: LLM as Interpreter for Natural Language Programming and Flow Programming of AI Agents [38.580779075892636]
We develop a novel system for Code Representation and Execution (CoRE).
The proposed system unifies natural language programming, pseudo-code programming, and flow programming under the same representation for constructing language agents.
During the execution, we incorporate external memory to minimize redundancy.
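A minimal sketch of the "LLM as interpreter" loop with external memory, under the CoRE framing; the step format and the memory layout here are assumptions, not the paper's specification.

```python
def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for any completion model")

def interpret(steps: list[str]) -> dict[int, str]:
    """Step through a natural-language program, caching each result."""
    memory: dict[int, str] = {}          # external memory across steps
    for i, step in enumerate(steps):
        context = "\n".join(f"step {k}: {v}" for k, v in memory.items())
        memory[i] = llm(f"Memory so far:\n{context}\n\nNow execute: {step}")
    return memory

# program = ["load the CSV", "summarize the 'price' column", "draft a report"]
# results = interpret(program)
```

Carrying prior results in the memory dictionary is what lets the interpreter avoid regenerating earlier outputs, the redundancy the summary mentions.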
arXiv Detail & Related papers (2024-05-11T04:29:03Z) - CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks based on their control flow and data flow to fill the gap between programming languages and natural language.
Various experiments and ablations are done on four datasets covering both C++ and Python to validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
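As a rough illustration of the retrieval side only, the sketch below builds a coarse data-flow view of code blocks with Python's ast module and retrieves the nearest block by edge overlap; CodeGRAG's actual meta-graph prompts and GNN expert are not reproduced.

```python
import ast

def flow_edges(code: str) -> set[tuple[str, str]]:
    """Very coarse data-flow edges: assigned name -> names it reads."""
    edges = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            for used in ast.walk(node.value):
                if isinstance(used, ast.Name):
                    edges.add((node.targets[0].id, used.id))
    return edges

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus block whose flow view best overlaps the query's."""
    q = flow_edges(query)
    return max(corpus, key=lambda c: len(q & flow_edges(c)) /
                                     (len(q | flow_edges(c)) or 1))
```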
arXiv Detail & Related papers (2024-05-03T02:48:55Z) - NExT: Teaching Large Language Models to Reason about Code Execution [50.93581376646064]
Large language models (LLMs) of code are typically trained on the surface textual form of programs.
We propose NExT, a method to teach LLMs to inspect the execution traces of programs and reason about their run-time behavior.
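The snippet below shows one way to collect the kind of execution trace NExT reasons over, using Python's sys.settrace; the exact trace format fed to the model is an assumption here.

```python
import sys

def trace_lines(fn, *args):
    """Record (line number, local variables) for each executed line of fn."""
    events = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            events.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return events

def buggy_abs(x):
    y = -x if x > 0 else x   # bug: negates positives
    return y

# Feeding these (line, locals) pairs into a prompt lets the model inspect
# run-time behavior instead of only the surface text of the program.
print(trace_lines(buggy_abs, 3))
```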
arXiv Detail & Related papers (2024-04-23T01:46:32Z) - Code-Switched Language Identification is Harder Than You Think [69.63439391717691]
Code switching is a common phenomenon in written and spoken communication.
We examine the task as applied to building code-switched (CS) corpora.
We make the task more realistic by scaling it to more languages.
We reformulate the task as a sentence-level multi-label tagging problem to make it more tractable.
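A toy illustration of the sentence-level multi-label reformulation, with an assumed four-language inventory; real inventories and data would be far larger.

```python
LANGS = ["en", "es", "hi", "zh"]                 # assumed inventory

def encode(langs_present: set[str]) -> list[int]:
    """One binary indicator per language, for a whole sentence."""
    return [int(lang in langs_present) for lang in LANGS]

print(encode({"en", "es"}))   # "I love chilaquiles" -> [1, 1, 0, 0]
```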
arXiv Detail & Related papers (2024-02-02T15:38:47Z) - Context-Sensitive Abstract Interpretation of Dynamic Languages [0.0]
There is a vast gap in the quality of IDE tooling between static languages like Java and dynamic languages like Python or JavaScript.
Modern frameworks and libraries in these languages heavily use their dynamic capabilities to achieve the best ergonomics and readability.
We propose an algorithm that can bridge this gap by statically analyzing dynamic metaprogramming and runtime behavior in programs.
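For a concrete instance of the problem, the example below uses dynamic metaprogramming that source-level tooling cannot see; recovering such run-time-created attributes is the kind of gap the proposed analysis targets.

```python
class Settings:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)     # attributes exist only at run time

s = Settings(debug=True, port=8080)
print(s.port)   # valid, but a naive IDE flags `port` as unknown
```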
arXiv Detail & Related papers (2024-01-31T17:45:05Z) - Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning [84.12154024070024]
We propose natural language embedded programs (NLEP) as a unifying framework for addressing math/symbolic reasoning, natural language understanding, and instruction following tasks.
Our approach prompts a language model to generate full Python programs that define functions over data structures which contain natural language representations of structured knowledge.
A Python interpreter then executes the generated code and prints the output.
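A toy program in the NLEP style: natural-language knowledge lives in ordinary data structures, a function computes over them, and a plain Python interpreter prints the answer. The content is illustrative, not actual model output.

```python
facts = {
    "Mercury": "closest planet to the Sun, no moons",
    "Earth": "third planet, one moon",
    "Mars": "fourth planet, two moons (Phobos and Deimos)",
}

def answer(question: str) -> str:
    # symbolic step: select the relevant entry; language step: report it
    planet = next(p for p in facts if p.lower() in question.lower())
    return f"{planet}: {facts[planet]}"

print(answer("How many moons does Mars have?"))
```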
arXiv Detail & Related papers (2023-09-19T17:54:21Z) - Natural Language to Code Translation with Execution [82.52142893010563]
We introduce execution result-based minimum Bayes risk decoding for program selection.
We show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks.
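A hedged sketch of execution-result minimum Bayes risk selection: run each sampled candidate on shared inputs and keep the one whose outputs agree with the most other candidates. The hard-coded candidates stand in for model samples.

```python
def run(src: str, x):
    """Execute a candidate program and apply its function f to input x."""
    ns: dict = {}
    try:
        exec(src, ns)
        return ns["f"](x)
    except Exception:
        return None

candidates = [
    "def f(x): return x * 2",
    "def f(x): return x + x",          # agrees with the first
    "def f(x): return x ** 2",         # the odd one out
]
inputs = [1, 2, 3]

def agreement(a: str, b: str) -> int:
    return sum(run(a, x) == run(b, x) for x in inputs)

best = max(candidates,
           key=lambda c: sum(agreement(c, o) for o in candidates if o is not c))
print(best)   # selects one of the two semantically agreeing programs
```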
arXiv Detail & Related papers (2022-04-25T06:06:08Z) - Language Models are not Models of Language [0.0]
Transfer learning has enabled large deep learning neural networks trained on the language modeling task to vastly improve performance across downstream tasks.
We argue that the term language model is misleading because deep learning models are not theoretical models of language.
arXiv Detail & Related papers (2021-12-13T22:39:46Z) - Using Document Similarity Methods to create Parallel Datasets for Code Translation [60.36392618065203]
Translating source code from one programming language to another is a critical, time-consuming task.
We propose to use document similarity methods to create noisy parallel datasets of code.
We show that these models perform comparably to models trained on ground truth for reasonable levels of noise.
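The sketch below illustrates the mining step, with token-set Jaccard as an assumed stand-in for the paper's document similarity methods; the resulting pairs are deliberately noisy.

```python
def tokens(src: str) -> set[str]:
    """Crude lexical tokens shared across programming languages."""
    return set(src.replace("(", " ").replace(")", " ").split())

def best_match(py_file: str, java_files: list[str]) -> str:
    """Pair a Python file with the most lexically similar Java file."""
    t = tokens(py_file)
    return max(java_files, key=lambda j: len(t & tokens(j)) /
                                         (len(t | tokens(j)) or 1))

# Aligning whole files this way yields noisy parallel pairs that, per the
# summary above, can still train translation models comparably well.
```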
arXiv Detail & Related papers (2021-10-11T17:07:58Z) - Natural Language-guided Programming [1.3955252961896318]
We put forward a vision based on a new breed of developer tools that have the potential to largely automate this process.
The key idea is to adapt code autocompletion tools so that they take into account not only the developer's already-written code but also the intent of the task the developer is trying to achieve next.
We call this practice of enriching code with natural language intent to facilitate its completion "natural language-guided programming".
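The interaction pattern might look like the sketch below, where `complete` is an assumed stand-in for such a tool's interface rather than any real product's API.

```python
def complete(code_so_far: str, intent: str) -> str:
    """Placeholder: a real tool would query a code model with both inputs."""
    raise NotImplementedError

buffer = "import csv\n"
intent = "# read prices.csv and print the average of the 'price' column"
# suggestion = complete(buffer, intent)   # model drafts the implementation
```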
arXiv Detail & Related papers (2021-08-11T13:06:33Z)