From Natural Language to Simulations: Applying GPT-3 Codex to Automate
Simulation Modeling of Logistics Systems
- URL: http://arxiv.org/abs/2202.12107v3
- Date: Thu, 30 Mar 2023 21:00:17 GMT
- Title: From Natural Language to Simulations: Applying GPT-3 Codex to Automate
Simulation Modeling of Logistics Systems
- Authors: Ilya Jackson and Maria Jesus Saenz
- Abstract summary: This work is the first attempt to apply Natural Language Processing to automate the development of simulation models for systems vital to logistics.
We demonstrated that a framework built on top of fine-tuned GPT-3 Codex, a Transformer-based language model, could produce functionally valid simulations of queuing and inventory-control systems from a verbal description.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Our work is the first attempt to apply Natural Language Processing to
automate the development of simulation models for systems vital to
logistics. We demonstrated that a framework built on top of fine-tuned
GPT-3 Codex, a Transformer-based language model, could produce functionally
valid simulations of queuing and inventory-control systems from a verbal
description. In the conducted experiments, GPT-3 Codex demonstrated convincing
expertise in Python as well as an understanding of the domain-specific
vocabulary. As a result, the language model could produce simulations of a
single-product inventory-control system and a single-server queuing system given
the domain-specific context, a detailed description of the process, and a list
of variables with their corresponding values. These results, along with the
rapid improvement of language models, open the door to a significant
simplification of the simulation-model development workflow, which will allow
experts to focus on high-level consideration of the problem and holistic
thinking.
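The paper does not reproduce the generated code, but the two target systems it names can be illustrated with a minimal sketch of the kind of Python simulation such a framework would produce. The function names, parameters, and simplifying assumptions below (zero replenishment lead time, uniform demand, exponential arrival and service times) are hypothetical illustrations, not taken from the paper:

```python
import random

def simulate_mm1_queue(arrival_rate, service_rate, n_customers, seed=0):
    """Single-server queue: average customer wait via the Lindley recursion."""
    rng = random.Random(seed)
    wait, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        interarrival = rng.expovariate(arrival_rate)
        service = rng.expovariate(service_rate)
        # Next customer's wait = previous wait + service - interarrival, floored at 0.
        wait = max(0.0, wait + service - interarrival)
        total_wait += wait
    return total_wait / n_customers

def simulate_inventory(s, S, demand_mean, n_periods, seed=0):
    """Single-product (s, S) policy: order up to S whenever stock falls below s.
    Returns the fill rate (fraction of demand served from stock)."""
    rng = random.Random(seed)
    stock = S
    filled = demanded = 0
    for _ in range(n_periods):
        # Periodic review: replenish before demand arrives (zero lead time assumed).
        if stock < s:
            stock = S
        demand = rng.randint(0, 2 * demand_mean)  # simple uniform demand
        demanded += demand
        sold = min(stock, demand)
        filled += sold
        stock -= sold
    return filled / demanded if demanded else 1.0
```

For example, `simulate_mm1_queue(0.9, 1.0, 10_000)` approximates the average wait of a heavily loaded queue, and `simulate_inventory(20, 100, 10, 500)` estimates the fill rate of an (s, S) policy; the paper's point is that a verbal description plus a variable list was enough for the model to produce simulations of this shape.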
Related papers
- Generating Driving Simulations via Conversation [20.757088470174452]
We design a natural language interface to assist a non-coding domain expert in synthesising the desired scenarios and vehicle behaviours.
We show that using it to convert utterances to the symbolic program is feasible, despite the very small training dataset.
Human experiments show that dialogue is critical to successful simulation generation, yielding a 4.5 times higher success rate than generation without extended conversation.
arXiv Detail & Related papers (2024-10-13T13:07:31Z)
- FactorSim: Generative Simulation via Factorized Representation [14.849320460718591]
We introduce FACTORSIM that generates full simulations in code from language input that can be used to train agents.
For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code's accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings.
We show that FACTORSIM outperforms existing methods in generating simulations regarding prompt alignment (e.g., accuracy), zero-shot transfer abilities, and human evaluation.
arXiv Detail & Related papers (2024-09-26T09:00:30Z)
- Multi-Faceted Evaluation of Modeling Languages for Augmented Reality Applications -- The Case of ARWFML [0.0]
The Augmented Reality Modeling Language (ARWFML) enables the model-based creation of augmented reality scenarios without programming knowledge.
This paper presents two further design iterations for refining the language based on multi-faceted evaluations.
arXiv Detail & Related papers (2024-08-26T09:34:36Z)
- LangSuitE: Planning, Controlling and Interacting with Large Language Models in Embodied Text Environments [70.91258869156353]
We introduce LangSuitE, a versatile and simulation-free testbed featuring 6 representative embodied tasks in textual embodied worlds.
Compared with previous LLM-based testbeds, LangSuitE offers adaptability to diverse environments without multiple simulation engines.
We devise a novel chain-of-thought (CoT) schema, EmMem, which summarizes embodied states with respect to history information.
arXiv Detail & Related papers (2024-06-24T03:36:29Z)
- On Languaging a Simulation Engine [6.17566001699186]
Lang2Sim is a language-to-simulation framework that enables interactive navigation while languaging a simulation engine.
This work establishes the language model as an intelligent platform to unlock the era of languaging a simulation engine.
arXiv Detail & Related papers (2024-02-26T11:01:54Z)
- Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models [73.40350756742231]
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning.
Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored.
arXiv Detail & Related papers (2024-02-12T18:21:14Z)
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study ICL through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z)
- On Conditional and Compositional Language Model Differentiable Prompting [75.76546041094436]
Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
We propose a new model, Prompt Production System (PRopS), which learns to transform task instructions or input metadata, into continuous prompts.
arXiv Detail & Related papers (2023-07-04T02:47:42Z)
- GPT-Based Models Meet Simulation: How to Efficiently Use Large-Scale Pre-Trained Language Models Across Simulation Tasks [0.0]
This paper is the first examination regarding the use of large-scale pre-trained language models for scientific simulations.
The first task is devoted to explaining the structure of a conceptual model to promote the engagement of participants.
The second task focuses on summarizing simulation outputs, so that model users can identify a preferred scenario.
The third task seeks to broaden accessibility to simulation platforms by conveying the insights of simulation visualizations via text.
arXiv Detail & Related papers (2023-06-21T15:42:36Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- Exploring Software Naturalness through Neural Language Models [56.1315223210742]
The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing.
We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
arXiv Detail & Related papers (2020-06-22T21:56:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences arising from its use.