On Languaging a Simulation Engine
- URL: http://arxiv.org/abs/2402.16482v1
- Date: Mon, 26 Feb 2024 11:01:54 GMT
- Title: On Languaging a Simulation Engine
- Authors: Han Liu, Liantang Li
- Abstract summary: Lang2Sim is a language-to-simulation framework that enables interactive navigation for languaging a simulation engine.
This work establishes the language model as an intelligent platform to unlock the era of languaging a simulation engine.
- Score: 6.17566001699186
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Language model intelligence is revolutionizing the way we program materials
simulations. However, the diversity of simulation scenarios makes it
challenging to precisely transform human language into a tailored simulator.
Here, using three functionalized types of language model, we propose a
language-to-simulation (Lang2Sim) framework that enables interactive navigation
for languaging a simulation engine, taking water sorption in porous matrices as
a scenario instance. Unlike line-by-line coding of a target simulator, the
language models interpret each simulator as an assembly of an invariant tool
function and its variant input-output pairs. Lang2Sim enables the precise
transformation of a textual description by functionalizing and sequentializing
language models that, respectively, rationalize the tool categorization,
customize its input-output combinations, and distill the simulator input into
an executable format. Importantly, depending on its functionalized type, each
language model features a distinct processing of chat history to best balance
its memory limit against information completeness, thus applying the model's
intelligence to the unstructured nature of human requests. Overall, this work
establishes the language model as an intelligent platform to unlock the era of
languaging a simulation engine.
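The abstract outlines a three-stage pipeline, which the sketch below mocks in Python as a rough illustration; all function names, prompts, and the tool library are hypothetical stand-ins, since the paper does not publish its implementation here.

```python
# Minimal sketch of the Lang2Sim control flow described in the abstract.
# All names (llm, TOOL_LIBRARY, lang2sim) are hypothetical; any chat-model
# client could be plugged in.

def llm(system_prompt: str, messages: list[dict]) -> str:
    """Placeholder for a chat-completion call to any language model."""
    raise NotImplementedError("plug in your LLM client here")

TOOL_LIBRARY = {
    "water_sorption": "run_gcmc.py",   # invariant tool function (example)
    "diffusion": "run_md.py",
}

def lang2sim(user_request: str) -> str:
    history: list[dict] = [{"role": "user", "content": user_request}]

    # Stage 1: rationalize the tool categorization (full history kept).
    tool = llm("Pick one tool key from: " + ", ".join(TOOL_LIBRARY), history)

    # Stage 2: customize the tool's variant input-output combinations.
    # This stage might keep only a truncated history to respect memory limits,
    # mirroring the per-model chat-history handling the abstract describes.
    summary = history[-1]["content"][:500]
    params = llm(f"For tool '{tool}', extract input-output settings as JSON.",
                 [{"role": "user", "content": summary}])

    # Stage 3: distill the parameters into an executable simulator input.
    return llm(f"Render these settings as an input file for {TOOL_LIBRARY[tool]}:",
               [{"role": "user", "content": params}])
```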
Related papers
- Generating Driving Simulations via Conversation [20.757088470174452]
We design a natural language interface to assist a non-coding domain expert in synthesising the desired scenarios and vehicle behaviours.
We show that converting utterances to symbolic programs with it is feasible, despite the very small training dataset.
Human experiments show that dialogue is critical to successful simulation generation, yielding a 4.5 times higher success rate than generation without extended conversation.
arXiv Detail & Related papers (2024-10-13T13:07:31Z) - FactorSim: Generative Simulation via Factorized Representation [14.849320460718591]
- FactorSim: Generative Simulation via Factorized Representation [14.849320460718591]
We introduce FACTORSIM, which generates full simulations in code from language input that can be used to train agents.
For evaluation, we introduce a generative simulation benchmark that assesses the generated simulation code's accuracy and effectiveness in facilitating zero-shot transfers in reinforcement learning settings.
We show that FACTORSIM outperforms existing methods in simulation generation with respect to prompt alignment (e.g., accuracy), zero-shot transfer abilities, and human evaluation.
arXiv Detail & Related papers (2024-09-26T09:00:30Z) - ChatSUMO: Large Language Model for Automating Traffic Scenario Generation in Simulation of Urban MObility [5.111204055180423]
- ChatSUMO: Large Language Model for Automating Traffic Scenario Generation in Simulation of Urban MObility [5.111204055180423]
Large Language Models (LLMs) are capable of handling multi-modal inputs and outputs such as text, voice, images, and video.
We present ChatSUMO, an LLM-based agent that integrates language processing skills to generate abstract and real-world simulation scenarios.
For simulation generation, we created a real-world simulation for the city of Albany with an accuracy of 96%.
arXiv Detail & Related papers (2024-08-29T03:59:11Z) - Language Evolution with Deep Learning [49.879239655532324]
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z) - In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study in-context learning (ICL) through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z) - Modeling Target-Side Morphology in Neural Machine Translation: A
- Modeling Target-Side Morphology in Neural Machine Translation: A Comparison of Strategies [72.56158036639707]
Morphologically rich languages pose difficulties to machine translation.
A large number of differently inflected surface forms entails a larger vocabulary.
Some inflected forms of infrequent terms typically do not appear in the training corpus.
Linguistic agreement requires the system to correctly match the grammatical categories between inflected word forms in the output sentence.
arXiv Detail & Related papers (2022-03-25T10:13:20Z) - A Conversational Paradigm for Program Synthesis [110.94409515865867]
- A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
arXiv Detail & Related papers (2022-03-25T06:55:15Z) - From Natural Language to Simulations: Applying GPT-3 Codex to Automate
- From Natural Language to Simulations: Applying GPT-3 Codex to Automate Simulation Modeling of Logistics Systems [0.0]
This work is the first attempt to apply Natural Language Processing to automate the development of simulation models of systems vitally important for logistics.
We demonstrated that the framework built on top of the fine-tuned GPT-3 Codex, a Transformer-based language model, could produce functionally valid simulations of queuing and inventory control systems given a verbal description.
arXiv Detail & Related papers (2022-02-24T14:01:50Z) - Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z) - Language Model-Based Paired Variational Autoencoders for Robotic Language Learning [18.851256771007748]
- Language Model-Based Paired Variational Autoencoders for Robotic Language Learning [18.851256771007748]
Similar to human infants, artificial agents can learn language while interacting with their environment.
We present a neural model that bidirectionally binds robot actions and their language descriptions in a simple object manipulation scenario.
Next, we introduce PVAE-BERT, which equips the model with a pretrained large-scale language model.
arXiv Detail & Related papers (2022-01-17T10:05:26Z) - SML: a new Semantic Embedding Alignment Transformer for efficient
- SML: a new Semantic Embedding Alignment Transformer for efficient cross-lingual Natural Language Inference [71.57324258813674]
The ability of Transformers to perform a variety of tasks such as question answering, Natural Language Inference (NLI), and summarization with precision has made them one of the best paradigms for addressing such tasks at present.
NLI is one of the best scenarios for testing these architectures, due to the knowledge required to understand complex sentences and to establish a relation between a hypothesis and a premise.
In this paper, we propose a new architecture, the Siamese Multilingual Transformer (SML), to efficiently align multilingual embeddings for Natural Language Inference.
arXiv Detail & Related papers (2021-03-17T13:23:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.