GPT-Based Models Meet Simulation: How to Efficiently Use Large-Scale
Pre-Trained Language Models Across Simulation Tasks
- URL: http://arxiv.org/abs/2306.13679v1
- Date: Wed, 21 Jun 2023 15:42:36 GMT
- Title: GPT-Based Models Meet Simulation: How to Efficiently Use Large-Scale
Pre-Trained Language Models Across Simulation Tasks
- Authors: Philippe J. Giabbanelli
- Abstract summary: This paper is the first examination of the use of large-scale pre-trained language models for scientific simulations.
The first task is devoted to explaining the structure of a conceptual model to promote the engagement of participants.
The second task focuses on summarizing simulation outputs, so that model users can identify a preferred scenario.
The third task seeks to broaden accessibility to simulation platforms by conveying the insights of simulation visualizations via text.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The disruptive technology provided by large-scale pre-trained language models
(LLMs) such as ChatGPT or GPT-4 has received significant attention in several
application domains, often with an emphasis on high-level opportunities and
concerns. This paper is the first examination of the use of LLMs for
scientific simulations. We focus on four modeling and simulation tasks, each
time assessing the expected benefits and limitations of LLMs while providing
practical guidance for modelers regarding the steps involved. The first task is
devoted to explaining the structure of a conceptual model to promote the
engagement of participants in the modeling process. The second task focuses on
summarizing simulation outputs, so that model users can identify a preferred
scenario. The third task seeks to broaden accessibility to simulation platforms
by conveying the insights of simulation visualizations via text. Finally, the
last task evokes the possibility of explaining simulation errors and providing
guidance to resolve them.
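As a concrete illustration of the second task (summarizing simulation outputs so a model user can identify a preferred scenario), the sketch below shows one way scenario results might be packaged into a prompt for an LLM. The function name, scenario names, and metric fields are hypothetical; the paper does not prescribe a specific API.

```python
def build_summary_prompt(scenarios):
    """Format per-scenario simulation metrics into a prompt asking an
    LLM to compare scenarios and recommend one.
    (Hypothetical helper for illustration only.)"""
    lines = [
        "You are assisting a simulation model user.",
        "Summarize the scenarios below and recommend one:",
    ]
    for name, metrics in scenarios.items():
        stats = ", ".join(f"{k}={v}" for k, v in metrics.items())
        lines.append(f"- Scenario '{name}': {stats}")
    return "\n".join(lines)

# Toy outputs from two queueing-model scenarios (made-up numbers).
prompt = build_summary_prompt({
    "baseline": {"mean_wait_min": 12.4, "throughput": 310},
    "extra_server": {"mean_wait_min": 7.1, "throughput": 342},
})
```

The resulting string would then be sent to an LLM of choice; keeping prompt construction separate from the API call makes the step easy to audit and test.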
Related papers
- GenSim: A General Social Simulation Platform with Large Language Model based Agents [111.00666003559324]
We propose a novel large language model (LLM)-based simulation platform called GenSim.
Our platform supports one hundred thousand agents to better simulate large-scale populations in real-world contexts.
To our knowledge, GenSim represents an initial step toward a general, large-scale, and correctable social simulation platform.
arXiv Detail & Related papers (2024-10-06T05:02:23Z) - Can LLMs Reliably Simulate Human Learner Actions? A Simulation Authoring Framework for Open-Ended Learning Environments [1.4999444543328293]
Simulating learner actions helps stress-test open-ended interactive learning environments and prototype new adaptations before deployment.
We propose Hyp-Mix, a simulation authoring framework that allows experts to develop and evaluate simulations by combining testable hypotheses about learner behavior.
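One way to read "combining testable hypotheses about learner behavior" is as a weighted mixture over per-hypothesis action distributions. The sketch below is a minimal illustration under that assumption; it is not the Hyp-Mix implementation, and the hypothesis names are invented.

```python
def mix_hypotheses(hypotheses, weights):
    """Combine per-hypothesis action distributions into one simulated
    learner policy via a normalized weighted mixture
    (illustrative assumption, not the paper's method)."""
    total = sum(weights)
    mixed = {}
    for dist, w in zip(hypotheses, weights):
        for action, p in dist.items():
            mixed[action] = mixed.get(action, 0.0) + (w / total) * p
    return mixed

# Two toy hypotheses about a learner's next action.
guessing = {"guess": 0.7, "read_hint": 0.3}
deliberate = {"guess": 0.2, "read_hint": 0.8}
policy = mix_hypotheses([guessing, deliberate], weights=[0.5, 0.5])
```

An expert could then adjust the weights to test how sensitive the simulated learner is to each behavioral hypothesis.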
arXiv Detail & Related papers (2024-10-03T00:25:40Z) - A Practitioner's Guide to Continual Multimodal Pretraining [83.63894495064855]
Multimodal foundation models serve numerous applications at the intersection of vision and language.
To keep models updated, research into continual pretraining mainly explores scenarios with either infrequent, indiscriminate updates on large-scale new data, or frequent, sample-level updates.
We introduce FoMo-in-Flux, a continual multimodal pretraining benchmark with realistic compute constraints and practical deployment requirements.
arXiv Detail & Related papers (2024-08-26T17:59:01Z) - LLM experiments with simulation: Large Language Model Multi-Agent System for Simulation Model Parametrization in Digital Twins [4.773175285216063]
This paper presents a novel framework that applies large language models (LLMs) to automate the parametrization of simulation models in digital twins.
The proposed approach enhances the usability of simulation models by infusing them with knowledge from LLMs.
The system has the potential to increase user-friendliness and reduce the cognitive load on human users.
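A plausible shape for such a framework is that the LLM proposes parameter values as JSON, and a validation layer clamps them to known bounds before they reach the simulation model. The parameter names and bounds below are assumptions for illustration, not taken from the paper.

```python
import json

# Hypothetical allowed ranges for two simulation parameters.
BOUNDS = {"arrival_rate": (0.1, 5.0), "num_servers": (1, 10)}

def parse_llm_parameters(llm_reply: str) -> dict:
    """Parse an LLM's JSON reply and clamp each parameter to its
    allowed range before passing it to the simulation model."""
    raw = json.loads(llm_reply)
    params = {}
    for name, (lo, hi) in BOUNDS.items():
        value = raw.get(name, lo)  # fall back to the lower bound
        params[name] = min(max(value, lo), hi)
    return params

# Example LLM reply containing one out-of-range value.
reply = '{"arrival_rate": 7.5, "num_servers": 4}'
params = parse_llm_parameters(reply)
```

Validating LLM output against explicit bounds is one way such a system could reduce the cognitive load on users without trusting the model blindly.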
arXiv Detail & Related papers (2024-05-28T11:59:40Z) - Enhancing Robotic Manipulation with AI Feedback from Multimodal Large Language Models [41.38520841504846]
Large language models (LLMs) can provide automated preference feedback solely from image inputs to guide decision-making.
In this study, we train a multimodal LLM, termed CriticGPT, capable of understanding trajectory videos in robot manipulation tasks.
Experimental evaluation of the algorithm's preference accuracy demonstrates its effective generalization ability to new tasks.
Performance on Meta-World tasks reveals that CriticGPT's reward model efficiently guides policy learning, surpassing rewards based on state-of-the-art pre-trained representation models.
arXiv Detail & Related papers (2024-02-22T03:14:03Z) - Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models [73.40350756742231]
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning.
Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored.
arXiv Detail & Related papers (2024-02-12T18:21:14Z) - Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - Can LLMs Fix Issues with Reasoning Models? Towards More Likely Models for AI Planning [26.239075588286127]
This is the first work to look at the application of large language models (LLMs) for the purpose of model space edits in automated planning tasks.
We empirically demonstrate how the performance of an LLM contrasts with that of combinatorial search (CS).
Our experiments show promising results suggesting further forays into the exciting world of model space reasoning for planning tasks in the future.
arXiv Detail & Related papers (2023-11-22T22:27:47Z) - GenSim: Generating Robotic Simulation Tasks via Large Language Models [34.79613485106202]
GenSim aims to automatically generate rich simulation environments and expert demonstrations.
We use GPT4 to expand the existing benchmark by ten times to over 100 tasks.
With minimal sim-to-real adaptation, multitask policies pretrained on GPT4-generated simulation tasks exhibit stronger transfer to unseen long-horizon tasks in the real world.
arXiv Detail & Related papers (2023-10-02T17:23:48Z) - Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning [97.28779163988833]
Multiple pre-training objectives compensate for the limited understanding capability of single-objective language modeling.
We propose MOMETAS, a novel adaptive sampler based on meta-learning, which learns the latent sampling pattern over arbitrary pre-training objectives.
arXiv Detail & Related papers (2022-10-19T04:38:26Z) - Learning Predictive Representations for Deformable Objects Using Contrastive Estimation [83.16948429592621]
We propose a new learning framework that jointly optimizes both the visual representation model and the dynamics model.
We show substantial improvements over standard model-based learning techniques across our rope and cloth manipulation suite.
arXiv Detail & Related papers (2020-03-11T17:55:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.