Automated Unity Game Template Generation from GDDs via NLP and Multi-Modal LLMs
- URL: http://arxiv.org/abs/2509.08847v1
- Date: Sun, 07 Sep 2025 21:53:37 GMT
- Title: Automated Unity Game Template Generation from GDDs via NLP and Multi-Modal LLMs
- Authors: Amna Hassan
- Abstract summary: This paper presents a novel framework for automated game template generation using Natural Language Processing (NLP) and multi-modal Large Language Models (LLMs). We introduce an end-to-end system that parses Game Design Documents (GDDs) and extracts structured game specifications. The system then synthesizes Unity-compatible C# code that implements the core mechanics, systems, and architecture defined in the design documentation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a novel framework for automated game template generation by transforming Game Design Documents (GDDs) into functional Unity game prototypes using Natural Language Processing (NLP) and multi-modal Large Language Models (LLMs). We introduce an end-to-end system that parses GDDs, extracts structured game specifications, and synthesizes Unity-compatible C# code that implements the core mechanics, systems, and architecture defined in the design documentation. Our approach combines a fine-tuned LLaMA-3 model specialized for Unity code generation with a custom Unity integration package that streamlines the implementation process. Evaluation results demonstrate significant improvements over baseline models, with our fine-tuned model achieving superior performance (4.8/5.0 average score) compared to state-of-the-art LLMs across compilation success, GDD adherence, best practices adoption, and code modularity metrics. The generated templates demonstrate high adherence to GDD specifications across multiple game genres. Our system effectively addresses critical gaps in AI-assisted game development, positioning LLMs as valuable tools in streamlining the transition from game design to implementation.
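The GDD-to-template pipeline the abstract describes (parse document, extract a structured specification, emit Unity C# scaffolding) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the real system uses a fine-tuned LLaMA-3 model for extraction and synthesis, whereas the regex extraction, spec fields, and `GameController` skeleton here are invented stand-ins.

```python
import json
import re

def extract_spec(gdd_text):
    """Toy spec extraction: pull genre and mechanics from labeled lines of a
    GDD. A real system would use an LLM; this regex stand-in only illustrates
    the structured intermediate representation."""
    genre = re.search(r"Genre:\s*(.+)", gdd_text)
    mechanics = re.findall(r"- Mechanic:\s*(.+)", gdd_text)
    return {
        "genre": genre.group(1).strip() if genre else "unknown",
        "mechanics": [m.strip() for m in mechanics],
    }

def render_unity_template(spec):
    """Render a minimal Unity-compatible C# MonoBehaviour skeleton with one
    stub method per extracted mechanic."""
    methods = "\n".join(
        f"    void Handle{m.title().replace(' ', '')}() {{ /* TODO */ }}"
        for m in spec["mechanics"]
    )
    return (
        "using UnityEngine;\n\n"
        f"// Genre: {spec['genre']}\n"
        "public class GameController : MonoBehaviour\n"
        "{\n"
        f"{methods}\n"
        "}\n"
    )

gdd = """Genre: platformer
- Mechanic: double jump
- Mechanic: wall slide"""
spec = extract_spec(gdd)
print(json.dumps(spec))
print(render_unity_template(spec))
```

The key design point the abstract implies is the intermediate spec: code generation is grounded in an explicit structure extracted from the GDD rather than produced from raw prose in one step.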
Related papers
- Real-Time World Crafting: Generating Structured Game Behaviors from Natural Language with Large Language Models [0.8869777013253825]
We present a novel architecture for safely integrating Large Language Models into interactive game engines. Our framework mitigates risks by using an LLM to translate commands into a constrained Domain-Specific Language. We evaluate this system in a 2D spell-crafting game prototype.
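The safety pattern this abstract describes, validating model output against a constrained DSL before it touches the engine, can be sketched as below. The command set, grammar, and game actions are all invented for illustration; the point is only that anything outside the whitelist is rejected.

```python
# Instead of letting a model emit arbitrary code, its output is parsed as a
# tiny whitelisted DSL and rejected if it does not validate. The vocabulary
# here is hypothetical.
ALLOWED_COMMANDS = {"spawn", "move", "cast"}
ALLOWED_TARGETS = {"fireball", "shield", "player", "north", "south"}

def validate_dsl(program):
    """Accept only programs made of lines of the form '<command> <target>'."""
    for line in program.strip().splitlines():
        parts = line.split()
        if len(parts) != 2:
            return False
        command, target = parts
        if command not in ALLOWED_COMMANDS or target not in ALLOWED_TARGETS:
            return False
    return True

# An LLM would translate "hurl a fireball" into the DSL; here we hard-code
# both a safe program and a hostile one.
print(validate_dsl("cast fireball\nmove north"))  # True
print(validate_dsl("import os"))                  # False
```

Because the engine only ever executes validated DSL, a prompt-injected or hallucinated response fails closed rather than running as code.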
arXiv Detail & Related papers (2025-10-19T18:09:44Z)
- V-GameGym: Visual Game Generation for Code Large Language Models [29.687615056084166]
V-GameGym is a comprehensive benchmark comprising 2,219 high-quality samples across 100 thematic clusters. We introduce a multimodal evaluation framework with an automated LLM-driven pipeline for visual code synthesis. Our analysis reveals that V-GameGym effectively bridges the gap between code generation accuracy and practical game development.
arXiv Detail & Related papers (2025-09-24T14:01:18Z)
- GVGAI-LLM: Evaluating Large Language Model Agents with Infinite Games [8.640618631999173]
We introduce GVGAI-LLM, a video game benchmark for evaluating the reasoning and problem-solving capabilities of large language models (LLMs). Built on the General Video Game AI framework, it features a diverse collection of arcade-style games designed to test a model's ability to handle tasks that differ from most existing LLM benchmarks.
arXiv Detail & Related papers (2025-08-11T22:17:07Z)
- Type-Constrained Code Generation with Language Models [51.03439021895432]
We introduce a type-constrained decoding approach that leverages type systems to guide code generation. For this purpose, we develop novel prefix automata and a search over inhabitable types, forming a sound approach to enforce well-typedness on LLM-generated code. Our approach reduces compilation errors by more than half and significantly increases functional correctness in code synthesis, translation, and repair tasks.
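The filtering loop behind constrained decoding can be illustrated with a toy example. The paper builds prefix automata over a full type system; the sketch below only mimics the mechanism with a regex standing in for the automaton, over an invented grammar of integer additions.

```python
# At each decoding step, candidate tokens from the model are filtered so the
# partial output remains a valid prefix of the target grammar. The grammar
# and the mocked model rankings are hypothetical.
import re

# Valid prefixes of "int literal (+ int literal)*", possibly ending mid-"+".
PREFIX_RE = re.compile(r"^\d+(\s*\+\s*\d+)*(\s*\+\s*)?$")

def is_valid_prefix(text):
    return bool(PREFIX_RE.match(text))

def constrained_decode(mock_model_steps):
    """mock_model_steps: one ranked candidate-token list per step. Greedily
    take the best-ranked token that keeps the prefix valid."""
    out = ""
    for candidates in mock_model_steps:  # candidates are ranked best-first
        for token in candidates:
            if is_valid_prefix(out + token):
                out += token
                break
    return out

# The model's top choice at step 2 ("*") and step 3 ("if") would break the
# grammar, so the decoder falls back to valid alternatives.
print(constrained_decode([["1"], ["*", "+"], ["if", "2"]]))  # 1+2
```

The same skeleton generalizes: replace the regex with an automaton that tracks typing judgments, and ill-typed continuations are pruned before they are ever emitted.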
arXiv Detail & Related papers (2025-04-12T15:03:00Z)
- Cardiverse: Harnessing LLMs for Novel Card Game Prototyping [9.029874576285085]
Card games require extensive human effort in creative ideation and gameplay evaluation. Recent advances in Large Language Models offer opportunities to automate these processes. This paper introduces a comprehensive automated card game prototyping framework.
arXiv Detail & Related papers (2025-02-10T23:47:35Z)
- Align$^2$LLaVA: Cascaded Human and Large Language Model Preference Alignment for Multi-modal Instruction Curation [56.75665429851673]
This paper introduces a novel instruction curation algorithm, derived from two unique perspectives, human and LLM preference alignment. Experiments demonstrate that we can maintain or even improve model performance by compressing synthetic multimodal instructions by up to 90%.
arXiv Detail & Related papers (2024-09-27T08:20:59Z)
- ULLME: A Unified Framework for Large Language Model Embeddings with Generation-Augmented Learning [72.90823351726374]
We introduce the Unified framework for Large Language Model Embedding (ULLME), a flexible, plug-and-play implementation that enables bidirectional attention across various LLMs.
We also propose Generation-augmented Representation Learning (GRL), a novel fine-tuning method to boost LLMs for text embedding tasks.
To showcase our framework's flexibility and effectiveness, we release three pre-trained models from ULLME with different backbone architectures.
arXiv Detail & Related papers (2024-08-06T18:53:54Z)
- Design2Code: Benchmarking Multimodal Code Generation for Automated Front-End Engineering [74.99736967448423]
We construct Design2Code - the first real-world benchmark for this task. We manually curate 484 diverse real-world webpages as test cases and develop a set of automatic evaluation metrics. Our fine-grained break-down metrics indicate that models mostly lag in recalling visual elements from the input webpages and generating correct layout designs.
arXiv Detail & Related papers (2024-03-05T17:56:27Z)
- LLM-Assisted Code Cleaning For Training Accurate Code Generators [53.087019724256606]
We investigate data quality for code and find that making the code more structured and readable leads to improved code generation performance of the system.
We build a novel data-cleaning pipeline that uses these principles to transform existing programs.
We evaluate our approach on two challenging algorithmic code generation benchmarks and find that fine-tuning CodeLLaMa-7B improves the performance by up to 30% compared to fine-tuning on the original dataset.
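One cleaning principle from this line of work, making training code more readable by replacing opaque identifiers, can be shown mechanically. Real pipelines use an LLM to rename and restructure programs; the hand-written rename map below is only an illustration of the transformation.

```python
# Toy data-cleaning pass: rewrite whole-word identifiers in a source string
# according to a rename map. The map and the sample function are invented.
import re

RENAMES = {"a": "total", "b": "count", "f": "running_mean"}

def rename_identifiers(source, renames):
    """Replace whole-word identifiers per the rename map, leaving keywords
    and unmapped names (def, sum, len, xs) untouched."""
    def sub(match):
        return renames.get(match.group(0), match.group(0))
    return re.sub(r"\b[A-Za-z_]\w*\b", sub, source)

raw = "def f(xs):\n    a = sum(xs)\n    b = len(xs)\n    return a / b"
print(rename_identifiers(raw, RENAMES))
```

Fine-tuning on the cleaned variant rather than the raw one is the source of the reported gains: the program's behavior is unchanged, but its intent is now legible to the model.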
arXiv Detail & Related papers (2023-11-25T02:45:50Z)
- 3D-GPT: Procedural 3D Modeling with Large Language Models [47.72968643115063]
We introduce 3D-GPT, a framework utilizing large language models(LLMs) for instruction-driven 3D modeling.
3D-GPT positions LLMs as proficient problem solvers, dissecting the procedural 3D modeling tasks into accessible segments and appointing the apt agent for each task.
Our empirical investigations confirm that 3D-GPT not only interprets and executes instructions, delivering reliable results but also collaborates effectively with human designers.
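The dissect-and-dispatch idea in this abstract, breaking an instruction into subtasks and routing each to a dedicated agent, can be sketched minimally. The agent names, their outputs, and the keyword-based decomposition are all invented; 3D-GPT's actual decomposition is LLM-driven.

```python
# Minimal dispatcher sketch: decompose an instruction into known subtasks,
# then run the agent registered for each one. All names are hypothetical.
AGENTS = {
    "terrain": lambda: "generated heightmap",
    "vegetation": lambda: "scattered trees",
    "lighting": lambda: "placed sun light",
}

def decompose(instruction):
    """Keyword matching as a stand-in for LLM-driven task decomposition."""
    return [task for task in AGENTS if task in instruction.lower()]

def run_pipeline(instruction):
    return {task: AGENTS[task]() for task in decompose(instruction)}

result = run_pipeline("Build a forest scene: rough terrain with dense vegetation")
print(result)
```

The dispatcher pattern is what lets each agent stay small and testable while the top-level instruction remains free-form.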
arXiv Detail & Related papers (2023-10-19T17:41:48Z)
- GameGPT: Multi-agent Collaborative Framework for Game Development [10.8750049774263]
Large language model (LLM) based agents have demonstrated their capacity to automate and expedite software development processes. We propose a multi-agent collaborative framework, dubbed GameGPT, to automate game development.
arXiv Detail & Related papers (2023-10-12T06:31:43Z)
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM [72.1638273937025]
We present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence.
Our library supports a collection of pretrained Code LLM models and popular code benchmarks.
We hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering.
arXiv Detail & Related papers (2023-05-31T05:24:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.