Moonshine: Distilling Game Content Generators into Steerable Generative Models
- URL: http://arxiv.org/abs/2408.09594v1
- Date: Sun, 18 Aug 2024 20:59:59 GMT
- Title: Moonshine: Distilling Game Content Generators into Steerable Generative Models
- Authors: Yuhe Nie, Michael Middleton, Tim Merino, Nidhushan Kanagaraja, Ashutosh Kumar, Zhan Zhuang, Julian Togelius
- Abstract summary: Procedural Content Generation via Machine Learning (PCGML) has enhanced game content creation, yet challenges in controllability and limited training data persist.
This study addresses these issues by distilling a constructive PCG algorithm into a controllable PCGML model.
- Score: 2.9690652756955305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Procedural Content Generation via Machine Learning (PCGML) has enhanced game content creation, yet challenges in controllability and limited training data persist. This study addresses these issues by distilling a constructive PCG algorithm into a controllable PCGML model. We first generate a large amount of content with a constructive algorithm and label it using a Large Language Model (LLM). We use these synthetic labels to condition two PCGML models for content-specific generation, a diffusion model and the five-dollar model. This neural network distillation process ensures that the generation aligns with the original algorithm while introducing controllability through plain text. We define this text-conditioned PCGML as a Text-to-game-Map (T2M) task, offering an alternative to prevalent text-to-image multi-modal tasks. We compare our distilled models with the baseline constructive algorithm. Our analysis of the variety, accuracy, and quality of our generation demonstrates the efficacy of distilling constructive methods into controllable text-conditioned PCGML models.
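As a minimal sketch of the pipeline above, the stub below generates maps with a stand-in constructive algorithm, labels them with a stand-in for the LLM step, and collects the (text, map) pairs that would condition a PCGML model. All names and the toy tile format are hypothetical; the paper's actual generator, prompts, diffusion model, and five-dollar model are more involved.

```python
import random

def generate_level(seed: int, width: int = 16, height: int = 16) -> list[str]:
    """Stand-in constructive generator: emits a random wall/floor tile map."""
    rng = random.Random(seed)
    return ["".join(rng.choice("#.") for _ in range(width)) for _ in range(height)]

def label_with_llm(level: list[str]) -> str:
    """Placeholder for the LLM labeling step; a real pipeline would prompt
    an LLM to describe the map in free-form text."""
    density = sum(row.count("#") for row in level) / (len(level) * len(level[0]))
    return "a dense maze-like map" if density > 0.5 else "an open map with sparse walls"

# Step 1: build a distillation dataset of (text label, map) pairs
# from the constructive algorithm.
dataset = [(label_with_llm(lvl), lvl) for lvl in (generate_level(s) for s in range(1000))]

# Step 2 (not shown): train a text-conditioned PCGML model on `dataset`
# so that plain-text prompts steer map generation -- the T2M task.
print(len(dataset), dataset[0][0])
```

The property being distilled is fidelity to the constructive algorithm's output distribution, with plain-text controllability layered on top.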
Related papers
- Image Regeneration: Evaluating Text-to-Image Model via Generating Identical Image with Multimodal Large Language Models [54.052963634384945]
We introduce the Image Regeneration task to assess text-to-image models.
We use GPT4V to bridge the gap between the reference image and the text input for the T2I model.
We also present the ImageRepainter framework to enhance the quality of generated images (a stubbed sketch of the regeneration loop follows below).
arXiv Detail & Related papers (2024-11-14T13:52:43Z)
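As an aside, the regeneration loop sketched in this entry reduces to three calls; everything below is a stub standing in for the real MLLM captioner, T2I model, and similarity judgment, with invented names.

```python
def describe_image(image) -> str:
    """Stub for the MLLM captioner (GPT4V in the paper)."""
    return "a watercolor painting of a lighthouse at dusk"

def text_to_image(prompt: str) -> dict:
    """Stub for the text-to-image model under evaluation."""
    return {"prompt": prompt, "pixels": None}

def similarity(reference, regenerated) -> float:
    """Stub for the image-similarity judgment."""
    return 0.5

def regeneration_score(reference) -> float:
    prompt = describe_image(reference)         # image -> text bridge
    regenerated = text_to_image(prompt)        # text -> image
    return similarity(reference, regenerated)  # compare against the reference

print(regeneration_score({"pixels": None}))
```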
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking; these conditionals then drive a Markov chain that draws samples from the model (a toy version of this decoding loop follows below).
We adapt the T5 model for iteratively refined parallel decoding, achieving a 2-3x speedup in machine translation with minimal loss in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
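A toy version of that decoding loop, with the learned conditionals replaced by a random stub (all names invented):

```python
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
MASK = "<mask>"
rng = random.Random(0)

def predict(tokens):
    """Stub for the learned conditionals p(x_i | x_-i): proposes a
    (token, confidence) pair for every masked position."""
    return {i: (rng.choice(VOCAB), rng.random())
            for i, tok in enumerate(tokens) if tok == MASK}

def iterative_decode(length: int = 6, steps: int = 3) -> list[str]:
    tokens = [MASK] * length
    for step in range(steps):
        guesses = predict(tokens)
        # Commit only the most confident guesses this round; leave the rest
        # masked so later rounds can revise them with more context.
        ranked = sorted(guesses, key=lambda i: guesses[i][1], reverse=True)
        for i in ranked[: max(1, len(ranked) // (steps - step))]:
            tokens[i] = guesses[i][0]
    return tokens

print(iterative_decode())
```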
- Text-Guided Molecule Generation with Diffusion Language Model [23.170313481324598]
We propose Text-Guided Molecule Generation with a Diffusion Language Model (TGM-DLM).
TGM-DLM updates token embeddings within the SMILES string collectively and iteratively, using a two-phase diffusion generation process (a schematic of the idea follows below).
We demonstrate that TGM-DLM outperforms MolT5-Base, an autoregressive model, without the need for additional data resources.
arXiv Detail & Related papers (2024-02-20T14:29:02Z)
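A schematic of two-phase diffusion over token embeddings, as we read the entry: a guided phase conditioned on the text, then an unguided correction phase. The denoiser is an untrained stub and every constant is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, dim, T = 8, 16, 50
text_cond = rng.normal(size=dim)  # stand-in embedding of the text description

def denoise_step(x, t, guided):
    """Stub denoiser: a trained model would predict cleaner embeddings from
    (x, t) and, in the guided phase, the text condition."""
    x = 0.95 * x
    return x + 0.05 * text_cond if guided else x

x = rng.normal(size=(seq_len, dim))            # start from pure noise
for t in reversed(range(T)):
    x = denoise_step(x, t, guided=t > T // 2)  # phase one guided, phase two not

# Finally, each row of x would be rounded to its nearest token embedding
# to read out a SMILES string.
print(x.shape)
```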
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments (a minimal sketch follows below).
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
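The first step of the strategy admits a one-screen sketch; `call_llm` is a placeholder returning a canned string, not a real client:

```python
def triplet_to_prompt(head: str, relation: str, tail: str) -> str:
    """Build an instruction asking an LLM to expand a compact triplet."""
    return (f"Write a short, information-rich paragraph describing the fact "
            f"({head}, {relation}, {tail}).")

def call_llm(prompt: str) -> str:
    """Placeholder LLM client; returns a canned answer for illustration."""
    return "Marie Curie received the Nobel Prize in Physics in 1903 for ..."

# The context-rich segment becomes auxiliary training text for a KGC model,
# whether discriminative or generative.
context = call_llm(triplet_to_prompt("Marie Curie", "award_received", "Nobel Prize in Physics"))
print(context)
```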
- Evaluating Generative Models for Graph-to-Text Generation [0.0]
We explore the capability of generative models to generate descriptive text from graph data in a zero-shot setting.
Our results demonstrate that generative models are capable of generating fluent and coherent text.
However, our error analysis reveals that generative models still struggle with understanding the semantic relations between entities.
arXiv Detail & Related papers (2023-07-27T09:03:05Z)
- GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning [71.89623260998934]
This study investigates the feasibility of employing natural language instructions to accomplish molecule-related tasks in a zero-shot setting.
Existing molecule-text models perform poorly in this setting due to inadequate treatment of instructions and limited capacity for graphs.
We propose GIMLET, which unifies language models for both graph and text data.
arXiv Detail & Related papers (2023-05-28T18:27:59Z)
- Extrapolating Multilingual Understanding Models as Multilingual Generators [82.1355802012414]
This paper explores methods to endow multilingual understanding models with generation abilities, yielding a unified model.
We propose a Semantic-Guided Alignment-then-Denoising (SGA) approach to adapt an encoder into a multilingual generator with a small number of new parameters.
arXiv Detail & Related papers (2023-05-22T15:33:21Z)
- Stochastic Code Generation [1.7205106391379026]
Large language models pre-trained for code generation can generate high-quality short code but often struggle with generating coherent long code.
This issue is also observed in language modeling for long text generation.
In this study, we investigate whether a technique used in language modeling for long text generation can be applied to code generation to improve coherence.
arXiv Detail & Related papers (2023-04-14T00:01:05Z)
- Mix and Match: Learning-free Controllable Text Generation using Energy Language Models [33.97800741890231]
We propose Mix and Match LM, a global score-based alternative for controllable text generation.
We interpret the task of controllable generation as drawing samples from an energy-based model.
We use a Metropolis-Hastings sampling scheme to sample from this energy-based model (a toy sampler follows below).
arXiv Detail & Related papers (2022-03-24T18:52:09Z)
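The sampling recipe is easy to demonstrate end to end. Below, a toy energy function stands in for the paper's combination of expert scores (MLM fluency, attribute discriminators), and a single-site Metropolis-Hastings chain samples low-energy sequences:

```python
import math
import random

VOCAB = list("abcde")
rng = random.Random(0)

def energy(seq):
    """Toy energy (lower is better): penalize repeated adjacent characters.
    Mix and Match instead sums scores from pretrained experts."""
    return sum(seq[i] == seq[i - 1] for i in range(1, len(seq)))

def mh_sample(length: int = 10, steps: int = 500) -> str:
    seq = [rng.choice(VOCAB) for _ in range(length)]
    for _ in range(steps):
        proposal = list(seq)
        proposal[rng.randrange(length)] = rng.choice(VOCAB)  # single-site edit
        # Metropolis acceptance for a symmetric proposal:
        # accept with probability min(1, exp(E(seq) - E(proposal))).
        if rng.random() < math.exp(min(0.0, energy(seq) - energy(proposal))):
            seq = proposal
    return "".join(seq)

print(mh_sample())
```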
- Ensemble Learning For Mega Man Level Generation [2.6402344419230697]
We investigate the use of ensembles of Markov chains for procedurally generating Mega Man levels (a toy ensemble follows below).
We evaluate the ensemble on measures of playability and stylistic similarity against an existing non-ensemble Markov chain approach.
arXiv Detail & Related papers (2021-07-27T00:16:23Z)
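In miniature, the idea looks like this; the two transition tables are invented stand-ins for chains trained on real Mega Man levels, and the voting scheme is a simple vote-weighted sample:

```python
import random
from collections import Counter

rng = random.Random(42)

# Two stand-in Markov chains over level-column types, as if trained on
# different subsets of levels; real chains would be learned from game maps.
CHAINS = [
    {"ground": ["ground", "gap"], "gap": ["ground"], "platform": ["ground", "platform"]},
    {"ground": ["ground", "platform"], "gap": ["platform"], "platform": ["gap", "ground"]},
]

def ensemble_next(state: str) -> str:
    """Each chain proposes a successor column; sample in proportion to votes."""
    votes = Counter(rng.choice(chain.get(state, ["ground"])) for chain in CHAINS)
    options = list(votes)
    return rng.choices(options, weights=[votes[o] for o in options])[0]

level, state = [], "ground"
for _ in range(12):
    state = ensemble_next(state)
    level.append(state)
print(level)
```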
- Data-to-text Generation with Macro Planning [61.265321323312286]
We propose a neural model with a macro planning stage followed by a generation stage, reminiscent of traditional pipeline methods (a toy plan-then-realize sketch follows below).
Our approach outperforms competitive baselines in both automatic and human evaluation.
arXiv Detail & Related papers (2021-02-04T16:32:57Z)
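A plan-then-realize sketch of this two-stage idea, with both learned stages replaced by trivial stand-ins:

```python
records = [
    {"team": "Hawks", "points": 104},
    {"team": "Bulls", "points": 98},
]

def macro_plan(recs):
    """Stand-in planner: select and order records (here, by points)."""
    return sorted(recs, key=lambda r: -r["points"])

def realize(plan):
    """Stand-in surface realizer: one sentence per plan step."""
    return " ".join(f"The {r['team']} scored {r['points']} points." for r in plan)

print(realize(macro_plan(records)))
```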
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.