IPCGRL: Language-Instructed Reinforcement Learning for Procedural Level Generation
- URL: http://arxiv.org/abs/2503.12358v3
- Date: Tue, 25 Mar 2025 01:48:16 GMT
- Title: IPCGRL: Language-Instructed Reinforcement Learning for Procedural Level Generation
- Authors: In-Chang Baek, Sung-Hyun Kim, Seo-Young Lee, Dong-Hyeon Kim, Kyung-Joong Kim
- Abstract summary: IPCGRL is an instruction-based procedural content generation method via reinforcement learning. IPCGRL fine-tunes task-specific embedding representations to effectively compress game-level conditions.
- Score: 11.71881275085903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has highlighted the significance of natural language in enhancing the controllability of generative models. While various efforts have been made to leverage natural language for content generation, research on deep reinforcement learning (DRL) agents utilizing text-based instructions for procedural content generation remains limited. In this paper, we propose IPCGRL, an instruction-based procedural content generation method via reinforcement learning, which incorporates a sentence embedding model. IPCGRL fine-tunes task-specific embedding representations to effectively compress game-level conditions. We evaluate IPCGRL in a two-dimensional level generation task and compare its performance with a general-purpose embedding method. The results indicate that IPCGRL achieves up to a 21.4% improvement in controllability and a 17.2% improvement in generalizability for unseen instructions. Furthermore, the proposed method extends the modality of conditional input, enabling a more flexible and expressive interaction framework for procedural content generation.
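To make the conditioning mechanism concrete, here is a minimal sketch, assuming a design in which a sentence embedding of the instruction is projected into a compact condition vector and concatenated with level-map features before the policy head. The dimensions (a 384-dimensional embedding, an 8-channel tile map) and module names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    def __init__(self, embed_dim=384, map_channels=8, n_actions=5):
        super().__init__()
        # Task-specific projection that compresses the sentence embedding
        # into a compact condition vector (fine-tuned jointly with the agent).
        self.condition_proj = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU())
        # Small CNN over the tile map of the level under construction.
        self.map_encoder = nn.Sequential(
            nn.Conv2d(map_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.policy_head = nn.Linear(32 + 64, n_actions)

    def forward(self, level_map, instruction_embedding):
        cond = self.condition_proj(instruction_embedding)
        feat = self.map_encoder(level_map)
        return self.policy_head(torch.cat([feat, cond], dim=-1))

# Embeddings could come from any sentence encoder with matching width,
# e.g. a 384-dimensional sentence-transformers model.
policy = InstructionConditionedPolicy()
logits = policy(torch.randn(1, 8, 16, 16), torch.randn(1, 384))
```

Fine-tuning the condition projection jointly with the agent is one way to obtain the task-specific, compressed embedding representations the abstract describes.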
Related papers
- Context-Guided Dynamic Retrieval for Improving Generation Quality in RAG Models [2.9687381456164004]
The paper proposes a state-aware dynamic knowledge retrieval mechanism to enhance semantic understanding and knowledge scheduling efficiency.
The proposed structure is thoroughly evaluated across different large models, including GPT-4, GPT-4o, and DeepSeek.
The approach also demonstrates stronger robustness and generation consistency in tasks involving semantic ambiguity and multi-document fusion.
arXiv Detail & Related papers (2025-04-28T02:50:45Z) - Leveraging Large Language Models to Generate Course-specific Semantically Annotated Learning Objects [2.1845291030915974]
Recent progress in generative natural language models has opened up new potential in the generation of educational content. This paper explores the potential of large language models for generating computer science questions that are sufficiently annotated for automatic learner model updates.
arXiv Detail & Related papers (2024-12-05T14:24:07Z) - Reinforcement Learning with Token-level Feedback for Controllable Text Generation [16.117006822479407]
We propose a novel reinforcement learning algorithm named TOLE which formulates TOken-LEvel rewards for controllable text generation.
Experimental results show that our algorithm can achieve superior performance on both single-attribute and multi-attribute control tasks.
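As a generic illustration of token-level rewards (a REINFORCE-style sketch, not the TOLE authors' implementation), each sampled token's log-probability can be weighted by its own reward instead of one sequence-level return; the rewards below are random placeholders for, e.g., per-token credit from an attribute classifier.

```python
import torch

def token_level_pg_loss(logprobs, token_rewards):
    # logprobs, token_rewards: (batch, seq_len); each token's log-prob
    # is scaled by its own reward rather than a single sequence return.
    return -(logprobs * token_rewards).sum(dim=-1).mean()

all_logprobs = torch.log_softmax(torch.randn(2, 6, 100), dim=-1)
ids = torch.randint(0, 100, (2, 6))          # sampled token ids
chosen = all_logprobs.gather(-1, ids.unsqueeze(-1)).squeeze(-1)
rewards = torch.rand(2, 6)                   # placeholder per-token credit
loss = token_level_pg_loss(chosen, rewards)
```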
arXiv Detail & Related papers (2024-03-18T08:18:37Z) - Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
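A hedged sketch of the triplet-to-text step: the prompt wording and the `generate` callable below are hypothetical stand-ins for whatever instruction template and LLM the paper actually uses.

```python
def triplet_to_prompt(head, relation, tail):
    # Hypothetical instruction template for turning a structural
    # triplet into a context-rich passage.
    return (f"Write a short, factual paragraph describing the relation "
            f"'{relation}' between '{head}' and '{tail}'.")

def contextualize(triplets, generate):
    # `generate` stands in for any LLM call that returns a string.
    return [generate(triplet_to_prompt(h, r, t)) for h, r, t in triplets]
```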
arXiv Detail & Related papers (2024-01-28T08:56:49Z) - From Classification to Generation: Insights into Crosslingual Retrieval Augmented ICL [8.065775937617417]
We introduce a novel approach that leverages cross-lingual retrieval-augmented in-context learning (CREA-ICL).
By extracting semantically similar prompts from high-resource languages, we aim to improve the zero-shot performance of multilingual pre-trained language models (MPLMs).
Though our approach yields steady improvements in classification tasks, it faces challenges in generation tasks.
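The retrieval step can be pictured with a small sketch, assuming multilingual sentence embeddings and cosine similarity (the paper's exact retriever is not reproduced here): the k most similar high-resource exemplars are returned to serve as in-context prompts.

```python
import numpy as np

def retrieve_exemplars(query_vec, pool_vecs, pool_texts, k=3):
    # Cosine similarity between the low-resource query and each
    # high-resource candidate; return the k closest exemplar texts.
    q = query_vec / np.linalg.norm(query_vec)
    p = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    top = np.argsort(p @ q)[::-1][:k]
    return [pool_texts[i] for i in top]
```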
arXiv Detail & Related papers (2023-11-11T15:40:21Z) - Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural Language Instructions [53.21504989297547]
We propose a new method that combines a language model and reinforcement learning for the task of building objects in a Minecraft-like environment.
Our method first generates a set of consistently achievable sub-goals from the instructions and then completes associated sub-tasks with a pre-trained RL policy.
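A rough sketch of that two-stage pipeline, in which `plan_fn`, `policy`, and the `env` interface are all hypothetical stand-ins rather than the paper's actual components:

```python
def build_from_instruction(instruction, plan_fn, policy, env):
    # Stage 1: a language model decomposes the instruction into an
    # ordered list of achievable sub-goals.
    for goal in plan_fn(instruction):
        # Stage 2: a pre-trained RL policy completes each sub-task.
        obs, done = env.reset_to_goal(goal), False
        while not done:
            obs, _, done, _ = env.step(policy(obs, goal))
```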
arXiv Detail & Related papers (2022-11-01T18:30:42Z) - SDA: Improving Text Generation with Self Data Augmentation [88.24594090105899]
We propose to improve the standard maximum likelihood estimation (MLE) paradigm by incorporating a self-imitation-learning phase for automatic data augmentation.
Unlike most existing sentence-level augmentation strategies, our method is more general and could be easily adapted to any MLE-based training procedure.
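One way to picture the self-imitation phase (a sketch under assumed details, with hypothetical `model_generate` and `score_fn` callables): the model's own generations that score above a threshold are folded back into the training set before the next MLE epoch.

```python
def self_augment(train_set, model_generate, score_fn, threshold=0.8):
    # Keep only the model's higher-quality generations and train the
    # next MLE epoch on the augmented corpus.
    generated = model_generate(n_samples=len(train_set))
    return train_set + [g for g in generated if score_fn(g) >= threshold]
```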
arXiv Detail & Related papers (2021-01-02T01:15:57Z) - Unsupervised Text Generation by Learning from Search [86.51619839836331]
TGLS is a novel framework for unsupervised text generation by learning from search.
We demonstrate the effectiveness of TGLS on two real-world natural language generation tasks, paraphrase generation and text formalization.
arXiv Detail & Related papers (2020-07-09T04:34:48Z) - A Hybrid Natural Language Generation System Integrating Rules and Deep Learning Algorithms [13.288402527470591]
This paper proposes an enhanced natural language generation system combining the merits of both rule-based approaches and modern deep learning algorithms.
We also propose a novel approach, HMCU, to measure the performance of natural language processing comprehensively and precisely.
arXiv Detail & Related papers (2020-06-15T00:50:41Z) - Stylistic Dialogue Generation via Information-Guided Reinforcement Learning Strategy [65.98002918470544]
We introduce a new training strategy, known as Information-Guided Reinforcement Learning (IG-RL).
In IG-RL, a training model is encouraged to explore stylistic expressions while being constrained to maintain its content quality.
This is achieved by adopting a reinforcement learning strategy with statistical style information guidance for quality-preserving exploration.
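The constraint can be pictured as a gated reward (a hypothetical simplification, not the paper's exact formulation): stylistic gain counts only while a content-quality score stays above a floor.

```python
def quality_gated_style_reward(style_score, content_score, floor=0.5):
    # Reward stylistic exploration only when content quality holds;
    # otherwise return zero to discourage degenerate stylistic drift.
    return style_score if content_score >= floor else 0.0
```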
arXiv Detail & Related papers (2020-04-05T13:58:14Z) - Tree-Structured Policy based Progressive Reinforcement Learning for Temporally Language Grounding in Video [128.08590291947544]
Temporally language grounding in untrimmed videos is a newly-raised task in video understanding.
Inspired by human's coarse-to-fine decision-making paradigm, we formulate a novel Tree-Structured Policy based Progressive Reinforcement Learning framework.
arXiv Detail & Related papers (2020-01-18T15:08:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.