Tools for Landscape Analysis of Optimisation Problems in Procedural
Content Generation for Games
- URL: http://arxiv.org/abs/2302.08479v1
- Date: Thu, 16 Feb 2023 18:38:36 GMT
- Title: Tools for Landscape Analysis of Optimisation Problems in Procedural
Content Generation for Games
- Authors: Vanessa Volz and Boris Naujoks and Pascal Kerschke and Tea Tušar
- Abstract summary: Procedural Content Generation (PCG) refers to the (semi-)automatic generation of game content by algorithmic means.
A special class of these methods, which is commonly known as search-based PCG, treats the given task as an optimisation problem.
We will demonstrate in this paper that obtaining more information about the defined optimisation problem can substantially improve our understanding of how to approach the generation of content.
- Score: 0.6882042556551609
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The term Procedural Content Generation (PCG) refers to the (semi-)automatic
generation of game content by algorithmic means, and its methods are becoming
increasingly popular in game-oriented research and industry. A special class of
these methods, which is commonly known as search-based PCG, treats the given
task as an optimisation problem. Such problems are predominantly tackled by
evolutionary algorithms.
We will demonstrate in this paper that obtaining more information about the
defined optimisation problem can substantially improve our understanding of how
to approach the generation of content. To do so, we present and discuss three
efficient analysis tools, namely diagonal walks, the estimation of high-level
properties, and problem similarity measures. We discuss the purpose of each of
the considered methods in the context of PCG and provide guidelines for
interpreting the results obtained. In this way, we aim to provide methods for
comparing PCG approaches and, ultimately, to increase the quality and
practicality of generated content in industry.
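As a concrete illustration of the first of these tools, below is a minimal
sketch of a diagonal walk, assuming a box-constrained, real-valued content
representation; the evenly spaced sampling scheme and all names are
illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def diagonal_walk(objective, lower, upper, n_steps=100):
    """Evaluate `objective` at evenly spaced points along the main diagonal
    of a box-constrained search space, from the lower-bound corner to the
    upper-bound corner. The resulting fitness sequence hints at landscape
    properties such as ruggedness or neutrality. (Illustrative sketch only.)"""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    ts = np.linspace(0.0, 1.0, n_steps)             # relative walk positions
    points = lower + ts[:, None] * (upper - lower)  # points on the diagonal
    fitness = np.array([objective(x) for x in points])
    return points, fitness

# Example: probe a simple 2-D sphere landscape.
points, fitness = diagonal_walk(lambda x: float(np.sum(x ** 2)),
                                lower=[-5, -5], upper=[5, 5])
print(fitness[:3])
```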
Related papers
- Procedural Content Generation in Games: A Survey with Insights on Emerging LLM Integration [1.03590082373586] (arXiv, 2024-10-21)
Procedural Content Generation (PCG) is defined as the automatic creation of game content using algorithms.
It can increase player engagement and ease the work of game designers.
Recent advances in deep learning approaches in PCG have enabled researchers and practitioners to create more sophisticated content.
The arrival of Large Language Models (LLMs), however, has truly disrupted the trajectory of PCG advancement.
- Optimizing Feature Selection with Genetic Algorithms: A Review of Methods and Applications [4.395397502990339] (arXiv, 2024-09-05)
Genetic algorithms (GAs) have been proposed to remedy the drawbacks of conventional feature selection methods by avoiding local optima and improving the selection process itself.
This manuscript presents a sweeping review of GA-based feature selection techniques and their effectiveness across different domains.
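For readers unfamiliar with the technique, a minimal sketch of GA-based
feature selection follows; the bit-mask encoding, the operators, and the
`score` callback are generic textbook choices, not the specific methods
reviewed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_feature_selection(X, y, score, pop_size=20, generations=30,
                         mutation_rate=0.05):
    """Toy GA for feature selection: individuals are boolean feature masks;
    uniform crossover, bit-flip mutation, truncation selection."""
    n_features = X.shape[1]

    def fitness(mask):
        if not mask.any():
            return -np.inf                    # empty subsets are invalid
        return score(X[:, mask], y)           # e.g. cross-validated accuracy

    pop = rng.random((pop_size, n_features)) < 0.5   # random initial masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-(pop_size // 2):]]  # keep best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_features) < 0.5, a, b)  # crossover
            child ^= rng.random(n_features) < mutation_rate       # mutation
            children.append(child)
        pop = np.vstack([parents, *children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]        # best mask found
```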
- Large-scale Benchmarking of Metaphor-based Optimization Heuristics [5.081212121019668] (arXiv, 2024-02-15)
We run a set of 294 algorithm implementations on the BBOB function suite.
We investigate how the choice of the budget, the performance measure, or other aspects of experimental design impact the comparison of these algorithms.
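A hedged toy illustration of why such design choices matter: with synthetic
convergence traces (not BBOB data), a fixed-budget comparison and a
fixed-target comparison can rank the same two algorithms differently.

```python
import numpy as np

# Synthetic best-so-far error traces for two hypothetical algorithms
# (illustrative numbers only, not BBOB results).
evals = np.arange(1, 101)
alg_a = 10.0 / evals                      # steady but slowing improvement
alg_b = np.where(evals < 40, 5.0, 0.01)   # slow start, late breakthrough

budget = 30  # fixed-budget view: who has the lower error after 30 evaluations?
print("fixed budget:", alg_a[budget - 1], "vs", alg_b[budget - 1])  # A wins

def evals_to_target(trace, target):
    """First evaluation count at which the error drops to `target` or below."""
    hits = np.nonzero(trace <= target)[0]
    return int(hits[0]) + 1 if hits.size else None

target = 0.05  # fixed-target view: who reaches error 0.05 first?
print("fixed target:", evals_to_target(alg_a, target),
      "vs", evals_to_target(alg_b, target))  # B wins; A never gets there
```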
- A Thorough Examination of Decoding Methods in the Era of LLMs [72.65956436513241] (arXiv, 2024-02-10)
Decoding methods play an indispensable role in converting language models from next-token predictors into practical task solvers.
This paper provides a comprehensive and multifaceted analysis of various decoding methods within the context of large language models.
Our findings reveal that decoding method performance is notably task-dependent and influenced by factors such as alignment, model size, and quantization.
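As a reminder of what a decoding method does, here is a generic sketch of one
standard choice, temperature scaling combined with nucleus (top-p) sampling;
it is a textbook implementation, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_top_p(logits, temperature=0.8, top_p=0.9):
    """One decoding step: temperature scaling plus nucleus (top-p) sampling
    over a single logits vector. Generic sketch, not tied to any model."""
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))        # stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                # tokens by descending prob
    cum = np.cumsum(probs[order])
    cut = int(np.searchsorted(cum, top_p))         # smallest nucleus >= top_p
    keep = order[:cut + 1]
    nucleus = probs[keep] / probs[keep].sum()      # renormalise within nucleus
    return int(rng.choice(keep, p=nucleus))

print(sample_top_p(np.array([2.0, 1.0, 0.5, -1.0])))
```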
- Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design [54.39859618450935] (arXiv, 2023-10-04)
We show that it is possible to meta-learn update rules, with the hope of discovering algorithms that can perform well on a wide range of RL tasks.
Despite impressive initial results from algorithms such as Learned Policy Gradient (LPG), there remains a gap when these algorithms are applied to unseen environments.
In this work, we examine how characteristics of the meta-supervised-training distribution impact the performance of these algorithms.
- A new derivative-free optimization method: Gaussian Crunching Search [0.0] (arXiv, 2023-07-24)
We introduce a novel optimization method called Gaussian Crunching Search (GCS).
Inspired by the behaviour of particles in a Gaussian distribution, GCS aims to efficiently explore the solution space and converge towards the global optimum.
This research paper serves as a valuable resource for researchers, practitioners, and students interested in optimization.
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408] (arXiv, 2022-10-27)
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
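For context, a generic sketch of zeroth-order sign-based gradient descent
follows; the Gaussian two-point gradient estimator and all hyperparameters
are textbook assumptions and may differ from the paper's exact variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_sign_gd(objective, x0, lr=0.1, mu=0.01, n_samples=20, steps=200):
    """Zeroth-order sign-based gradient descent: estimate the gradient of a
    black-box objective from random Gaussian perturbations, then step along
    the negative sign of the estimate (with a decaying step size)."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            u = rng.standard_normal(x.shape)
            # Two-point finite-difference estimate along direction u.
            grad += (objective(x + mu * u) - objective(x - mu * u)) / (2 * mu) * u
        x -= lr / np.sqrt(t + 1) * np.sign(grad / n_samples)
    return x

# Example: minimise a simple quadratic as a stand-in black box.
print(zo_sign_gd(lambda z: float(np.sum(z ** 2)), x0=[3.0, -2.0]))
```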
- Geometric Methods for Sampling, Optimisation, Inference and Adaptive Agents [102.42623636238399] (arXiv, 2022-03-20)
We identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making.
We derive algorithms that exploit these geometric structures to solve these problems efficiently.
- Information Theoretic Meta Learning with Gaussian Processes [74.54485310507336] (arXiv, 2020-09-07)
We formulate meta learning using information theoretic concepts, namely mutual information and the information bottleneck.
By making use of variational approximations to the mutual information, we derive a general and tractable framework for meta learning.
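For reference, the information bottleneck objective alluded to here is
commonly written as follows (standard textbook form, not notation taken from
the paper):

```latex
% Information bottleneck: compress X into a representation Z while keeping
% Z predictive of Y; beta trades compression against relevant information.
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)
```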
- PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees [77.67258935234403] (arXiv, 2020-02-13)
We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning.
We develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization.
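As background, one commonly cited PAC-Bayesian generalization bound (a
McAllester-style form; the paper derives its own, possibly different bounds)
reads:

```latex
% With probability at least 1 - delta over an i.i.d. sample of size n,
% for every posterior Q over hypotheses (P is a fixed data-free prior):
\mathbb{E}_{h \sim Q}[L(h)] \le \mathbb{E}_{h \sim Q}[\hat{L}(h)]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln \frac{2\sqrt{n}}{\delta}}{2n}}
```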
This list is automatically generated from the titles and abstracts of the papers on this site.