G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning
- URL: http://arxiv.org/abs/2407.10483v1
- Date: Mon, 15 Jul 2024 07:11:00 GMT
- Title: G-PCGRL: Procedural Graph Data Generation via Reinforcement Learning
- Authors: Florian Rupp, Kai Eckert
- Abstract summary: In games, graph-based data structures are omnipresent and represent game economies, skill trees or complex, branching quest lines.
We propose a novel and controllable method for the procedural generation of graph data using reinforcement learning.
Our method is capable of generating graph-based content quickly and reliably to support and inspire designers in the game creation process.
- Score: 0.28273304533873334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph data structures offer a versatile and powerful means to model relationships and interconnections in various domains, promising substantial advantages in data representation, analysis, and visualization. In games, graph-based data structures are omnipresent and represent, for example, game economies, skill trees or complex, branching quest lines. With this paper, we propose G-PCGRL, a novel and controllable method for the procedural generation of graph data using reinforcement learning. To this end, we frame this problem as manipulating a graph's adjacency matrix to fulfill a given set of constraints. Our method adapts and extends the Procedural Content Generation via Reinforcement Learning (PCGRL) framework and introduces new representations to frame the problem of graph data generation as a Markov decision process. We compare the performance of our method with the original PCGRL, the run time with a random search and an evolutionary algorithm, and evaluate G-PCGRL on two graph data domains in games: game economies and skill trees. The results show that our method is capable of generating graph-based content quickly and reliably to support and inspire designers in the game creation process. In addition, trained models are controllable in terms of the type and number of nodes to be generated.
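To make the adjacency-matrix framing concrete, the following is a minimal, hypothetical sketch of such an environment in the style of a Gymnasium interface: the state is the adjacency matrix, an action toggles a single edge, and the reward is the change in the number of satisfied constraints. The class name, the toy constraint set (minimum degree plus an edge budget), and the reward shaping are illustrative assumptions and are not taken from the paper or the PCGRL codebase.

```python
# Sketch only: graph generation framed as an MDP over an adjacency matrix.
# Constraint set and reward shaping are illustrative assumptions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class GraphGenEnv(gym.Env):
    """State: symmetric adjacency matrix A of a fixed-size node set.
    Action: pick one upper-triangle cell (i, j) and toggle that edge.
    Reward: change in the number of satisfied constraints."""

    def __init__(self, n_nodes=6, max_steps=100):
        super().__init__()
        self.n = n_nodes
        self.max_steps = max_steps
        # One discrete action per upper-triangle cell (undirected graph).
        self.action_space = spaces.Discrete(self.n * (self.n - 1) // 2)
        self.observation_space = spaces.MultiBinary([self.n, self.n])

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.A = np.zeros((self.n, self.n), dtype=np.int8)
        self.steps = 0
        self.prev_score = self._score()
        return self.A.copy(), {}

    def _score(self):
        """Toy constraints (assumption): every node has degree >= 1,
        and the total number of edges stays within a budget."""
        deg = self.A.sum(axis=1)
        score = int((deg >= 1).sum())               # per-node degree constraint
        score += int(self.A.sum() // 2 <= self.n)   # global edge-budget constraint
        return score

    def step(self, action):
        rows, cols = np.triu_indices(self.n, k=1)
        r, c = rows[action], cols[action]
        self.A[r, c] ^= 1                 # toggle the chosen edge
        self.A[c, r] = self.A[r, c]       # keep A symmetric
        self.steps += 1
        score = self._score()
        reward = float(score - self.prev_score)   # reward progress toward constraints
        self.prev_score = score
        terminated = score == self.n + 1           # all constraints satisfied
        truncated = self.steps >= self.max_steps
        return self.A.copy(), reward, terminated, truncated, {}
```

Any standard RL algorithm (for example a policy-gradient or DQN implementation) could be trained against such an environment; the representations and reward design actually used by G-PCGRL differ and are described in the paper.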
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research tackles these challenges by advancing graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- SCGG: A Deep Structure-Conditioned Graph Generative Model [9.046174529859524]
SCGG is a conditional deep graph generation method that considers a particular type of structural condition.
The architecture of SCGG consists of a graph representation learning network and an autoregressive generative model, which is trained end-to-end.
Experimental results on both synthetic and real-world datasets demonstrate the superiority of our method compared with state-of-the-art baselines.
arXiv Detail & Related papers (2022-09-20T12:33:50Z)
- Relational Self-Supervised Learning on Graphs [6.891327852064418]
Graph representation learning (GRL) has been a powerful strategy for analyzing graph-structured data.
We propose a novel GRL method, called RGRL, that learns from the relational information generated from the graph itself.
By considering the relationship among nodes in both global and local perspectives, RGRL overcomes limitations of previous contrastive and non-contrastive methods.
arXiv Detail & Related papers (2022-08-21T12:33:16Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates common issues in generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Augmentation-Free Self-Supervised Learning on Graphs [7.146027549101716]
We propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL.
Specifically, we generate an alternative view of a graph by discovering nodes that share the local structural information and the global semantics with the graph.
arXiv Detail & Related papers (2021-12-05T04:20:44Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions to enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods (a generic sketch of this contrastive setup follows the list).
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
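As a generic illustration of the contrastive-with-augmentations idea summarized in the GraphCL entry above, here is a small, self-contained sketch: two randomly edge-dropped views of each graph are encoded with a shared dense GCN-style layer and pulled together by an NT-Xent loss. The encoder, the augmentation, and all names are simplified assumptions and do not reflect the GraphCL reference implementation.

```python
# Sketch only: contrastive pretraining on graphs via two augmented views
# of each graph and an NT-Xent loss on pooled graph embeddings.
import torch
import torch.nn.functional as F


def drop_edges(adj: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Augmentation: randomly drop roughly a fraction p of edges, keeping A symmetric."""
    keep = (torch.rand_like(adj) > p).float()
    keep = torch.triu(keep, diagonal=1)
    return adj * (keep + keep.t())


def gcn_embed(adj: torch.Tensor, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """One dense GCN-style propagation step, then mean pooling to a graph vector."""
    a_hat = adj + torch.eye(adj.size(0))                    # add self-loops
    deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
    h = torch.relu((a_hat / deg) @ x @ w)                   # row-normalized message passing
    return h.mean(dim=0)                                    # graph-level embedding


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss over a batch of paired graph embeddings."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2B, d)
    sim = z @ z.t() / tau
    mask = torch.eye(sim.size(0), dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))              # exclude self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)


# Toy usage: two augmented views of four random 5-node graphs, one shared weight matrix.
torch.manual_seed(0)
w = torch.randn(8, 16, requires_grad=True)
adjs = [torch.bernoulli(torch.full((5, 5), 0.4)).triu(1) for _ in range(4)]
adjs = [a + a.t() for a in adjs]
feats = [torch.randn(5, 8) for _ in range(4)]
z1 = torch.stack([gcn_embed(drop_edges(a), x, w) for a, x in zip(adjs, feats)])
z2 = torch.stack([gcn_embed(drop_edges(a), x, w) for a, x in zip(adjs, feats)])
loss = nt_xent(z1, z2)
loss.backward()  # gradients reach the shared encoder weights w
```

In a full pipeline the encoder would be a multi-layer GNN and the augmentations would be sampled from a richer pool (e.g. node dropping, attribute masking, subgraph sampling), but the loss structure stays the same.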
This list is automatically generated from the titles and abstracts of the papers in this site.