Challenges and Opportunities in Deep Reinforcement Learning with Graph
Neural Networks: A Comprehensive review of Algorithms and Applications
- URL: http://arxiv.org/abs/2206.07922v1
- Date: Thu, 16 Jun 2022 04:52:22 GMT
- Title: Challenges and Opportunities in Deep Reinforcement Learning with Graph
Neural Networks: A Comprehensive review of Algorithms and Applications
- Authors: Sai Munikoti, Deepesh Agarwal, Laya Das, Mahantesh Halappanavar,
Balasubramaniam Natarajan
- Abstract summary: In recent times, the fusion of GNN with DRL for graph-structured environments has attracted a lot of attention.
This paper provides a comprehensive review of these hybrid works.
- Score: 1.4099477870728594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep reinforcement learning (DRL) has empowered a variety of artificial
intelligence fields, including pattern recognition, robotics,
recommendation systems, and gaming. Similarly, graph neural networks (GNN) have
also demonstrated their superior performance in supervised learning for
graph-structured data. In recent times, the fusion of GNN with DRL for
graph-structured environments has attracted a lot of attention. This paper
provides a comprehensive review of these hybrid works. These works can be
classified into two categories: (1) algorithmic enhancement, where DRL and GNN
complement each other to improve the underlying methods; and (2)
application-specific enhancement, where the combination is tailored to a
particular problem domain. This fusion effectively addresses various
complex problems in engineering and life sciences. Based on the review, we
further analyze the applicability and benefits of fusing these two domains,
especially in terms of increasing generalizability and reducing computational
complexity. Finally, the key challenges in integrating DRL and GNN, and
potential future research directions are highlighted, which will be of interest
to the broader machine learning community.
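The core pattern surveyed here, using a GNN as the function approximator inside a DRL agent for a graph-structured environment, can be sketched minimally as follows. This is an illustrative toy, not an implementation from the paper: the weight matrices, the 4-node ring graph, and the mean-aggregation scheme are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_policy_logits(adj, feats, w_self, w_neigh):
    """One round of mean-aggregation message passing, then per-node scores.

    adj     : (n, n) binary adjacency matrix of the environment state
    feats   : (n, d) node feature matrix
    w_self  : (d, h) weight on a node's own features (hypothetical params)
    w_neigh : (d, h) weight on aggregated neighbour features
    Returns a length-n vector of action logits, one per node.
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)    # avoid divide-by-zero
    neigh = (adj @ feats) / deg                          # mean over neighbours
    hidden = np.tanh(feats @ w_self + neigh @ w_neigh)   # node embeddings
    return hidden.sum(axis=1)                            # scalar logit per node

# Tiny 4-node ring graph standing in for a graph-structured state.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))
w_self, w_neigh = rng.normal(size=(3, 8)), rng.normal(size=(3, 8))

logits = gnn_policy_logits(adj, feats, w_self, w_neigh)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax: a distribution over nodes
action = int(np.argmax(probs))        # greedy "pick a node" action
```

In a full DRL loop the environment would return a reward for the chosen node and the weights would be updated by policy gradient or Q-learning; because the message-passing step is defined per node and neighbourhood rather than per fixed input size, the same policy transfers across graphs of different sizes, which is the generalizability benefit the abstract highlights.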
Related papers
- Graph Neural Networks for Job Shop Scheduling Problems: A Survey [9.072608705759322]
Job shop scheduling problems (JSSPs) represent a critical and challenging class of optimization problems.
Recent years have witnessed a rapid increase in the application of graph neural networks (GNNs) to solve JSSPs.
This paper aims to thoroughly review prevailing GNN methods for different types of JSSPs and the closely related flow-shop scheduling problems.
arXiv Detail & Related papers (2024-06-20T08:22:07Z) - Generative AI for Deep Reinforcement Learning: Framework, Analysis, and Use Cases [60.30995339585003]
Deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable accomplishments.
However, DRL faces certain limitations, including low sample efficiency and poor generalization.
We present how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms.
arXiv Detail & Related papers (2024-05-31T01:25:40Z) - Unleash Graph Neural Networks from Heavy Tuning [33.948899558876604]
Graph Neural Networks (GNNs) are deep-learning architectures designed for graph-type data.
We propose a graph conditional latent diffusion framework (GNN-Diff) to generate high-performing GNNs directly by learning from checkpoints saved during a light-tuning coarse search.
arXiv Detail & Related papers (2024-05-21T06:23:47Z) - Exploring Causal Learning through Graph Neural Networks: An In-depth
Review [12.936700685252145]
We introduce a novel taxonomy that encompasses various state-of-the-art GNN methods employed in studying causality.
GNNs are further categorized based on their applications in the causality domain.
This review also touches upon the application of causal learning across diverse sectors.
arXiv Detail & Related papers (2023-11-25T10:46:06Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Graph Neural Networks Provably Benefit from Structural Information: A
Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - Renormalized Graph Neural Networks [4.200261123369236]
Graph Neural Networks (GNNs) have become essential for studying complex data, particularly when represented as graphs.
This paper proposes a new approach that applies renormalization group theory to improve GNNs' performance on graph-related tasks.
arXiv Detail & Related papers (2023-06-01T14:16:43Z) - A Comprehensive Survey on Distributed Training of Graph Neural Networks [59.785830738482474]
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields.
To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training.
The volume of related research on distributed GNN training is exceptionally vast, accompanied by an extraordinarily rapid pace of publication.
arXiv Detail & Related papers (2022-11-10T06:22:12Z) - Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a class of deep learning models trained on graphs that have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has emerged as a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z) - Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
These operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z) - Computing Graph Neural Networks: A Survey from Algorithms to
Accelerators [2.491032752533246]
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data.
This paper makes two main contributions: a review of the field of GNNs from the perspective of computing, and an in-depth analysis of current software and hardware acceleration schemes.
arXiv Detail & Related papers (2020-09-30T22:29:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.