Graph Neural Networks for Job Shop Scheduling Problems: A Survey
- URL: http://arxiv.org/abs/2406.14096v1
- Date: Thu, 20 Jun 2024 08:22:07 GMT
- Title: Graph Neural Networks for Job Shop Scheduling Problems: A Survey
- Authors: Igor G. Smit, Jianan Zhou, Robbert Reijnen, Yaoxin Wu, Jian Chen, Cong Zhang, Zaharah Bukhsh, Wim Nuijten, Yingqian Zhang
- Abstract summary: Job shop scheduling problems (JSSPs) represent a critical and challenging class of optimization problems.
Recent years have witnessed a rapid increase in the application of graph neural networks (GNNs) to solve JSSPs.
This paper aims to thoroughly review prevailing GNN methods for different types of JSSPs and the closely related flow-shop scheduling problems.
- Score: 9.072608705759322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Job shop scheduling problems (JSSPs) represent a critical and challenging class of combinatorial optimization problems. Recent years have witnessed a rapid increase in the application of graph neural networks (GNNs) to solve JSSPs, although a systematic survey of the relevant literature has been lacking. This paper aims to thoroughly review prevailing GNN methods for different types of JSSPs and the closely related flow-shop scheduling problems (FSPs), especially those leveraging deep reinforcement learning (DRL). We begin by presenting the graph representations of various JSSPs, followed by an introduction to the most commonly used GNN architectures. We then review current GNN-based methods for each problem type, highlighting key technical elements such as graph representations, GNN architectures, GNN tasks, and training algorithms. Finally, we summarize and analyze the advantages and limitations of GNNs in solving JSSPs and provide potential future research opportunities. We hope this survey can motivate and inspire the development of more powerful GNN-based approaches for tackling JSSPs and other scheduling problems.
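The graph representations the survey refers to are typically disjunctive graphs, the canonical way to cast a JSSP instance as a graph: operations are nodes, conjunctive arcs encode the fixed operation order within each job, and disjunctive edges connect operations competing for the same machine. A minimal sketch with a made-up 2-job, 2-machine instance (using networkx; an illustration, not code from the survey):

```python
# Minimal sketch: the disjunctive graph of a toy JSSP instance.
# The 2-job x 2-machine instance below is made up for illustration.
import networkx as nx

# jobs[j] = ordered list of (machine_id, processing_time) for job j
jobs = [[(0, 3), (1, 2)],   # job 0: machine 0 then machine 1
        [(1, 4), (0, 1)]]   # job 1: machine 1 then machine 0

G = nx.DiGraph()
machine_ops = {}  # machine_id -> operations processed on that machine

for j, ops in enumerate(jobs):
    prev = None
    for k, (m, p) in enumerate(ops):
        node = (j, k)  # operation k of job j
        G.add_node(node, machine=m, proc_time=p)
        if prev is not None:
            G.add_edge(prev, node, kind="conjunctive")  # job precedence
        machine_ops.setdefault(m, []).append(node)
        prev = node

# Disjunctive edges are undirected in principle; added here as arc pairs.
# A feasible schedule corresponds to orienting each pair one way.
for m, ops in machine_ops.items():
    for i in range(len(ops)):
        for l in range(i + 1, len(ops)):
            G.add_edge(ops[i], ops[l], kind="disjunctive")
            G.add_edge(ops[l], ops[i], kind="disjunctive")

print(G.number_of_nodes(), "operations,", G.number_of_edges(), "arcs")
```

GNN-based solvers typically operate on exactly this structure, embedding the operation nodes and letting a DRL policy decide how to orient the disjunctive edges.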
Related papers
- Learning Regularization for Graph Inverse Problems [16.062351610520693]
We introduce a framework leveraging GNNs to solve Graph Inverse Problems (GRIP)
The framework is based on a combination of likelihood and prior terms, which are used to find a solution that fits the data.
We study our approach on a number of representative problems that demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-08-19T22:03:02Z)
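A generic reading of the likelihood-plus-prior formulation above (our illustration, not GRIP's actual model) is to recover a graph signal x by minimizing a data-fit term plus a regularizer, min_x ||Ax - y||^2 + lambda * R(x), with the paper's learned GNN prior playing the role of R. A toy gradient-descent sketch, with a graph-Laplacian smoothness term standing in for the learned prior:

```python
# Toy sketch of a likelihood + prior objective for a graph inverse problem.
# All quantities are made up: A is a known forward operator, y the observed
# data, and a chain-graph Laplacian smoothness term stands in for the
# learned GNN prior described in the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(4, n))          # underdetermined forward operator
x_true = rng.normal(size=n)
y = A @ x_true                       # noiseless observations

# Chain-graph Laplacian: prior R(x) = x^T L x penalizes non-smooth signals
L = np.diag([1] + [2] * (n - 2) + [1]) - np.eye(n, k=1) - np.eye(n, k=-1)

lam, lr = 0.1, 0.01
x = np.zeros(n)
for _ in range(500):
    grad = 2 * A.T @ (A @ x - y) + 2 * lam * (L @ x)  # gradient of both terms
    x -= lr * grad

print("data-fit residual:", np.linalg.norm(A @ x - y))
```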
- Acceleration Algorithms in GNNs: A Survey [34.28669696478494]
Graph Neural Networks (GNNs) have demonstrated effectiveness in various graph-based tasks.
Their inefficiency in training and inference presents challenges for scaling up to real-world and large-scale graph applications.
A range of algorithms have been proposed to accelerate training and inference of GNNs.
arXiv Detail & Related papers (2024-05-07T08:34:33Z)
- A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware [30.525912505620685]
Graph neural networks (GNNs) are emerging for machine learning research on graph-structured data.
GNNs achieve state-of-the-art performance on many tasks, but they face scalability challenges when it comes to real-world applications.
We provide a taxonomy of GNN acceleration, review the existing approaches, and suggest future research directions.
arXiv Detail & Related papers (2023-06-24T20:20:45Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
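A recurring building block in distributed GNN training is graph partitioning: each worker owns a shard of the nodes and must fetch "halo" neighbors owned by other workers before it can aggregate messages. A framework-free toy sketch of this bookkeeping (naive hash partitioning on a made-up edge list; production systems such as DistDGL use smarter min-cut partitioners):

```python
# Minimal sketch: hash-partition a graph's nodes across workers and
# collect each partition's 1-hop halo nodes, i.e. the remote neighbors
# whose features must be fetched before local aggregation. The edge list
# is made up; real distributed trainers use min-cut partitioners instead.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 4)]
num_workers = 2

def owner(v):
    return v % num_workers  # naive hash partitioning

local_nodes = {w: set() for w in range(num_workers)}
halo_nodes = {w: set() for w in range(num_workers)}

for u, v in edges:
    for a, b in ((u, v), (v, u)):      # treat the edge as undirected
        w = owner(a)
        local_nodes[w].add(a)
        if owner(b) != w:
            halo_nodes[w].add(b)       # remote neighbor needed by worker w

for w in range(num_workers):
    print(f"worker {w}: owns {sorted(local_nodes[w])}, "
          f"fetches {sorted(halo_nodes[w])}")
```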
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Ranking Structured Objects with Graph Neural Networks [0.0]
RankGNNs are trained with a set of pair-wise preferences between graphs, each indicating that one graph is preferred over the other.
One practical application of this problem is drug screening, where an expert wants to find the most promising molecules in a large collection of drug candidates.
We empirically demonstrate that our proposed pair-wise RankGNN approach either significantly outperforms or at least matches the ranking performance of the naive point-wise baseline approach.
arXiv Detail & Related papers (2021-04-18T14:40:59Z)
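The pair-wise training recipe can be sketched independently of the paper's exact architecture: score each graph with a GNN readout and minimize a logistic loss that pushes the preferred graph's score above the other's. In the sketch below, the one-layer mean-aggregation scorer and the random preference pairs are made-up stand-ins:

```python
# Hedged sketch of pair-wise graph ranking. A single mean-aggregation
# message-passing layer plus a linear readout stands in for the paper's
# GNN; the preference pairs are fabricated toy data.
import torch

class TinyGraphScorer(torch.nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.msg = torch.nn.Linear(dim, dim)
        self.readout = torch.nn.Linear(dim, 1)

    def forward(self, x, adj):
        # one round of mean-neighbor message passing, then mean pooling
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.msg(adj @ x / deg))
        return self.readout(h.mean(0))

def random_graph(n=5, dim=8):
    adj = (torch.rand(n, n) < 0.4).float()
    adj = ((adj + adj.T) > 0).float()   # symmetrize
    return torch.randn(n, dim), adj

model = TinyGraphScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
pairs = [(random_graph(), random_graph()) for _ in range(32)]  # a preferred over b

for _ in range(50):
    loss = 0.0
    for (xa, aa), (xb, ab) in pairs:
        margin = model(xa, aa) - model(xb, ab)
        loss = loss + torch.nn.functional.softplus(-margin)  # want margin > 0
    opt.zero_grad()
    loss.backward()
    opt.step()
```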
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
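The summary does not spell out TWP's penalty, but the general weight-preserving recipe it belongs to (elastic-weight-consolidation-style regularization) is easy to sketch: after finishing a task, record per-parameter importance scores and penalize later drift on important weights. A generic sketch, not the actual TWP module, which additionally derives importance from the graph topology:

```python
# Generic weight-preserving penalty for continual learning, in the spirit
# of (but not identical to) topology-aware weight preserving: importance
# scores here are plain squared gradients of the old task's loss.
import torch

def importance_scores(model, loss):
    """Squared gradient of the old task's loss w.r.t. each parameter."""
    grads = torch.autograd.grad(loss, model.parameters())
    return [g.detach() ** 2 for g in grads]

def preservation_penalty(model, old_params, scores, lam=1.0):
    """lam * sum_i omega_i * (theta_i - theta_i*)^2 over all parameters."""
    penalty = 0.0
    for p, p_old, w in zip(model.parameters(), old_params, scores):
        penalty = penalty + (w * (p - p_old) ** 2).sum()
    return lam * penalty

# Usage on a toy model: compute scores after task A, then add the penalty
# to task B's loss so weights important for task A stay near theta*.
model = torch.nn.Linear(4, 2)
x, y = torch.randn(8, 4), torch.randn(8, 2)
loss_a = torch.nn.functional.mse_loss(model(x), y)
scores = importance_scores(model, loss_a)
old_params = [p.detach().clone() for p in model.parameters()]

loss_b = torch.nn.functional.mse_loss(model(torch.randn(8, 4)), torch.randn(8, 2))
total = loss_b + preservation_penalty(model, old_params, scores, lam=0.5)
total.backward()
```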
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
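As we read the name, the instruction-pointer idea is to keep a soft distribution over which statement is currently executing and to propagate it through branch probabilities rather than taking hard jumps. A toy numpy illustration with a fabricated four-statement control-flow graph (the branch probabilities are learned in IPA-GNN, hand-set here):

```python
# Toy illustration of a soft instruction pointer: instead of one active
# statement, keep a probability distribution p over statements and update
# it with a branch matrix B, where B[i, j] is the probability that control
# flows from statement i to j. B is hand-made here, learned in IPA-GNN.
import numpy as np

# Fabricated control flow: 0 -> 1; 1 branches to 2 (0.7) or 3 (0.3);
# 2 -> 3; 3 loops on itself (program exit).
B = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.7, 0.3],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])

p = np.array([1.0, 0.0, 0.0, 0.0])  # execution starts at statement 0
for step in range(4):
    p = p @ B                        # soft instruction-pointer update
    print(f"step {step + 1}: {np.round(p, 3)}")
```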
- Computing Graph Neural Networks: A Survey from Algorithms to Accelerators [2.491032752533246]
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data.
This paper makes two main contributions: a review of the field of GNNs from the perspective of computing, and an in-depth analysis of current software and hardware acceleration schemes.
arXiv Detail & Related papers (2020-09-30T22:29:27Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques: GNNs over dense fixed graphs for low-dimensional problems, and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum, and a six-DoF robot arm show that GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
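The "learning the sampling distribution in RRT" direction can be sketched by making RRT's sampler pluggable and swapping in a learned proposal. Below, `learned_sampler` is a stand-in stub (goal-biased Gaussian sampling) for the GNN-based distribution; the obstacle-free 2-D world and step size are made up for illustration:

```python
# Bare-bones 2-D RRT with a pluggable sampling distribution.
# `learned_sampler` is a hand-made stub standing in for the GNN-learned
# distribution discussed in the paper; the world has no obstacles.
import random, math

def learned_sampler(goal):
    if random.random() < 0.3:                       # goal bias
        return (goal[0] + random.gauss(0, 0.5),
                goal[1] + random.gauss(0, 0.5))
    return (random.uniform(0, 10), random.uniform(0, 10))

def rrt(start, goal, step=0.5, iters=2000, tol=0.5):
    tree = {start: None}                            # node -> parent
    for _ in range(iters):
        s = learned_sampler(goal)
        near = min(tree, key=lambda n: math.dist(n, s))
        d = math.dist(near, s)
        new = (near[0] + step * (s[0] - near[0]) / d,
               near[1] + step * (s[1] - near[1]) / d) if d > step else s
        tree[new] = near                            # no obstacles: always valid
        if math.dist(new, goal) < tol:              # reconstruct the path
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

print(len(rrt((1.0, 1.0), (9.0, 9.0)) or []), "waypoints")
```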