Learning the Quality of Machine Permutations in Job Shop Scheduling
- URL: http://arxiv.org/abs/2207.03244v1
- Date: Thu, 7 Jul 2022 11:53:10 GMT
- Title: Learning the Quality of Machine Permutations in Job Shop Scheduling
- Authors: Andrea Corsini, Simone Calderara, and Mauro Dell'Amico
- Abstract summary: We propose a novel supervised learning task that aims at predicting the quality of machine permutations.
Then, we design an original methodology to estimate this quality that allows us to create an accurate sequential deep learning model.
- Score: 9.972171952370287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the power demonstrated by Machine Learning (ML) has
increasingly attracted the interest of the optimization community that is
starting to leverage ML for enhancing and automating the design of optimal and
approximate algorithms. One combinatorial optimization problem that has been
tackled with ML is the Job Shop Scheduling Problem (JSP). Most of the recent
works focusing on the JSP and ML are based on Deep Reinforcement Learning
(DRL), and only a few of them leverage supervised learning techniques. The
recurrent reasons for avoiding supervised learning seem to be the difficulty in
casting the right learning task, i.e., what is meaningful to predict, and how
to obtain labels. Therefore, we first propose a novel supervised learning task
that aims at predicting the quality of machine permutations. Then, we design an
original methodology to estimate this quality that allows us to create an accurate
sequential deep learning model (binary accuracy above 95%). Finally, we
empirically demonstrate the value of predicting the quality of machine
permutations by enhancing the performance of a simple Tabu Search algorithm
inspired by the works in the literature.
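To make the learning task concrete, the sketch below shows how a sequential model could classify machine permutations as good or bad, matching the binary framing above. The per-operation feature encoding, the LSTM architecture, and all sizes are illustrative assumptions, not the authors' model.
```python
# Minimal sketch (PyTorch): a sequential binary classifier for machine
# permutations. Input encoding and model size are assumptions for
# illustration, not the paper's architecture.
import torch
import torch.nn as nn

class PermutationQualityNet(nn.Module):
    def __init__(self, feat_dim: int = 8, hidden: int = 64):
        super().__init__()
        # An LSTM reads the operations of one machine in permutation order.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit: "good" vs "bad" permutation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_operations, feat_dim)
        _, (h, _) = self.encoder(x)
        return self.head(h[-1]).squeeze(-1)

model = PermutationQualityNet()
loss_fn = nn.BCEWithLogitsLoss()             # binary quality labels
batch = torch.randn(32, 10, 8)               # 32 permutations, 10 ops each
labels = torch.randint(0, 2, (32,)).float()  # hypothetical quality labels
loss = loss_fn(model(batch), labels)
```
A binary cross-entropy objective over such labels corresponds to the binary-accuracy metric reported in the abstract; how the quality labels are estimated is the paper's methodological contribution and is not reproduced here.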
Related papers
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which yields faster, more stable, and more accurate convergence.
Compared with the previous state-of-the-art model, our network parameters are reduced to only 37%, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
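Expert-driven imitation learning in this setting amounts to supervised training of a policy on the expert's scheduling decisions. A minimal sketch, assuming per-job feature vectors and a plain feed-forward scorer rather than the paper's graph-based network:
```python
# Generic imitation-learning step: train a scoring policy to reproduce the
# expert's next-job choice with cross-entropy. The feature layout and the
# small feed-forward policy are placeholders, not the paper's model.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))

def imitation_loss(job_feats: torch.Tensor, expert_choice: torch.Tensor) -> torch.Tensor:
    # job_feats: (batch, num_jobs, 6); expert_choice: (batch,) index of the
    # job the expert schedules next in each state.
    scores = policy(job_feats).squeeze(-1)   # (batch, num_jobs)
    return nn.functional.cross_entropy(scores, expert_choice)

feats = torch.randn(16, 20, 6)
expert = torch.randint(0, 20, (16,))
loss = imitation_loss(feats, expert)
```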
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - Large Scale Mask Optimization Via Convolutional Fourier Neural Operator
and Litho-Guided Self Training [54.16367467777526]
We present a Convolutional Fourier Neural Operator (CFNO) that can efficiently learn mask optimization tasks.
For the first time, our machine learning-based framework outperforms state-of-the-art numerical mask optimizers.
arXiv Detail & Related papers (2022-07-08T16:39:31Z) - Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have the desired properties: they admit a natural error measure as well as algorithms with strong performance guarantees.
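For intuition about the learning-augmented setting, the toy sketch below contrasts a prediction-free robust policy (round-robin) with a policy that trusts predicted processing times (shortest-predicted-first) on the total completion time objective. It illustrates the problem setup only, not the algorithms analyzed in the paper, and the numbers are made up.
```python
# Toy illustration of non-clairvoyant scheduling with predictions.

def total_completion_spt(true_sizes, predicted_sizes):
    # Schedule non-preemptively in increasing order of *predicted* size,
    # but pay the *true* processing times.
    order = sorted(range(len(true_sizes)), key=lambda j: predicted_sizes[j])
    t, total = 0.0, 0.0
    for j in order:
        t += true_sizes[j]
        total += t
    return total

def total_completion_round_robin(true_sizes):
    # Processor-sharing round-robin: all alive jobs progress together, so the
    # job with the k-th smallest size finishes after the k-th "wave".
    remaining = sorted(true_sizes)
    t, total, alive, prev = 0.0, 0.0, len(remaining), 0.0
    for r in remaining:
        t += (r - prev) * alive
        total += t
        alive -= 1
        prev = r
    return total

true_p = [3.0, 1.0, 4.0, 2.0]
pred_p = [3.5, 0.8, 1.0, 2.2]   # prediction for the third job is badly off
print(total_completion_spt(true_p, pred_p))
print(total_completion_round_robin(true_p))
```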
arXiv Detail & Related papers (2022-02-21T13:18:11Z) - Machine Learning Constructives and Local Searches for the Travelling
Salesman Problem [7.656272344163667]
We present improvements to the computational weight of the original deep learning model.
The possibility of adding a local-search phase is explored to further improve performance.
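The local-search phase referred to above is typically a 2-opt style improvement pass over the tour produced by the learned constructive. A minimal, standard 2-opt sketch (not the paper's specific implementation):
```python
# Minimal 2-opt improvement pass over a TSP tour, the kind of local-search
# phase that can follow an ML-based constructive heuristic.
import math

def tour_length(tour, coords):
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, coords):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for k in range(i + 1, len(tour)):
                # Reverse the segment tour[i..k] and keep it if shorter.
                candidate = tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]
                if tour_length(candidate, coords) < tour_length(tour, coords):
                    tour, improved = candidate, True
    return tour

coords = [(0, 0), (1, 5), (5, 5), (6, 1), (3, 2)]
print(two_opt(list(range(len(coords))), coords))
```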
arXiv Detail & Related papers (2021-08-02T14:34:44Z) - Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and
Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
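For reference, the episode-based meta-update that the paper contrasts with its memory-based algorithms looks roughly like the first-order MAML sketch below. The toy regression tasks and hyper-parameters are placeholders, and this is the vanilla scheme, not the paper's method.
```python
# First-order MAML (FOMAML) sketch on toy regression tasks.
import torch
import torch.nn as nn

meta = nn.Linear(4, 1)
inner_lr = 0.05
meta_opt = torch.optim.SGD(meta.parameters(), lr=0.01)

def sample_task():
    # Hypothetical task: a random linear map, split into support/query sets.
    w = torch.randn(4, 1)
    xs, xq = torch.randn(16, 4), torch.randn(16, 4)
    return xs, xs @ w, xq, xq @ w

for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):  # one "episode" = a small batch of sampled tasks
        xs, ys, xq, yq = sample_task()
        # Inner step: adapt a copy of the meta-parameters on the support set.
        w, b = (p.detach().clone().requires_grad_(True) for p in meta.parameters())
        inner = nn.functional.mse_loss(xs @ w.t() + b, ys)
        gw, gb = torch.autograd.grad(inner, (w, b))
        w2, b2 = w - inner_lr * gw, b - inner_lr * gb
        # Outer loss on the query set; first-order: its gradient w.r.t. the
        # adapted parameters is used directly as the meta-gradient.
        outer = nn.functional.mse_loss(xq @ w2.t() + b2, yq)
        gw2, gb2 = torch.autograd.grad(outer, (w2, b2))
        for p, g in zip(meta.parameters(), (gw2, gb2)):
            p.grad = g if p.grad is None else p.grad + g
    meta_opt.step()
```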
arXiv Detail & Related papers (2021-06-09T08:47:58Z) - First-order Optimization for Superquantile-based Supervised Learning [0.0]
We propose a first-order optimization algorithm to minimize a superquantile-based learning objective.
The proposed algorithm is based on smoothing the superquantile function by infimal convolution.
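The superquantile (also known as CVaR) of a loss distribution is the mean of its worst (1-alpha) tail, and its Rockafellar-Uryasev form makes it amenable to first-order optimization once smoothed. The sketch below computes the unsmoothed empirical superquantile for intuition only; it does not reproduce the paper's infimal-convolution smoothing.
```python
# Superquantile (CVaR) of a batch of losses via the Rockafellar-Uryasev form:
# CVaR_a = min_eta { eta + E[(loss - eta)_+] / (1 - a) }.
# For an empirical sample, the alpha-quantile is (approximately) the minimizer.
import torch

def superquantile(losses: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    eta = torch.quantile(losses, alpha)
    return eta + torch.relu(losses - eta).mean() / (1.0 - alpha)

losses = torch.tensor([0.1, 0.2, 0.3, 2.0, 5.0])
print(superquantile(losses, alpha=0.8))
```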
arXiv Detail & Related papers (2020-09-30T11:43:45Z) - Mastering Rate based Curriculum Learning [78.45222238426246]
We argue that the notion of learning progress itself has several shortcomings that lead to low sample efficiency for the learner.
We propose a new algorithm, based on the notion of mastering rate, that significantly outperforms learning progress-based algorithms.
arXiv Detail & Related papers (2020-08-14T16:34:01Z)