Deep learning-driven scheduling algorithm for a single machine problem
minimizing the total tardiness
- URL: http://arxiv.org/abs/2402.14847v1
- Date: Mon, 19 Feb 2024 15:34:09 GMT
- Title: Deep learning-driven scheduling algorithm for a single machine problem
minimizing the total tardiness
- Authors: Michal Bouška, Přemysl Šůcha, Antonín Novák, Zdeněk Hanzálek
- Abstract summary: We propose a deep neural network that acts as a polynomial-time estimator of the criterion value used in a single-pass scheduling algorithm.
We show that our machine learning-driven approach can efficiently generalize information from the training phase to significantly larger instances.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we investigate the use of the deep learning method for solving
a well-known NP-hard single machine scheduling problem with the objective of
minimizing the total tardiness. We propose a deep neural network that acts as a
polynomial-time estimator of the criterion value used in a single-pass
scheduling algorithm based on Lawler's decomposition and symmetric
decomposition proposed by Della Croce et al. Essentially, the neural network
guides the algorithm by estimating the best splitting of the problem into
subproblems. The paper also describes a new method for generating the training
data set, which speeds up the training dataset generation and reduces the
average optimality gap of solutions. The experimental results show that our
machine learning-driven approach can efficiently generalize information from
the training phase to significantly larger instances. Even though the instances
used in the training phase have from 75 to 100 jobs, the average optimality gap
on instances with up to 800 jobs is 0.26%, which is almost five times less than
the gap of the state-of-the-art heuristic.
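To make the decomposition idea concrete, the following is a minimal Python sketch of a Lawler-style single-pass decomposition in which a learned estimator of the total-tardiness criterion guides the choice of split. It is an illustration under simplifying assumptions, not the authors' implementation: the `estimate` callback is a hypothetical stand-in for the paper's deep neural network (here the exact EDD-sequence tardiness is plugged in for demonstration), and the symmetric decomposition of Della Croce et al. is omitted.

```python
# Illustrative sketch only: a single-pass, Lawler-style decomposition for
# 1 || sum T_j in which a learned estimator replaces exact subproblem
# evaluation. `estimate` stands in for the paper's deep network.

from typing import Callable, List, Tuple

Job = Tuple[int, int]  # (processing_time, due_date)


def total_tardiness(sequence: List[Job], start: int = 0) -> int:
    """Exact criterion for a fixed order: sum of max(0, completion - due_date)."""
    t, tardiness = start, 0
    for p, d in sequence:
        t += p
        tardiness += max(0, t - d)
    return tardiness


def decompose(jobs: List[Job], start: int,
              estimate: Callable[[List[Job], int], float]) -> List[Job]:
    """Greedy single-pass, Lawler-style decomposition guided by an estimator.

    Jobs are kept in EDD (earliest-due-date) order; the longest job k is tried
    in every candidate position j >= k, and the split with the smallest
    estimated left + pivot + right tardiness is applied recursively.
    """
    if len(jobs) <= 1:
        return jobs
    jobs = sorted(jobs, key=lambda job: job[1])            # EDD order
    k = max(range(len(jobs)), key=lambda i: jobs[i][0])    # index of longest job
    best_cost, best_split = float("inf"), None
    for j in range(k, len(jobs)):
        left = jobs[:k] + jobs[k + 1:j + 1]                # jobs before the pivot
        right = jobs[j + 1:]                               # jobs after the pivot
        completion_k = start + sum(p for p, _ in left) + jobs[k][0]
        cost = (estimate(left, start)
                + max(0, completion_k - jobs[k][1])        # tardiness of the pivot
                + estimate(right, completion_k))
        if cost < best_cost:
            best_cost, best_split = cost, (left, right, completion_k)
    left, right, completion_k = best_split
    return (decompose(left, start, estimate)
            + [jobs[k]]
            + decompose(right, completion_k, estimate))


if __name__ == "__main__":
    jobs = [(4, 5), (3, 6), (7, 8), (2, 9), (5, 14)]       # (p_j, d_j)
    # The exact criterion is used as a stand-in estimator for demonstration;
    # the paper replaces this call with a trained deep neural network.
    order = decompose(jobs, 0, total_tardiness)
    print(order, total_tardiness(order))
```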
Related papers
- Taking the human out of decomposition-based optimization via artificial
intelligence: Part II. Learning to initialize [0.0]
Active and supervised learning are used to learn a surrogate model that predicts the computational performance.
The results show that the proposed approach can lead to a significant reduction in solution time.
arXiv Detail & Related papers (2023-10-10T23:49:26Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which yields faster, more stable, and more accurate convergence.
Compared with the state-of-the-art learning-based model, our model uses only 37% of its network parameters, and its average gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Algorithms that Approximate Data Removal: New Results and Limitations [2.6905021039717987]
We study the problem of deleting user data from machine learning models trained using empirical risk minimization.
We develop an online unlearning algorithm that is both computationally and memory efficient.
arXiv Detail & Related papers (2022-09-25T17:20:33Z)
- Federated Learning with a Sampling Algorithm under Isoperimetry [9.990687944474738]
Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm across several devices.
We propose a communication-efficient variant of the Langevin algorithm for sampling a posteriori.
arXiv Detail & Related papers (2022-06-02T08:19:03Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm (a generic sketch of heuristic-guided search appears after this list).
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of the automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Data-driven Algorithm for Scheduling with Total Tardiness [0.6606016007748989]
We investigate the use of deep learning for solving a classical NP-hard single machine scheduling problem.
We have designed a regressor containing a deep neural network that learns and predicts the criterion of a given set of jobs.
Our data-driven approach can efficiently generalize information from the training phase to significantly larger instances.
arXiv Detail & Related papers (2020-05-12T07:16:43Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
Our method requires a much smaller number of communication rounds, both in theory and in practice.
Our experiments on several datasets demonstrate the effectiveness of our method and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)
- Channel Assignment in Uplink Wireless Communication using Machine
Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
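As a complement to the tree-search entry above (the HATS paper), below is a generic, hedged Python sketch of best-first search in which the heuristic h(n) is supplied by a learned model. It uses a plain expansion budget instead of the memory-bounded A* variant mentioned in that paper, and all names (`expand`, `heuristic`, etc.) are hypothetical placeholders rather than that paper's API.

```python
# Generic sketch of best-first search guided by a learned heuristic,
# in the spirit of the HATS entry above -- not the authors' algorithm.

import heapq
from typing import Callable, Dict, Hashable, Iterable, List, Optional, Tuple

State = Hashable


def best_first_search(start: State,
                      is_goal: Callable[[State], bool],
                      expand: Callable[[State], Iterable[Tuple[State, float]]],
                      heuristic: Callable[[State], float],
                      max_expansions: int = 100_000) -> Optional[List[State]]:
    """A*-style search with f(n) = g(n) + h(n); h may come from a trained model."""
    frontier: List[Tuple[float, int, State]] = [(heuristic(start), 0, start)]
    g: Dict[State, float] = {start: 0.0}
    parent: Dict[State, Optional[State]] = {start: None}
    counter, expanded = 1, 0                      # counter breaks ties in the heap
    while frontier and expanded < max_expansions:
        _, _, node = heapq.heappop(frontier)
        expanded += 1
        if is_goal(node):
            path = [node]
            while parent[path[-1]] is not None:   # walk back to the start state
                path.append(parent[path[-1]])
            return list(reversed(path))
        for child, step_cost in expand(node):
            new_g = g[node] + step_cost
            if child not in g or new_g < g[child]:
                g[child], parent[child] = new_g, node
                heapq.heappush(frontier, (new_g + heuristic(child), counter, child))
                counter += 1
    return None                                   # budget exhausted or no solution
```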
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.