Data-driven Algorithm for Scheduling with Total Tardiness
- URL: http://arxiv.org/abs/2005.05579v1
- Date: Tue, 12 May 2020 07:16:43 GMT
- Title: Data-driven Algorithm for Scheduling with Total Tardiness
- Authors: Michal Bouška, Antonín Novák, Přemysl Šůcha, István Módos, and Zdeněk Hanzálek
- Abstract summary: We investigate the use of deep learning for solving a classical NP-hard single machine scheduling problem.
We have designed a regressor containing a deep neural network that learns and predicts the criterion of a given set of jobs.
Our data-driven approach can efficiently generalize information from the training phase to significantly larger instances.
- Score: 0.6606016007748989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate the use of deep learning for solving a
classical NP-hard single machine scheduling problem where the criterion is to
minimize the total tardiness. Instead of designing an end-to-end machine
learning model, we utilize a well-known decomposition of the problem and
enhance it with a data-driven approach. We have designed a regressor containing
a deep neural network that learns and predicts the criterion of a given set of
jobs. The network acts as a polynomial-time estimator of the criterion that is
used in a single-pass scheduling algorithm based on Lawler's decomposition
theorem. Essentially, the regressor guides the algorithm to select the best
position for each job. The experimental results show that our data-driven
approach can efficiently generalize information from the training phase to
significantly larger instances (up to 350 jobs), where it achieves an
optimality gap of about 0.5%, roughly a quarter of the gap of the
state-of-the-art NBR heuristic.
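To make the decomposition concrete, below is a minimal runnable sketch of a Lawler-style single-pass scheduler guided by a criterion estimator. The `Job` fields, the greedy recursion, and the `edd_tardiness` stub (standing in for the paper's deep regressor) are illustrative assumptions, not the authors' exact implementation.

```python
from dataclasses import dataclass

@dataclass
class Job:
    p: int  # processing time
    d: int  # due date

def edd_tardiness(jobs, start):
    """Stub estimator: total tardiness of an EDD sequence (placeholder for the DNN)."""
    t, tard = start, 0
    for j in sorted(jobs, key=lambda j: j.d):
        t += j.p
        tard += max(0, t - j.d)
    return tard

def single_pass_schedule(jobs, start=0, estimate=edd_tardiness):
    """Fix the longest job at its estimated-best position, then recurse (Lawler-style)."""
    if len(jobs) <= 1:
        return list(jobs)
    jobs = sorted(jobs, key=lambda j: j.d)               # EDD order
    k = max(range(len(jobs)), key=lambda i: jobs[i].p)   # index of the longest job
    best_pos, best_val = k, float("inf")
    for pos in range(k, len(jobs)):                      # candidate positions for job k
        left = [jobs[i] for i in range(pos + 1) if i != k]
        c_k = start + sum(j.p for j in left) + jobs[k].p
        val = (estimate(left, start)                     # estimated tardiness before k
               + max(0, c_k - jobs[k].d)                 # exact tardiness of job k
               + estimate(jobs[pos + 1:], c_k))          # estimated tardiness after k
        if val < best_val:
            best_pos, best_val = pos, val
    left = [jobs[i] for i in range(best_pos + 1) if i != k]
    c_k = start + sum(j.p for j in left) + jobs[k].p
    return (single_pass_schedule(left, start, estimate)
            + [jobs[k]]
            + single_pass_schedule(jobs[best_pos + 1:], c_k, estimate))

jobs = [Job(4, 5), Job(2, 6), Job(6, 8), Job(3, 9)]
print([(j.p, j.d) for j in single_pass_schedule(jobs)])  # job (6, 8) ends up last
```

Swapping `edd_tardiness` for a trained network that predicts the total tardiness of a job set recovers the overall scheme the abstract describes.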
Related papers
- Deep learning-driven scheduling algorithm for a single machine problem minimizing the total tardiness [0.0]
We propose a deep neural network that acts as a polynomial-time estimator of the criterion value used in a single-pass scheduling algorithm.
We show that our machine learning-driven approach can efficiently generalize information from the training phase to significantly larger instances.
arXiv Detail & Related papers (2024-02-19T15:34:09Z)
- Neural Algorithmic Reasoning for Combinatorial Optimisation [20.36694807847833]
We propose leveraging recent advancements in neural algorithmic reasoning to improve the learning of CO problems.
We suggest pre-training our neural model on relevant algorithms before training it on CO instances.
Our results demonstrate that by using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.
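As a hedged illustration of this two-stage recipe, the sketch below pre-trains a tiny MLP to reproduce an algorithm's output (element ranks, i.e. sorting) before fine-tuning it on a toy downstream regression; the architecture, the sorting task, and the stand-in CO objective are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: pre-train to reproduce an algorithm's output (here: ranks, i.e. sorting).
for _ in range(200):
    x = torch.rand(32, 8)
    target = x.argsort(dim=1).argsort(dim=1).float()  # rank of each element
    loss = nn.functional.mse_loss(model(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: fine-tune on the downstream task (here: a toy range-of-values regression).
for _ in range(200):
    x = torch.rand(32, 8)
    y = x.max(dim=1, keepdim=True).values - x.min(dim=1, keepdim=True).values
    loss = nn.functional.mse_loss(model(x).mean(dim=1, keepdim=True), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```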
arXiv Detail & Related papers (2023-05-18T13:59:02Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and improves stability and accuracy.
Our model's network parameters are reduced to only 37% of the state-of-the-art model's, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
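Schematically, expert-driven imitation learning reduces to supervised classification against the expert's choices, as in the hedged sketch below; the stand-in expert, the per-job features, and the MLP policy are illustrative assumptions rather than the paper's graph-based model.

```python
import torch
import torch.nn as nn

n_jobs = 10
policy = nn.Sequential(nn.Linear(n_jobs, 64), nn.ReLU(), nn.Linear(64, n_jobs))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def expert_action(state):
    return state.argmin(dim=1)  # stand-in expert: pick the job with the smallest feature

for _ in range(300):
    state = torch.rand(32, n_jobs)                              # toy per-job features
    target = expert_action(state)                               # the expert's chosen job
    loss = nn.functional.cross_entropy(policy(state), target)   # imitate the expert
    opt.zero_grad()
    loss.backward()
    opt.step()
```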
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Algorithms that Approximate Data Removal: New Results and Limitations [2.6905021039717987]
We study the problem of deleting user data from machine learning models trained using empirical risk minimization.
We develop an online unlearning algorithm that is both computationally and memory efficient.
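The summary gives no algorithmic detail, but the flavor of efficient unlearning under empirical risk minimization can be shown in the simplest setting: exact one-point removal for ridge regression via a rank-one Sherman-Morrison downdate. This is a standard construction, not necessarily the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.normal(size=100)
lam = 1.0

A_inv = np.linalg.inv(X.T @ X + lam * np.eye(5))
b = X.T @ y
w = A_inv @ b  # trained ridge model

def unlearn(A_inv, b, x_i, y_i):
    """Remove one training point (x_i, y_i) in O(d^2) instead of retraining."""
    u = A_inv @ x_i
    A_inv_new = A_inv + np.outer(u, u) / (1.0 - x_i @ u)  # Sherman-Morrison downdate
    b_new = b - y_i * x_i
    return A_inv_new, b_new, A_inv_new @ b_new

A_inv2, b2, w2 = unlearn(A_inv, b, X[0], y[0])
w_retrained = np.linalg.solve(X[1:].T @ X[1:] + lam * np.eye(5), X[1:].T @ y[1:])
print(np.allclose(w2, w_retrained))  # True: matches retraining from scratch
```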
arXiv Detail & Related papers (2022-09-25T17:20:33Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
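A minimal picture of a runtime-oriented bandit with censored feedback: UCB over candidate solvers, where each observed loss is the runtime capped at a cutoff. The simulated exponential runtimes and the plain UCB rule are illustrative assumptions, not the paper's adapted algorithms.

```python
import math
import random

random.seed(0)
cutoff = 10.0
true_mean = [3.0, 6.0, 9.0]            # unknown mean runtimes of 3 solvers
counts = [0, 0, 0]
sums = [0.0, 0.0, 0.0]

for t in range(1, 1001):
    # UCB for minimization: play each arm once, then mean minus exploration bonus.
    if 0 in counts:
        a = counts.index(0)
    else:
        a = min(range(3), key=lambda i: sums[i] / counts[i]
                - math.sqrt(2 * math.log(t) / counts[i]))
    runtime = random.expovariate(1.0 / true_mean[a])
    loss = min(runtime, cutoff)        # censored observation: run killed at the cutoff
    counts[a] += 1
    sums[a] += loss

print(counts)  # the fastest solver should dominate the pull counts
```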
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design a nearest-neighbor search (NNS) algorithm for the Hamming space that has worst-case guarantees essentially matching those of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries on the MNIST and ImageNet datasets, respectively.
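For context, the sketch below implements the classical data-oblivious baseline for Hamming-space NNS, bit-sampling LSH; the paper's contribution is a data-aware hash family with worst-case guarantees, which this simple baseline does not capture.

```python
import random

random.seed(0)
dim, n_bits, n_tables = 64, 12, 8
tables = []
for _ in range(n_tables):
    bits = random.sample(range(dim), n_bits)  # sampled coordinates form the hash key
    tables.append((bits, {}))

def key(x, bits):
    return tuple(x[i] for i in bits)

def insert(x, idx):
    for bits, table in tables:
        table.setdefault(key(x, bits), []).append(idx)

def query(q, data):
    # Gather colliding candidates, then rank them by exact Hamming distance.
    cand = {i for bits, table in tables for i in table.get(key(q, bits), [])}
    return min(cand, default=None,
               key=lambda i: sum(a != b for a, b in zip(data[i], q)))

data = [[random.randint(0, 1) for _ in range(dim)] for _ in range(500)]
for i, x in enumerate(data):
    insert(x, i)
print(query(data[42], data))  # retrieves index 42 itself
```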
arXiv Detail & Related papers (2021-08-11T20:21:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
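The sketch below shows the search skeleton on a toy binary integer least-squares instance: best-first search over partial assignments with exact partial costs from a QR factorization, and a zero `heuristic` stub where HATS would plug in its learned estimator. The memory bounding of SMA* is omitted, and the whole setup is an illustrative assumption.

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)
n = 8
H = rng.normal(size=(n, n))
x_true = rng.integers(0, 2, n)
y = H @ x_true

Q, R = np.linalg.qr(H)
z = Q.T @ y  # ||y - H x||^2 == ||z - R x||^2 with R upper triangular

def heuristic(assigned):
    return 0.0  # a trained network would predict the remaining cost-to-go here

def partial_cost(assigned):
    # `assigned` fixes the last k entries of x; upper-triangular R makes the
    # last k residual entries exact, so this is a valid bound on the full cost.
    k = len(assigned)
    r = z[n - k:] - R[n - k:, n - k:] @ np.array(assigned)
    return float(r @ r)

frontier = [(0.0, ())]
while frontier:
    f, assigned = heapq.heappop(frontier)
    if len(assigned) == n:
        print("found:", assigned, "truth:", tuple(x_true))
        break
    for v in (0, 1):
        child = (v,) + assigned  # fix the next entry, moving right to left
        heapq.heappush(frontier, (partial_cost(child) + heuristic(child), child))
```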
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
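The core SA ingredient can be sketched directly: a maximum-likelihood Weibull fit of algorithm runtimes where runs killed at the cutoff contribute the survival (censoring) term. The synthetic data and the Weibull choice are assumptions; Run2Survive's decision-theoretic selection layer is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

cutoff = 8.0
t_true = weibull_min.rvs(1.5, scale=5.0, size=400, random_state=0)
obs = np.minimum(t_true, cutoff)
uncensored = t_true <= cutoff  # False where the run hit the cutoff

def neg_log_lik(params):
    shape, scale = np.exp(params)  # optimize in log space to keep parameters positive
    ll = weibull_min.logpdf(obs[uncensored], shape, scale=scale).sum()
    ll += weibull_min.logsf(obs[~uncensored], shape, scale=scale).sum()  # censored term
    return -ll

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]))
print(np.exp(res.x))  # recovered (shape, scale), roughly (1.5, 5.0)
```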
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors [75.58555462743585]
Naively trained neural networks tend to experience catastrophic forgetting in sequential task settings.
We propose a principled nonparametric approach based on the Indian Buffet Process (IBP) prior, letting the data determine how much to expand the model complexity.
We demonstrate the effectiveness of our method on a number of continual learning benchmarks and analyze how weight factors are allocated and reused throughout the training.
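The IBP prior's generative process, which lets model capacity grow with the data, can be sampled in a few lines; this sketch shows only the prior over binary factor allocations, not the paper's full inference over weight factors.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_customers = 2.0, 10
dish_counts = []                 # times each "dish" (factor) has been taken
matrix = []
for i in range(1, n_customers + 1):
    row = [rng.random() < c / i for c in dish_counts]  # reuse popular dishes
    new = rng.poisson(alpha / i)                       # try Poisson(alpha/i) new dishes
    dish_counts = [c + t for c, t in zip(dish_counts, row)] + [1] * new
    matrix.append(list(map(int, row)) + [1] * new)

width = len(dish_counts)
Z = np.array([r + [0] * (width - len(r)) for r in matrix])
print(Z)  # binary allocation matrix: rows = tasks, cols = weight factors
```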
arXiv Detail & Related papers (2020-04-21T15:20:19Z)
- Fast local linear regression with anchor regularization [21.739281173516247]
We propose a simple yet effective local model training algorithm called the fast anchor regularized local linear method (FALL).
Through experiments on synthetic and real-world datasets, we demonstrate that FALL compares favorably in terms of accuracy with the state-of-the-art network Lasso algorithm.
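The anchor-regularization idea admits a closed form, sketched below: each local linear model is shrunk toward a shared anchor vector. The data split and the zero anchor are illustrative assumptions, not FALL's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam = 3, 5.0
w_star = np.array([1.0, -2.0, 0.5])   # ground-truth weights for the toy data
w_anchor = np.zeros(d)                # e.g. a global model fit on pooled data

for _ in range(4):                    # four "local" datasets
    X = rng.normal(size=(20, d))
    y = X @ w_star + 0.1 * rng.normal(size=20)
    # Normal equations of min_w ||X w - y||^2 + lam * ||w - w_anchor||^2:
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_anchor)
    print(np.round(w, 2))             # local weights shrunk toward the anchor
```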
arXiv Detail & Related papers (2020-02-21T10:03:33Z)