Effective and interpretable dispatching rules for dynamic job shops via
guided empirical learning
- URL: http://arxiv.org/abs/2109.03323v1
- Date: Tue, 7 Sep 2021 20:46:45 GMT
- Title: Effective and interpretable dispatching rules for dynamic job shops via
guided empirical learning
- Authors: Cristiane Ferreira, Gonçalo Figueira and Pedro Amorim
- Abstract summary: This paper is the first major attempt at combining machine learning with domain problem reasoning for scheduling.
We test our approach in the classical dynamic job shop scheduling problem minimising tardiness.
Results suggest that our approach was able to find new state-of-the-art rules, which significantly outperform the existing literature.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The emergence of Industry 4.0 is making production systems more flexible and
also more dynamic. In these settings, schedules often need to be adapted in
real-time by dispatching rules. Although substantial progress had been made by
the 1990s, the performance of these rules is still rather limited. The machine
learning literature is developing a variety of methods to improve them, but the
resulting rules are difficult to interpret and do not generalise well for a
wide range of settings. This paper is the first major attempt at combining
machine learning with domain problem reasoning for scheduling. The idea
consists of using the insights obtained with the latter to guide the empirical
search of the former. Our hypothesis is that this guided empirical learning
process should result in dispatching rules that are effective and interpretable
and which generalise well to different instance classes. We test our approach
in the classical dynamic job shop scheduling problem minimising tardiness,
which is one of the most well-studied scheduling problems. Nonetheless, results
suggest that our approach was able to find new state-of-the-art rules, which
significantly outperform the existing literature in the vast majority of
settings, from loose to tight due dates and from low utilisation conditions to
congested shops. Overall, the average improvement is 19%. Moreover, the rules
are compact, interpretable, and generalise well to extreme, unseen scenarios.
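To make the setting concrete, a dispatching rule is a priority function evaluated each time a machine becomes free: every job waiting in that machine's queue is scored, and the best-scoring job is processed next. The sketch below implements the classical Apparent Tardiness Cost (ATC) rule as an example of this class of rules; it is not the rule learned in the paper, and the job fields and simulator interface are hypothetical.

```python
import math

def atc_priority(job, now, queue, k=2.0):
    """Apparent Tardiness Cost (ATC) index for one queued job.

    job: dict with 'proc_time' (processing time of the job's current
    operation), 'due_date', and 'weight'; field names are illustrative.
    Higher index means dispatch sooner.
    """
    p_bar = sum(j["proc_time"] for j in queue) / len(queue)  # mean processing time in queue
    slack = max(job["due_date"] - job["proc_time"] - now, 0.0)
    # Weighted-shortest-processing-time term, discounted by remaining slack.
    return (job["weight"] / job["proc_time"]) * math.exp(-slack / (k * p_bar))

def dispatch(queue, now):
    """Choose the next job for a machine that has just become free."""
    return max(queue, key=lambda j: atc_priority(j, now, queue))
```

The rules evolved in the paper play exactly this role, a closed-form priority expression evaluated at each dispatching decision, which is what keeps them compact and interpretable.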
Related papers
- Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple re-parameterization which we call Normalize-and-Project (see the sketch after this list).
arXiv Detail & Related papers (2024-07-01T20:58:01Z)
- Job Shop Scheduling via Deep Reinforcement Learning: a Sequence to Sequence approach [0.0]
This paper presents an end-to-end Deep Reinforcement Learning approach to scheduling that automatically learns dispatching rules.
We show that we outperform many classical approaches based on priority dispatching rules and achieve competitive results against state-of-the-art Deep Reinforcement Learning methods.
arXiv Detail & Related papers (2023-08-03T14:52:17Z)
- Rule By Example: Harnessing Logical Rules for Explainable Hate Speech Detection [13.772240348963303]
Rule By Example (RBE) is a novel exemplar-based contrastive learning approach that learns from logical rules for the task of textual content moderation.
RBE is capable of providing rule-grounded predictions, allowing for more explainable and customizable predictions compared to typical deep learning-based approaches.
arXiv Detail & Related papers (2023-07-24T16:55:37Z)
- Improving Long-Horizon Imitation Through Instruction Prediction [93.47416552953075]
In this work, we explore the use of an often unused source of auxiliary supervision: language.
Inspired by recent advances in transformer-based models, we train agents with an instruction prediction loss that encourages learning temporally extended representations that operate at a high level of abstraction.
In further analysis we find that instruction modeling is most important for tasks that require complex reasoning, while understandably offering smaller gains in environments that require simple plans.
arXiv Detail & Related papers (2023-06-21T20:47:23Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios (a minimal sketch follows this list).
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
- Distilling Task-specific Logical Rules from Large Pre-trained Models [24.66436804853525]
We develop a novel framework to distill task-specific logical rules from large pre-trained models.
Specifically, we borrow recent prompt-based language models as the knowledge expert to yield initial seed rules.
Experiments on three public named entity tagging benchmarks demonstrate the effectiveness of our proposed framework.
arXiv Detail & Related papers (2022-10-06T09:12:18Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR).
Specifically, we propose to inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model (a sketch of the regularizer follows this list).
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms including L2-SP, Mixout and SMART.
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
- From Examples to Rules: Neural Guided Rule Synthesis for Information Extraction [17.126336368896666]
We adapt recent advances in program synthesis to information extraction, synthesizing rules from provided examples.
We show that without training the synthesis algorithm on the specific domain, our synthesized rules achieve state-of-the-art performance on the 1-shot scenario of a task that focuses on few-shot learning for relation classification, and competitive performance in the 5-shot scenario.
arXiv Detail & Related papers (2022-01-16T19:27:18Z)
- Towards Learning Instantiated Logical Rules from Knowledge Graphs [20.251630903853016]
We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates that are acyclic abstract rules.
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
arXiv Detail & Related papers (2020-03-13T00:32:46Z)
- The Two Regimes of Deep Network Training [93.84309968956941]
We study the effects of different learning schedules and the appropriate way to select them.
To this end, we isolate two distinct phases, which we refer to as the "large-step regime" and the "small-step regime".
Our training algorithm can significantly simplify learning rate schedules.
arXiv Detail & Related papers (2020-02-24T17:08:24Z)
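On the Normalize-and-Project entry above, a toy numpy sketch of the effect it targets, under the assumption of scale-invariant parameters (for weights followed by a normalization layer, the loss gradient is orthogonal to the weights). Without the projection step the weight norm grows and the effective step size decays roughly like lr / ||w||^2; projecting back to unit norm after every update makes the nominal schedule the one that actually acts. The orthogonal-gradient stand-in is illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=128)
w /= np.linalg.norm(w)  # start on the unit sphere
lr = 0.1

def scale_invariant_grad(w):
    # Stand-in for the gradient of a scale-invariant loss:
    # a random direction projected to be orthogonal to w.
    g = rng.normal(size=w.shape)
    return g - (g @ w) / (w @ w) * w

for step in range(100):
    w = w - lr * scale_invariant_grad(w)  # the norm can only grow here
    w /= np.linalg.norm(w)                # "project": restore unit norm
```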
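On the incremental novel class discovery entry above, a minimal sketch of the baseline it describes: features from a frozen pre-trained backbone with a learnable linear classifier on top, so only the head is ever updated. Class and method names are hypothetical.

```python
import numpy as np

class LinearHead:
    """Learnable linear classifier over frozen pre-trained features."""

    def __init__(self, feat_dim, n_classes, lr=0.1):
        self.W = np.zeros((n_classes, feat_dim))
        self.lr = lr

    def fit_step(self, feats, labels):
        # feats: (B, D) batch from the frozen backbone; labels: (B,) ints.
        logits = feats @ self.W.T
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(len(labels)), labels] -= 1.0  # softmax cross-entropy gradient
        self.W -= self.lr * (probs.T @ feats) / len(feats)

    def predict(self, feats):
        return (feats @ self.W.T).argmax(axis=1)
```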
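On the noise stability regularization entry above, one plausible reading of the regularizer, sketched in numpy: Gaussian noise is injected at a layer's input and the layer is penalized for how far its representation moves, with the penalty added to the task loss during fine-tuning. The function name and `sigma` are illustrative assumptions, not the paper's API.

```python
import numpy as np

def noise_stability_penalty(layer_fn, x, sigma=0.1, rng=None):
    """Mean squared distance between clean and noise-perturbed representations."""
    rng = rng or np.random.default_rng(0)
    h_clean = layer_fn(x)
    h_noisy = layer_fn(x + sigma * rng.standard_normal(x.shape))
    return float(np.mean((h_clean - h_noisy) ** 2))

# During fine-tuning (applied layerwise in the LNSR framing):
# total_loss = task_loss + lam * noise_stability_penalty(layer_fn, inputs)
```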
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.