Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems
- URL: http://arxiv.org/abs/2306.02910v3
- Date: Fri, 9 Jun 2023 09:36:22 GMT
- Title: Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems
- Authors: Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, Willem van Jaarsveld
- Abstract summary: Action-Evolution Petri Nets (A-E PN) is a framework for modeling and solving dynamic task assignment problems.
A-E PN models are executable, so close-to-optimal assignment policies can be learned through Reinforcement Learning without additional modeling effort.
We demonstrate this for three archetypical assignment problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic task assignment involves assigning arriving tasks to a limited number
of resources in order to minimize the overall cost of the assignments. To
achieve optimal task assignment, it is necessary to model the assignment
problem first. While there exist separate formalisms, specifically Markov
Decision Processes and (Colored) Petri Nets, to model, execute, and solve
different aspects of the problem, there is no integrated modeling technique. To
address this gap, this paper proposes Action-Evolution Petri Nets (A-E PN) as a
framework for modeling and solving dynamic task assignment problems. A-E PN
provides a unified modeling technique that can represent all elements of
dynamic task assignment problems. Moreover, A-E PN models are executable, which
means they can be used to learn close-to-optimal assignment policies through
Reinforcement Learning (RL) without additional modeling effort. To evaluate the
framework, we define a taxonomy of archetypical assignment problems. We show
for three cases that A-E PN can be used to learn close-to-optimal assignment
policies. Our results suggest that A-E PN can be used to model and solve a
broad range of dynamic task assignment problems.
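To make the setting concrete, the sketch below is a hypothetical toy rendering of a dynamic task assignment problem as an executable environment in the spirit of A-E PN: it alternates an action step, in which one enabled assignment is fired, with an evolution step, in which tasks arrive and resources free up, and it pays the negative assignment cost as reward. All names, dynamics, and the greedy baseline are invented for illustration; this is not the authors' formalism or implementation.

```python
# Toy dynamic task assignment environment, illustrative only.
import random

class ToyAssignmentNet:
    def __init__(self, n_resources=2, horizon=20, seed=0):
        self.rng = random.Random(seed)
        self.n_resources = n_resources
        self.horizon = horizon
        self.reset()

    def reset(self):
        # "Marking": waiting task tokens (each colored with per-resource costs)
        # plus the currently free resource tokens.
        self.tasks = [self._new_task()]
        self.free = list(range(self.n_resources))
        self.t = 0
        return self._obs()

    def _new_task(self):
        return {r: self.rng.randint(1, 10) for r in range(self.n_resources)}

    def _obs(self):
        return (tuple(tuple(sorted(c.items())) for c in self.tasks),
                tuple(self.free))

    def actions(self):
        # Enabled "assignment transitions": (task_index, resource) pairs.
        return [(i, r) for i in range(len(self.tasks)) for r in self.free]

    def step(self, action):
        i, r = action                      # action phase: fire one assignment
        cost = self.tasks.pop(i)[r]
        self.free.remove(r)
        # Evolution phase: exogenous arrivals and completions.
        if self.rng.random() < 0.7:
            self.tasks.append(self._new_task())
        if self.rng.random() < 0.5:
            self.free.append(r)            # resource becomes free again
        self.t += 1
        done = self.t >= self.horizon or not self.actions()
        return self._obs(), -cost, done    # reward = negative assignment cost

# A greedy baseline; an RL agent would learn to do better.
env = ToyAssignmentNet()
obs, total, done = env.reset(), 0.0, False
while not done:
    acts = env.actions()
    a = min(acts, key=lambda ir: env.tasks[ir[0]][ir[1]])  # cheapest assignment
    obs, reward, done = env.step(a)
    total += reward
print("greedy return:", total)
```

Roughly, the `tasks` and `free` lists stand in for colored tokens in the net's marking, `actions()` enumerates the enabled assignment transitions, and an RL agent would replace the greedy policy in the driver loop.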
Related papers
- Model Evolution Framework with Genetic Algorithm for Multi-Task Reinforcement Learning [85.91908329457081]
Multi-task reinforcement learning employs a single policy to complete various tasks, aiming to develop an agent with generalizability across different scenarios.
Existing approaches typically use a routing network to generate specific routes for each task and reconstruct a set of modules into diverse models to complete multiple tasks simultaneously.
We propose a Model Evolution framework with Genetic Algorithm (MEGA), which enables the model to evolve during training according to the difficulty of the tasks (a generic evolutionary loop is sketched after this entry).
arXiv Detail & Related papers (2025-02-19T09:22:34Z)
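The MEGA entry above hinges on a genetic algorithm that evolves the model during training. As a loose illustration of the generic evaluate-select-mutate loop such methods build on (not MEGA's operators, genome encoding, or fitness), here is a self-contained toy; the quadratic fitness function is a placeholder for return on the training tasks.

```python
# Toy genetic-algorithm skeleton; genome and fitness are placeholders.
import random

rng = random.Random(0)

def fitness(genome):
    # Stand-in for "return achieved on the training tasks".
    target = [0.5, -0.2, 0.8, 0.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, scale=0.1):
    return [g + rng.gauss(0, scale) for g in genome]

population = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for gen in range(50):
    population.sort(key=fitness, reverse=True)
    elites = population[:5]                      # keep the best performers
    # Refill the population with mutated copies of the elites.
    population = elites + [mutate(rng.choice(elites)) for _ in range(15)]
print("best fitness:", round(fitness(population[0]), 4))
```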
- Towards Few-Shot Adaptation of Foundation Models via Multitask Finetuning [20.727482935029375]
Foundation models have emerged as a powerful tool for many AI problems.
In this paper, we study the theoretical justification of a multitask finetuning approach.
We present results affirming that our task selection algorithm adeptly chooses related finetuning tasks, improving model performance on target tasks (a toy similarity-based selection is sketched after this entry).
arXiv Detail & Related papers (2024-02-22T23:29:42Z)
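The few-shot entry above credits a task selection algorithm with choosing related finetuning tasks. The paper's actual criterion is not reproduced here; the sketch below only illustrates the general shape of similarity-based selection, with invented task names and embeddings.

```python
# Hypothetical similarity-based task selection: keep the auxiliary tasks
# whose (assumed) embeddings are closest to the target task's embedding.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

task_embeddings = {            # toy embeddings, purely illustrative
    "ner": [0.9, 0.1, 0.0],
    "pos": [0.8, 0.2, 0.1],
    "sentiment": [0.1, 0.9, 0.3],
    "qa": [0.2, 0.8, 0.5],
}
target = [0.85, 0.15, 0.05]    # embedding of the target task

selected = sorted(task_embeddings,
                  key=lambda t: cosine(task_embeddings[t], target),
                  reverse=True)[:2]
print("finetune on:", selected)   # picks the most similar tasks: ner, pos
```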
- Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning [63.58935783293342]
Causal Bisimulation Modeling (CBM) is a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction.
CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones.
arXiv Detail & Related papers (2024-01-23T05:43:15Z)
- Multi-Objective Optimization for Sparse Deep Multi-Task Learning [0.0]
We present a Multi-Objective Optimization algorithm using a modified Weighted Chebyshev scalarization for training Deep Neural Networks (DNNs); the standard form of this scalarization is sketched after this entry.
Our work aims to address the (economical and also ecological) sustainability issue of DNN models, with a particular focus on Deep Multi-Task models.
arXiv Detail & Related papers (2023-08-23T16:42:27Z)
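For reference, the standard weighted Chebyshev scalarization collapses a vector of objectives f_i into the single scalar max_i w_i |f_i(x) - z_i*|, where z* is an (approximate) ideal point; sweeping the weights traces out different Pareto-optimal trade-offs. The paper trains DNNs with a modified variant that is not reproduced here; the toy below shows only the plain form on a two-objective problem.

```python
# Plain weighted Chebyshev scalarization on a toy two-objective problem:
#   minimize  max_i  w_i * |f_i(x) - z_i*|,   with ideal point z*.
# The paper's *modified* variant for DNN training is not shown here.

def objectives(x):
    f1 = x ** 2              # pulls the solution toward x = 0
    f2 = (x - 2.0) ** 2      # pulls the solution toward x = 2
    return f1, f2

def chebyshev(x, weights=(0.5, 0.5), ideal=(0.0, 0.0)):
    fs = objectives(x)
    return max(w * abs(f - z) for w, f, z in zip(weights, fs, ideal))

# Crude grid search; a different weight vector yields a different compromise.
xs = [i / 1000.0 for i in range(-1000, 3001)]
best = min(xs, key=chebyshev)
print("compromise solution x =", round(best, 3))  # ~1.0 for equal weights
```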
- JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for Multi-task Mathematical Problem Solving [77.51817534090789]
We propose JiuZhang 2.0, a unified Chinese PLM specialized for multi-task mathematical problem solving.
Our idea is to maintain a moderate-sized model and employ cross-task knowledge sharing to improve the model capacity in a multi-task setting.
arXiv Detail & Related papers (2023-06-19T15:45:36Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers (a minimal gating sketch follows this entry).
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
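As a minimal reading of the TAPS idea (PyTorch assumed; this is a simplification, not the paper's exact formulation), each layer can be given a task-specific weight delta behind a learnable gate, with an L1-style penalty pushing most gates toward zero so that only a small subset of layers becomes task-specific:

```python
# Simplified gated task-specific adaptation; illustrative only.
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.requires_grad_(False)                   # shared weights frozen
        self.delta = nn.Parameter(torch.zeros(dim, dim))  # task-specific delta
        self.gate = nn.Parameter(torch.tensor(0.0))       # soft layer gate

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        weight = self.base.weight + g * self.delta
        return x @ weight.T + self.base.bias

model = nn.Sequential(GatedLinear(8), nn.ReLU(), GatedLinear(8))
x, y = torch.randn(32, 8), torch.randn(32, 8)
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    # Sparsity pressure keeps most gates (and hence most layers) shared.
    loss = loss + 0.01 * sum(torch.sigmoid(m.gate) for m in model
                             if hasattr(m, "gate"))
    loss.backward()
    opt.step()
print([round(torch.sigmoid(m.gate).item(), 3) for m in model if hasattr(m, "gate")])
```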
- Controllable Dynamic Multi-Task Architectures [92.74372912009127]
We propose a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints.
We propose disentangled training of two hypernetworks, exploiting task affinity and a novel branching regularized loss, to take input preferences and predict tree-structured models with adapted weights (a toy single-hypernetwork sketch follows this entry).
arXiv Detail & Related papers (2022-03-28T17:56:40Z)
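The controllable-architecture entry above relies on hypernetworks that turn task preferences into model weights. The toy below (PyTorch assumed) shows only the kernel of that idea, a single hypernetwork predicting one linear layer's weights from a preference vector; the paper's scheme with two hypernetworks predicting tree-structured models is far richer.

```python
# Toy hypernetwork: preference vector -> weights of one linear layer.
import torch
import torch.nn as nn

IN, OUT, PREF = 4, 3, 2   # main-layer sizes and preference dimension

hyper = nn.Sequential(nn.Linear(PREF, 32), nn.ReLU(),
                      nn.Linear(32, OUT * IN + OUT))

def main_layer(x, pref):
    flat = hyper(pref)                       # predicted weights for this pref
    W = flat[: OUT * IN].view(OUT, IN)
    b = flat[OUT * IN:]
    return x @ W.T + b

x = torch.randn(5, IN)
for pref in (torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])):
    y = main_layer(x, pref)                  # different weights per preference
    print(pref.tolist(), "->", tuple(y.shape))
```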
- Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning [65.268245109828]
In data-rich domains such as vision, language, and speech, deep learning prevails, delivering high-performance task-specific models.
Deep learning in resource-limited domains still faces multiple challenges including (i) limited data, (ii) constrained model development cost, and (iii) lack of adequate pre-trained models for effective finetuning.
Model reprogramming enables resource-efficient cross-domain machine learning by repurposing a well-developed pre-trained model from a source domain to solve tasks in a target domain without model finetuning (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-02-22T02:33:54Z)
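Read literally, model reprogramming trains only an input transformation and an output label mapping around a frozen pre-trained model. A minimal sketch under that reading (PyTorch assumed; the "source" model below is a random stand-in, not a real pre-trained network):

```python
# Minimal model-reprogramming sketch; illustrative only.
import torch
import torch.nn as nn

source = nn.Linear(16, 10)                 # pretend pre-trained source model
source.requires_grad_(False)               # never finetuned

delta = nn.Parameter(torch.zeros(16))      # trainable input perturbation
label_map = nn.Linear(10, 3)               # maps 10 source classes -> 3 target

x = torch.randn(64, 16)                    # toy target-domain inputs
y = torch.randint(0, 3, (64,))             # toy target labels
opt = torch.optim.Adam([delta] + list(label_map.parameters()), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):
    opt.zero_grad()
    logits = label_map(source(x + delta))  # reprogram: perturb, run, remap
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
print("final loss:", round(loss.item(), 3))
```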
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task-relevant information, enabling the model to be aware of the current task and encouraging it to model only the relevant quantities of the state space (a masked-loss sketch follows this entry).
We find that our method more effectively models the relevant parts of the scene conditioned on the goal and, as a result, outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
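One simple way to picture goal-aware prediction (an illustrative assumption, not the paper's exact objective) is to mask the dynamics model's training loss to the state dimensions a goal marks as relevant, so errors elsewhere carry no gradient. A PyTorch toy:

```python
# Dynamics model trained only on goal-relevant state dimensions.
import torch
import torch.nn as nn

STATE, ACT = 8, 2
model = nn.Sequential(nn.Linear(STATE + ACT, 64), nn.ReLU(),
                      nn.Linear(64, STATE))

s = torch.randn(128, STATE)
a = torch.randn(128, ACT)
s_next = s + 0.1 * torch.randn(128, STATE)       # fake transitions
goal_mask = torch.tensor([1., 1., 0., 0., 0., 0., 0., 0.])  # dims 0-1 matter

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    opt.zero_grad()
    pred = model(torch.cat([s, a], dim=-1))
    # A task-agnostic loss would average over all dims; here errors on
    # goal-irrelevant dims are zeroed out by the mask.
    loss = (goal_mask * (pred - s_next) ** 2).mean()
    loss.backward()
    opt.step()
print("goal-relevant prediction loss:", round(loss.item(), 4))
```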