A Universal Approach to Feature Representation in Dynamic Task Assignment Problems
- URL: http://arxiv.org/abs/2507.03579v1
- Date: Fri, 04 Jul 2025 13:48:28 GMT
- Title: A Universal Approach to Feature Representation in Dynamic Task Assignment Problems
- Authors: Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, Willem van Jaarsveld
- Abstract summary: Deep Reinforcement Learning (DRL) has been proposed as the state of the art for solving assignment problems. This paper proposes a method for representing and solving assignment problems with infinite state and action spaces. Experiments show that the method is suitable for representing and learning close-to-optimal task assignment policies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic task assignment concerns the optimal assignment of resources to tasks in a business process. Recently, Deep Reinforcement Learning (DRL) has been proposed as the state of the art for solving assignment problems. DRL methods usually employ a neural network (NN) as an approximator for the policy function, which ingests the state of the process and outputs a valuation of the possible assignments. However, representing the state and the possible assignments so that they can serve as inputs and outputs for a policy NN remains an open challenge, especially when tasks or resources have features with an infinite number of possible values. To solve this problem, this paper proposes a method for representing and solving assignment problems with infinite state and action spaces. In doing so, it provides three contributions: (I) A graph-based feature representation of assignment problems, which we call assignment graph; (II) A mapping from marked Colored Petri Nets to assignment graphs; (III) An adaptation of the Proximal Policy Optimization algorithm that can learn to solve assignment problems represented through assignment graphs. To evaluate the proposed representation method, we model three archetypal assignment problems ranging from finite to infinite state and action space dimensionalities. The experiments show that the method is suitable for representing and learning close-to-optimal task assignment policies regardless of the state and action space dimensionalities.
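To make the assignment-graph idea concrete, the sketch below encodes tasks and resources as feature-carrying nodes and candidate assignments as edges, with a placeholder scoring function standing in for the learned policy network. All names and feature semantics here (AssignmentGraph, score_edges, the per-node features) are illustrative assumptions, not the paper's actual implementation; in the paper the policy would be a graph-based network trained with the adapted PPO algorithm.

```python
# Hypothetical sketch of an "assignment graph": a bipartite graph whose nodes
# carry task/resource feature vectors and whose edges enumerate the currently
# feasible assignments. Names and structure are illustrative only.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    kind: str                 # "task" or "resource"
    features: List[float]     # arbitrary-valued features (infinite domains allowed)

@dataclass
class AssignmentGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)  # (task_idx, resource_idx)

    def add_node(self, kind: str, features: List[float]) -> int:
        self.nodes.append(Node(kind, features))
        return len(self.nodes) - 1

    def add_assignment_edge(self, task_idx: int, resource_idx: int) -> None:
        self.edges.append((task_idx, resource_idx))

def score_edges(graph: AssignmentGraph) -> List[float]:
    """Stand-in for a graph-based policy network: returns one score per
    candidate assignment edge. A learned policy (e.g. a GNN trained with PPO)
    would compute these scores by message passing over the graph."""
    return [sum(graph.nodes[t].features) + sum(graph.nodes[r].features)
            for t, r in graph.edges]

if __name__ == "__main__":
    g = AssignmentGraph()
    t0 = g.add_node("task", [2.5, 1.0])   # e.g. processing time, priority (assumed semantics)
    t1 = g.add_node("task", [0.7, 3.0])
    r0 = g.add_node("resource", [1.0])    # e.g. skill level (assumed semantics)
    g.add_assignment_edge(t0, r0)
    g.add_assignment_edge(t1, r0)
    scores = score_edges(g)
    best = g.edges[scores.index(max(scores))]
    print("chosen assignment (task, resource):", best)
```

Because the graph grows and shrinks with the number of pending tasks and available resources, scoring edges rather than a fixed-size action vector is what lets the action space vary in size, which is the difficulty the abstract highlights for infinite state and action spaces.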
Related papers
- Zero-Shot Whole-Body Humanoid Control via Behavioral Foundation Models [71.34520793462069]
Unsupervised reinforcement learning (RL) aims at pre-training agents that can solve a wide range of downstream tasks in complex environments. We introduce a novel algorithm regularizing unsupervised RL towards imitating trajectories from unlabeled behavior datasets. We demonstrate the effectiveness of this new approach in a challenging humanoid control problem.
arXiv Detail & Related papers (2025-04-15T10:41:11Z) - Causally Aligned Curriculum Learning [69.11672390876763]
This paper studies the problem of curriculum RL through causal lenses. We derive a sufficient graphical condition characterizing causally aligned source tasks. We develop an efficient algorithm to generate a causally aligned curriculum.
arXiv Detail & Related papers (2025-03-21T02:20:38Z) - Learning Task Representations from In-Context Learning [73.72066284711462]
Large language models (LLMs) have demonstrated remarkable proficiency in in-context learning. We introduce an automated formulation for encoding task information in ICL prompts as a function of attention heads. We show that our method's effectiveness stems from aligning the distribution of the last hidden state with that of an optimally performing in-context-learned model.
arXiv Detail & Related papers (2025-02-08T00:16:44Z) - Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z) - Natural Policy Gradient and Actor Critic Methods for Constrained Multi-Task Reinforcement Learning [13.908826484332282]
Multi-task reinforcement learning (RL) aims to find a single policy that effectively solves multiple tasks at the same time.
This paper presents a constrained formulation for multi-task RL where the goal is to maximize the average performance of the policy across tasks subject to bounds on the performance in each task.
arXiv Detail & Related papers (2024-05-03T19:43:30Z) - Offline Multi-task Transfer RL with Representational Penalization [26.114893629771736]
We study the problem of representation transfer in offline Reinforcement Learning (RL).
We propose an algorithm to compute pointwise uncertainty measures for the learnt representation.
arXiv Detail & Related papers (2024-02-19T21:52:44Z) - Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the orders of all the multi-task data for training.
In the task level, we aim to find the optimal task order to minimize the total cross-task interference risk.
In the instance level, we measure the difficulty of all instances per task, then divide them into the easy-to-difficult mini-batches for training.
arXiv Detail & Related papers (2024-01-07T18:12:20Z) - Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems [0.0]
Action-Evolution Petri Nets (A-E PN) is a framework for modeling and solving dynamic task assignment problems.
A-E PN models are executable, which means they can be used to learn close-to-optimal assignment policies.
We show for three cases that A-E PN can be used to learn close-to-optimal assignment policies.
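As a rough intuition for what an "executable" assignment model looks like, the toy below keeps a marking of task and resource tokens in places and fires an assignment transition that consumes one of each. It is a generic Petri-net-style sketch for illustration only, with hypothetical names (SimpleNet, fire_assign, the place names), not the actual A-E PN or Colored Petri Net semantics from the paper.

```python
# Illustrative toy of an executable Petri-net-style model: places hold tokens
# (here, task and resource identifiers), and firing an assignment transition
# consumes one task token and one resource token. Generic sketch, not A-E PN.
from collections import defaultdict

class SimpleNet:
    def __init__(self):
        self.marking = defaultdict(list)   # place name -> list of tokens

    def add_token(self, place: str, token) -> None:
        self.marking[place].append(token)

    def enabled_assignments(self):
        """All (task, resource) pairs assignable in the current marking."""
        return [(t, r) for t in self.marking["waiting_tasks"]
                       for r in self.marking["idle_resources"]]

    def fire_assign(self, task, resource) -> None:
        """Fire the assignment transition: move the chosen pair to the 'busy' place."""
        self.marking["waiting_tasks"].remove(task)
        self.marking["idle_resources"].remove(resource)
        self.marking["busy"].append((task, resource))

if __name__ == "__main__":
    net = SimpleNet()
    net.add_token("waiting_tasks", "task_A")
    net.add_token("waiting_tasks", "task_B")
    net.add_token("idle_resources", "resource_1")
    print("enabled:", net.enabled_assignments())
    net.fire_assign("task_A", "resource_1")
    print("busy:", net.marking["busy"])
```

Enumerating the enabled assignments from the current marking is what a DRL agent would choose among at each decision point when learning an assignment policy on such an executable model.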
arXiv Detail & Related papers (2023-06-05T14:14:48Z) - Task-Guided IRL in POMDPs that Scales [22.594913269327353]
In inverse reinforcement learning (IRL), a learning agent infers a reward function encoding the underlying task using demonstrations from experts.
Most IRL techniques require repeatedly solving the computationally expensive forward problem -- computing an optimal policy given a reward function -- in POMDPs.
We develop an algorithm that reduces the required information while increasing data efficiency.
arXiv Detail & Related papers (2022-12-30T21:08:57Z) - Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics [1.5293427903448022]
Deep Reinforcement Learning has shown its ability to solve complicated problems directly from high-dimensional observations.
In end-to-end settings, however, Reinforcement Learning algorithms are not sample-efficient and require long training times and large quantities of data.
We propose a framework for sample-efficient Reinforcement Learning that takes advantage of state and action representations to transform a high-dimensional problem into a low-dimensional one.
arXiv Detail & Related papers (2021-07-04T16:26:04Z) - Goal Kernel Planning: Linearly-Solvable Non-Markovian Policies for Logical Tasks with Goal-Conditioned Options [54.40780660868349]
We introduce a compositional framework called Linearly-Solvable Goal Kernel Dynamic Programming (LS-GKDP). LS-GKDP combines the Linearly-Solvable Markov Decision Process (LMDP) formalism with the Options Framework of Reinforcement Learning. We show how an LMDP with a goal kernel enables the efficient optimization of meta-policies in a lower-dimensional subspace defined by the task grounding.
arXiv Detail & Related papers (2020-07-06T05:13:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.