Learning policies for resource allocation in business processes
- URL: http://arxiv.org/abs/2304.09970v2
- Date: Tue, 23 Jan 2024 11:36:51 GMT
- Title: Learning policies for resource allocation in business processes
- Authors: J. Middelhuis, R. Lo Bianco, E. Scherzer, Z. A. Bukhsh, I. J. B. F. Adan, R. M. Dijkman
- Abstract summary: This paper proposes two learning-based methods for resource allocation in business processes.
The first method leverages Deep Reinforcement Learning (DRL) to learn near-optimal policies by taking action in the business process.
The second method is a score-based value function approximation approach, which learns the weights of a set of curated features to prioritize resource assignments.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient allocation of resources to activities is pivotal in executing
business processes but remains challenging. While resource allocation
methodologies are well-established in domains like manufacturing, their
application within business process management remains limited. Existing
methods often do not scale well to large processes with numerous activities or
optimize across multiple cases. This paper aims to address this gap by
proposing two learning-based methods for resource allocation in business
processes. The first method leverages Deep Reinforcement Learning (DRL) to
learn near-optimal policies by taking action in the business process. The
second method is a score-based value function approximation approach, which
learns the weights of a set of curated features to prioritize resource
assignments. To evaluate the proposed approaches, we first designed six
distinct business processes with archetypal process flows and characteristics.
These business processes were then connected to form three realistically sized
business processes. We benchmarked our methods against traditional heuristics
and existing resource allocation methods. The results show that our methods
learn adaptive resource allocation policies that outperform or are competitive
with the benchmarks in five out of six individual business processes. The DRL
approach outperforms all benchmarks in all three composite business processes
and finds a policy that is, on average, 13.1% better than the best-performing
benchmark.
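To make the second method concrete, here is a minimal sketch of score-based prioritization: a linear value estimate over curated features of each candidate (resource, activity) assignment, with a simple temporal-difference update standing in for the paper's fitting procedure. The feature names and the update rule are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

# Curated features of a candidate (resource, activity) pair; hypothetical names.
FEATURES = ["resource_eligibility", "expected_proc_time", "queue_length", "case_urgency"]

def score(weights: np.ndarray, features: np.ndarray) -> float:
    """Linear value estimate for assigning one resource to one activity."""
    return float(weights @ features)

def best_assignment(weights, candidates):
    """Pick the (resource, activity, feature_vector) triple with the highest score."""
    return max(candidates, key=lambda c: score(weights, c[2]))

def td_update(weights, features, reward, next_value, alpha=0.01, gamma=0.99):
    """One temporal-difference step nudging the weights toward the observed return."""
    target = reward + gamma * next_value
    return weights + alpha * (target - score(weights, features)) * features
```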
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Multi-Output Distributional Fairness via Post-Processing [47.94071156898198]
We introduce a post-processing method for multi-output models to enhance a model's distributional parity, a task-agnostic fairness measure.
Our method employs an optimal transport mapping to move a model's outputs across different groups towards their empirical Wasserstein barycenter.
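For scalar outputs, this reduces to a quantile remapping: the Wasserstein-2 barycenter of one-dimensional distributions has the average of the groups' quantile functions as its quantile function. The sketch below is a hedged, 1-D simplification of that mechanism, not the authors' multi-output method.

```python
import numpy as np

def barycenter_map(scores_by_group):
    """Remap each group's scores toward the empirical Wasserstein barycenter (1-D case)."""
    qs = np.linspace(0.0, 1.0, 101)
    # Quantile function of the barycenter = mean of the per-group quantile functions.
    bary_q = np.mean([np.quantile(s, qs) for s in scores_by_group.values()], axis=0)
    remapped = {}
    for g, s in scores_by_group.items():
        # Empirical CDF value of each score within its own group ...
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        # ... pushed through the barycenter's quantile function.
        remapped[g] = np.interp(ranks, qs, bary_q)
    return remapped
```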
arXiv Detail & Related papers (2024-08-31T22:41:26Z)
- On Speeding Up Language Model Evaluation [48.51924035873411]
Development of prompt-based methods with Large Language Models (LLMs) requires making numerous decisions.
We propose a method that adaptively allocates the evaluation budget toward the most promising candidates.
We show that it can identify the top-performing method using only 5-15% of the typically needed resources.
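A plausible mechanism for that kind of saving is best-arm identification under a budget; the successive-halving sketch below illustrates the idea, though the paper's actual algorithm may differ. Only `evaluate(method, example)` is assumed to be the expensive call.

```python
import random

def successive_halving(methods, examples, evaluate, rounds=3):
    """Keep the better half of the methods each round, doubling the sample size."""
    survivors = list(methods)
    pool = list(examples)
    random.shuffle(pool)
    batch = max(1, len(pool) // (2 ** rounds))
    used = 0
    for _ in range(rounds):
        sample, pool = pool[:batch], pool[batch:]
        if not sample:
            break
        used += len(sample) * len(survivors)
        mean = {m: sum(evaluate(m, x) for x in sample) / len(sample) for m in survivors}
        survivors = sorted(survivors, key=mean.get, reverse=True)[: max(1, len(survivors) // 2)]
        batch *= 2
    return survivors[0], used  # best method found and evaluations spent
```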
arXiv Detail & Related papers (2024-07-08T17:48:42Z)
- Recommending the optimal policy by learning to act from temporal data [2.554326189662943]
This paper proposes an AI-based approach that learns, by means of Reinforcement Learning (RL), a policy for recommending the best next activity from temporal execution data.
The approach is validated on real and synthetic datasets and compared with off-policy Deep RL approaches.
The ability of our approach to compare with, and often overcome, Deep RL approaches provides a contribution towards the exploitation of white box RL techniques in scenarios where only temporal execution data are available.
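As an illustration of the white-box flavor, a tabular Q-learning pass over logged transitions looks like the following; this is a generic stand-in, not the paper's exact formulation.

```python
from collections import defaultdict

def q_learning(transitions, alpha=0.1, gamma=0.9, epochs=50):
    """Fit Q-values from logged (state, action, reward, next_state) tuples."""
    Q = defaultdict(float)                      # (state, action) -> value
    actions = {a for _, a, _, _ in transitions}
    for _ in range(epochs):
        for s, a, r, s2 in transitions:
            best_next = max(Q[(s2, a2)] for a2 in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q

def recommend(Q, state, actions):
    """Recommend the action with the highest learned value in this state."""
    return max(actions, key=lambda a: Q[(state, a)])
```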
arXiv Detail & Related papers (2023-03-16T10:30:36Z)
- A Novel Approach for Auto-Formulation of Optimization Problems [66.94228200699997]
In the Natural Language for Optimization (NL4Opt) NeurIPS 2022 competition, competitors focused on improving the accessibility and usability of optimization solvers.
In this paper, we present our team's solution.
Our methods achieved an F1-score of 0.931 on subtask 1 and an accuracy of 0.867 on subtask 2, winning fourth and third place, respectively.
arXiv Detail & Related papers (2023-02-09T13:57:06Z)
- Exploration via Planning for Information about the Optimal Trajectory [67.33886176127578]
We develop a method that allows us to plan for exploration while taking the task and the current knowledge into account.
We demonstrate that our method learns strong policies with 2x fewer samples than strong exploration baselines.
arXiv Detail & Related papers (2022-10-06T20:28:55Z)
- Distributional Reinforcement Learning for Scheduling of (Bio)chemical Production Processes [0.0]
Reinforcement Learning (RL) has recently received significant attention from the process systems engineering and control communities.
We present an RL methodology to address the precedence and disjunctive constraints commonly imposed on production scheduling problems.
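One common way to encode precedence and disjunctive constraints in an RL scheduler is to mask infeasible actions before sampling. The sketch below shows that pattern under assumed bookkeeping (`predecessors` and `unit_busy` are hypothetical names); it is not the paper's exact construction.

```python
import numpy as np

def feasible_mask(n_tasks, done, predecessors, unit_busy):
    """1.0 for tasks that may start now, 0.0 otherwise."""
    mask = np.ones(n_tasks)
    for t in range(n_tasks):
        if done[t] or unit_busy[t]:                      # disjunctive: required unit occupied
            mask[t] = 0.0
        elif any(not done[p] for p in predecessors[t]):  # precedence: a predecessor unfinished
            mask[t] = 0.0
    return mask

def masked_policy(logits, mask):
    """Renormalize the policy over feasible tasks (assumes at least one is feasible)."""
    probs = np.exp(logits - logits.max()) * mask
    return probs / probs.sum()
```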
arXiv Detail & Related papers (2022-03-01T17:25:40Z)
- Math Programming based Reinforcement Learning for Multi-Echelon Inventory Management [1.9161790404101895]
Reinforcement learning has led to considerable breakthroughs in diverse areas such as robotics and games.
But the application of RL to complex real-world decision-making problems remains limited.
The characteristics of such problems make them considerably harder to solve for existing RL methods, which rely on enumeration techniques to solve per-step action problems.
We show that a properly selected discretization of the underlying uncertain distribution can yield a near-optimal actor policy even with very few samples from the underlying uncertainty.
We find that PARL outperforms the commonly used base-stock policy by 44.7% and the best-performing RL method by up to 12.1% on average.
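The discretization finding lends itself to a toy illustration: score each candidate action against a handful of sampled scenarios rather than the full uncertainty. PARL itself solves a mathematical program per step; the enumeration, cost model, and numbers below are invented for illustration only.

```python
import numpy as np

def best_order(candidates, demand_samples, inventory, hold_cost=1.0, stockout_cost=5.0):
    """Pick the order quantity with the lowest cost averaged over sampled scenarios."""
    def expected_cost(q):
        stock = inventory + q - demand_samples          # per-scenario net stock
        return np.mean(np.where(stock >= 0, hold_cost * stock,
                                -stockout_cost * stock))
    return min(candidates, key=expected_cost)

rng = np.random.default_rng(0)
demand = rng.poisson(20, size=30)                       # 30 sampled demand scenarios
print(best_order(range(0, 51, 5), demand, inventory=5))
```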
arXiv Detail & Related papers (2021-12-04T01:40:34Z)
- FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding [89.92513889132825]
We introduce an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability.
We open-source our toolkit, FewNLU, that implements our evaluation framework along with a number of state-of-the-art methods.
arXiv Detail & Related papers (2021-09-27T00:57:30Z)
- Automatic Resource Allocation in Business Processes: A Systematic Literature Survey [0.0699049312989311]
Resource allocation is a complex decision-making problem with high impact on the effectiveness and efficiency of processes.
A wide range of approaches has been developed to support resource allocation automatically.
arXiv Detail & Related papers (2021-07-15T11:40:20Z)
- Deep Reinforcement Learning for Resource Allocation in Business Processes [3.0938904602244355]
We propose a novel representation that allows modeling of a multi-process environment with different process-based rewards.
We then use double deep reinforcement learning to search for an optimal resource allocation policy.
Deep-reinforcement-learning-based resource allocation achieved significantly better results than two commonly used techniques.
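For reference, the double deep Q-learning update decouples action selection (online network) from action evaluation (target network), which reduces the overestimation bias of vanilla deep Q-learning. A minimal PyTorch-style sketch, with the networks and the state encoding assumed:

```python
import torch

def double_dqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    """Double-DQN bootstrap target: online net picks the action, target net values it."""
    with torch.no_grad():
        next_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_value = target_net(next_state).gather(1, next_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_value
```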
arXiv Detail & Related papers (2021-03-29T11:20:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.