Optimization of Activity Batching Policies in Business Processes
- URL: http://arxiv.org/abs/2507.15457v1
- Date: Mon, 21 Jul 2025 10:11:51 GMT
- Title: Optimization of Activity Batching Policies in Business Processes
- Authors: Orlenys López-Pintado, Jannis Rosenbaum, Marlon Dumas
- Abstract summary: In business processes, activity batching refers to packing multiple activity instances for joint execution. This paper addresses the problem of discovering batching policies that strike optimal trade-offs between waiting time, processing effort, and cost.
- Score: 0.28675177318965045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In business processes, activity batching refers to packing multiple activity instances for joint execution. Batching allows managers to trade off cost and processing effort against waiting time. Larger and less frequent batches may lower costs by reducing processing effort and amortizing fixed costs, but they create longer waiting times. In contrast, smaller and more frequent batches reduce waiting times but increase fixed costs and processing effort. A batching policy defines how activity instances are grouped into batches and when each batch is activated. This paper addresses the problem of discovering batching policies that strike optimal trade-offs between waiting time, processing effort, and cost. The paper proposes a Pareto optimization approach that starts from a given set (possibly empty) of activity batching policies and generates alternative policies for each batched activity via intervention heuristics. Each heuristic identifies an opportunity to improve an activity's batching policy with respect to a metric (waiting time, processing time, cost, or resource utilization) and an associated adjustment to the activity's batching policy (the intervention). The impact of each intervention is evaluated via simulation. The intervention heuristics are embedded in an optimization meta-heuristic that triggers interventions to iteratively update the Pareto front of the interventions identified so far. The paper considers three meta-heuristics: hill-climbing, simulated annealing, and reinforcement learning. An experimental evaluation compares the proposed intervention-heuristic approach against a baseline that applies the same meta-heuristics without heuristic guidance, with respect to convergence, diversity, and cycle time gain of Pareto-optimal policies.
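To make the search loop described in the abstract more concrete, the following is a minimal Python sketch: a simplified batching-policy representation, three hypothetical intervention heuristics (each targeting one metric), a Pareto-front update, and a simulated-annealing meta-heuristic that evaluates each intervention via a stubbed simulator. All names (`BatchingPolicy`, `simulate`, the heuristics) and the numeric cost model are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class BatchingPolicy:
    """Hypothetical policy for one activity: activate a batch once
    `batch_size` instances are queued or `max_wait` time units have elapsed."""
    activity: str
    batch_size: int
    max_wait: float

def simulate(policies):
    """Stub for the simulation step: returns (waiting time, processing effort, cost).
    In practice, a business-process simulator would estimate these metrics."""
    waiting = sum(p.max_wait * p.batch_size for p in policies.values())
    effort = sum(100.0 / p.batch_size for p in policies.values())
    cost = sum(10.0 + 2.0 / p.batch_size for p in policies.values())
    return (waiting, effort, cost)

def dominates(a, b):
    """Pareto dominance: a is no worse than b on all objectives and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_front(front, candidate, metrics):
    """Keep only non-dominated (policy set, metrics) pairs; report whether the candidate entered."""
    if any(dominates(m, metrics) for _, m in front):
        return front, False
    front = [(p, m) for p, m in front if not dominates(metrics, m)]
    front.append((candidate, metrics))
    return front, True

# Hypothetical intervention heuristics: each targets one metric and proposes
# an adjustment to a single activity's batching policy.
def reduce_waiting(p):  return replace(p, batch_size=max(1, p.batch_size - 1), max_wait=p.max_wait * 0.8)
def reduce_cost(p):     return replace(p, batch_size=p.batch_size + 1)
def reduce_effort(p):   return replace(p, batch_size=p.batch_size + 2, max_wait=p.max_wait * 1.2)
HEURISTICS = [reduce_waiting, reduce_cost, reduce_effort]

def simulated_annealing(initial, iterations=200, temp=1.0, cooling=0.98):
    """Meta-heuristic loop: apply an intervention, evaluate it by simulation,
    and update the Pareto front of the policies found so far."""
    current = dict(initial)
    front, _ = update_front([], current, simulate(current))
    for _ in range(iterations):
        activity = random.choice(list(current))
        heuristic = random.choice(HEURISTICS)
        candidate = {**current, activity: heuristic(current[activity])}
        front, improved = update_front(front, candidate, simulate(candidate))
        # Simplified acceptance rule: always accept improving moves, otherwise
        # accept with a temperature-dependent probability.
        if improved or random.random() < math.exp(-1.0 / max(temp, 1e-9)):
            current = candidate
        temp *= cooling
    return front

if __name__ == "__main__":
    start = {"Approve invoice": BatchingPolicy("Approve invoice", batch_size=5, max_wait=60.0)}
    for policies, metrics in simulated_annealing(start):
        print({a: (p.batch_size, p.max_wait) for a, p in policies.items()}, metrics)
```

The sketch keeps only the structure of the approach: swapping the stubbed `simulate` for a real process simulator, or the acceptance rule for hill-climbing or a reinforcement-learning agent, would correspond to the variants the paper evaluates.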
Related papers
- Haste Makes Waste: Evaluating Planning Abilities of LLMs for Efficient and Feasible Multitasking with Time Constraints Between Actions [56.88110850242265]
We present Recipe2Plan, a novel benchmark framework based on real-world cooking scenarios. Unlike conventional benchmarks, Recipe2Plan challenges agents to optimize cooking time through parallel task execution.
arXiv Detail & Related papers (2025-03-04T03:27:02Z) - Self-Regulation and Requesting Interventions [63.5863047447313]
We propose an offline framework that trains a "helper" policy to request interventions. We score optimal intervention timing with PRMs and train the helper model on these labeled trajectories. This offline approach significantly reduces costly intervention calls during training.
arXiv Detail & Related papers (2025-02-07T00:06:17Z) - Active Fine-Tuning of Multi-Task Policies [54.65568433408307]
We propose AMF (Active Multi-task Fine-tuning) to maximize multi-task policy performance under a limited demonstration budget. We derive performance guarantees for AMF under regularity assumptions and demonstrate its empirical effectiveness in complex and high-dimensional environments.
arXiv Detail & Related papers (2024-10-07T13:26:36Z) - Towards Dynamic Feature Acquisition on Medical Time Series by Maximizing Conditional Mutual Information [11.882952809819855]
Knowing which features of a time series to measure, and when, is a key task in medicine and wearables.
Inspired by conditional mutual information, we propose an approach to train acquirers end-to-end using only downstream loss.
arXiv Detail & Related papers (2024-07-18T11:54:34Z) - MAP: Low-compute Model Merging with Amortized Pareto Fronts via Quadratic Approximation [80.47072100963017]
We introduce a novel and low-compute algorithm, Model Merging with Amortized Pareto Front (MAP). MAP efficiently identifies a set of scaling coefficients for merging multiple models, reflecting the trade-offs involved. We also introduce Bayesian MAP for scenarios with a relatively low number of tasks and Nested MAP for situations with a high number of tasks, further reducing the computational cost of evaluation.
arXiv Detail & Related papers (2024-06-11T17:55:25Z) - Off-Policy Evaluation for Large Action Spaces via Policy Convolution [60.6953713877886]
The Policy Convolution family of estimators uses latent structure within actions to strategically convolve the logging and target policies.
Experiments on synthetic and benchmark datasets demonstrate remarkable mean squared error (MSE) improvements when using PC.
arXiv Detail & Related papers (2023-10-24T01:00:01Z) - Learning When to Treat Business Processes: Prescriptive Process Monitoring with Causal Inference and Reinforcement Learning [0.8318686824572804]
Increasing the success rate of a process, i.e. the percentage of cases that end in a positive outcome, is a recurrent process improvement goal.
This paper presents a prescriptive monitoring method that automates the decision-making task.
The method combines causal inference and reinforcement learning to learn treatment policies that maximize the net gain.
arXiv Detail & Related papers (2023-03-07T00:46:04Z) - Planning Multiple Epidemic Interventions with Reinforcement Learning [7.51289645756884]
An optimal plan will curb an epidemic with minimal loss of life, disease burden, and economic cost.
Finding an optimal plan is an intractable computational problem in realistic settings.
We apply state-of-the-art actor-critic reinforcement learning algorithms to search for plans that minimize overall costs.
arXiv Detail & Related papers (2023-01-30T11:51:24Z) - Prescriptive Process Monitoring Under Resource Constraints: A Causal Inference Approach [0.9645196221785693]
Existing prescriptive process monitoring techniques assume that the number of interventions that may be triggered is unbounded.
This paper proposes a prescriptive process monitoring technique that triggers interventions to optimize a cost function under fixed resource constraints.
arXiv Detail & Related papers (2021-09-07T06:42:33Z) - Prescriptive Process Monitoring for Cost-Aware Cycle Time Reduction [0.7837881800517111]
This paper tackles the problem of determining if and when to trigger a time-reducing intervention in a way that maximizes the total net gain.
The paper proposes a prescriptive process monitoring method that uses random forest models to estimate the causal effect of triggering a time-reducing intervention.
arXiv Detail & Related papers (2021-05-15T01:19:04Z) - Kalman meets Bellman: Improving Policy Evaluation through Value Tracking [59.691919635037216]
Policy evaluation is a key process in Reinforcement Learning (RL).
We devise an optimization method, called Kalman Optimization for Value Approximation (KOVA).
KOVA minimizes a regularized objective function that concerns both parameter and noisy return uncertainties.
arXiv Detail & Related papers (2020-02-17T13:30:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.