Prescriptive Process Monitoring Under Resource Constraints: A
Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2307.06564v2
- Date: Sat, 20 Jan 2024 12:49:02 GMT
- Title: Prescriptive Process Monitoring Under Resource Constraints: A
Reinforcement Learning Approach
- Authors: Mahmoud Shoush and Marlon Dumas
- Abstract summary: Reinforcement learning has been put forward as an approach to learning intervention policies through trial and error.
Existing approaches in this space assume that the number of resources available to perform interventions in a process is unlimited.
This paper argues that, in the presence of resource constraints, a key dilemma in the field of prescriptive process monitoring is to trigger interventions based not only on predictions of their necessity, timeliness, or effect but also on the uncertainty of these predictions and the level of resource utilization.
- Score: 0.3807314298073301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prescriptive process monitoring methods seek to optimize the performance of
business processes by triggering interventions at runtime, thereby increasing
the probability of positive case outcomes. These interventions are triggered
according to an intervention policy. Reinforcement learning has been put
forward as an approach to learning intervention policies through trial and
error. Existing approaches in this space assume that the number of resources
available to perform interventions in a process is unlimited, an unrealistic
assumption in practice. This paper argues that, in the presence of resource
constraints, a key dilemma in the field of prescriptive process monitoring is
to trigger interventions based not only on predictions of their necessity,
timeliness, or effect but also on the uncertainty of these predictions and the
level of resource utilization. Indeed, committing scarce resources to an
intervention when the necessity or effects of this intervention are highly
uncertain may intuitively lead to suboptimal intervention effects. Accordingly,
the paper proposes a reinforcement learning approach for prescriptive process
monitoring that leverages conformal prediction techniques to consider the
uncertainty of the predictions upon which an intervention decision is based. An
evaluation using real-life datasets demonstrates that explicitly modeling
uncertainty using conformal predictions helps reinforcement learning agents
converge towards policies with higher net intervention gain.
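The conformal-prediction machinery the abstract leans on can be sketched in a few lines. The function below is a hypothetical illustration of split conformal prediction, not the authors' implementation: absolute residuals on a held-out calibration set yield a quantile that widens a point prediction into an interval with marginal coverage of roughly 1 - alpha.

```python
import math

def conformal_interval(calib_residuals, point_pred, alpha=0.1):
    """Split conformal prediction sketch: turn a point prediction into
    an interval using absolute residuals from a calibration set."""
    scores = sorted(calib_residuals)
    n = len(scores)
    # Finite-sample corrected quantile: the ceil((n + 1) * (1 - alpha))-th
    # smallest score (1-indexed), clipped to the largest available score.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return point_pred - q, point_pred + q
```

An RL agent could then treat the interval width as an uncertainty signal, deprioritising interventions on cases whose predicted outcome is highly uncertain when resources are scarce.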
Related papers
- Conformal Counterfactual Inference under Hidden Confounding [19.190396053530417]
Predicting potential outcomes, together with their uncertainty, in a counterfactual world poses a fundamental challenge in causal inference.
Existing methods that construct confidence intervals for counterfactuals rely on the assumption of strong ignorability.
We propose a novel approach based on transductive weighted conformal prediction, which provides confidence intervals for counterfactual outcomes with marginal coverage guarantees.
arXiv Detail & Related papers (2024-05-20T21:43:43Z) - Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
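As a toy illustration of the idea (hypothetical, not the paper's method): one can perturb the input at inference time, re-run a frozen model, and use the spread of the sampled outputs as a non-parametric uncertainty estimate.

```python
import random
import statistics

def sampled_uncertainty(model, x, n_samples=200, noise=0.1, seed=0):
    """Post-hoc sampling sketch: repeatedly perturb the input, re-run
    the (frozen) model, and summarise the resulting outputs.  No
    parametric form is assumed for the predictive distribution."""
    rng = random.Random(seed)
    outputs = [model(x + rng.gauss(0.0, noise)) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)
```

For a linear model the output spread scales with the input noise, so the standard deviation directly reflects how sensitive the prediction is to ambiguity in the input.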
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Assessing the Impact of Context Inference Error and Partial
Observability on RL Methods for Just-In-Time Adaptive Interventions [12.762365585427377]
Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized health interventions developed within the behavioral science community.
JITAIs aim to provide the right type and amount of support by iteratively selecting a sequence of intervention options from a pre-defined set of components.
We study the effect of context inference error and partial observability on the ability to learn effective policies.
arXiv Detail & Related papers (2023-05-17T02:46:37Z) - Intervening With Confidence: Conformal Prescriptive Monitoring of
Business Processes [0.7487718119544158]
This paper proposes an approach to extend existing prescriptive process monitoring methods with predictions with confidence guarantees.
An empirical evaluation using real-life public datasets shows that conformal predictions enhance the net gain of prescriptive process monitoring methods under limited resources.
arXiv Detail & Related papers (2022-12-07T15:29:21Z) - Boosting the interpretability of clinical risk scores with intervention
predictions [59.22442473992704]
We propose a joint model of intervention policy and adverse event risk as a means to explicitly communicate the model's assumptions about future interventions.
We show how combining typical risk scores, such as the likelihood of mortality, with future intervention probability scores leads to more interpretable clinical predictions.
arXiv Detail & Related papers (2022-07-06T19:49:42Z) - When to intervene? Prescriptive Process Monitoring Under Uncertainty and
Resource Constraints [0.7487718119544158]
Prescriptive process monitoring approaches leverage historical data to prescribe runtime interventions that will likely prevent negative case outcomes or improve a process's performance.
Previous proposals in this field rely on intervention policies that consider only the current state of a given case.
This paper addresses this gap by introducing a prescriptive process monitoring method that filters and ranks ongoing cases based on prediction scores, prediction uncertainty, and the causal effect of the intervention, and triggers interventions to maximize a gain function.
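A minimal sketch of such a filter-and-rank policy (hypothetical case fields and scoring, assuming each case carries a predicted probability of a negative outcome, an uncertainty estimate, and an estimated treatment effect):

```python
def select_interventions(cases, free_resources):
    """Filter-and-rank sketch: each case is a tuple
    (case_id, p_negative, uncertainty, treatment_effect).
    Filter out cases whose estimated effect is non-positive, rank the
    rest by an uncertainty-penalised expected gain, and intervene on
    as many top-ranked cases as there are free resources."""
    candidates = [c for c in cases if c[3] > 0]
    ranked = sorted(candidates,
                    key=lambda c: c[1] * c[3] * (1 - c[2]),
                    reverse=True)
    return [c[0] for c in ranked[:free_resources]]
```

The product `p_negative * treatment_effect * (1 - uncertainty)` is one plausible stand-in for a gain function; the cited methods learn or estimate this trade-off rather than hard-coding it.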
arXiv Detail & Related papers (2022-06-15T18:18:33Z) - Prescriptive Process Monitoring: Quo Vadis? [64.39761523935613]
The paper studies existing methods in this field via a Systematic Literature Review (SLR).
The SLR provides insights into challenges and areas for future research that could enhance the usefulness and applicability of prescriptive process monitoring methods.
arXiv Detail & Related papers (2021-12-03T08:06:24Z) - Prescriptive Process Monitoring Under Resource Constraints: A Causal
Inference Approach [0.9645196221785693]
Existing prescriptive process monitoring techniques assume that the number of interventions that may be triggered is unbounded.
This paper proposes a prescriptive process monitoring technique that triggers interventions to optimize a cost function under fixed resource constraints.
arXiv Detail & Related papers (2021-09-07T06:42:33Z) - Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
arXiv Detail & Related papers (2020-11-08T23:16:19Z) - Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.