A Procedural Framework for Assessing the Desirability of Process Deviations
- URL: http://arxiv.org/abs/2506.11525v1
- Date: Fri, 13 Jun 2025 07:24:57 GMT
- Title: A Procedural Framework for Assessing the Desirability of Process Deviations
- Authors: Michael Grohs, Nadine Cordes, Jana-Rebecca Rehse
- Abstract summary: This paper presents a procedural framework to guide process analysts in systematically assessing deviation desirability. It provides a step-by-step approach for identifying which input factors to consider in what order to categorize deviations into mutually exclusive desirability categories, each linked to action recommendations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conformance checking techniques help process analysts to identify where and how process executions deviate from a process model. However, they cannot determine the desirability of these deviations, i.e., whether they are problematic, acceptable or even beneficial for the process. Such desirability assessments are crucial to derive actions, but process analysts typically conduct them in a manual, ad-hoc way, which can be time-consuming, subjective, and irreplicable. To address this problem, this paper presents a procedural framework to guide process analysts in systematically assessing deviation desirability. It provides a step-by-step approach for identifying which input factors to consider in what order to categorize deviations into mutually exclusive desirability categories, each linked to action recommendations. The framework is based on a review and conceptualization of existing literature on deviation desirability, which is complemented by empirical insights from interviews with process analysis practitioners and researchers. We evaluate the framework through a desirability assessment task conducted with practitioners, indicating that the framework effectively enables them to streamline the assessment for a thorough yet concise evaluation.
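The abstract describes a step-by-step procedure that checks input factors in a fixed order to sort each deviation into one of several mutually exclusive desirability categories, each tied to an action recommendation. The sketch below illustrates that decision structure only; the specific factors, category names, and actions are illustrative placeholders, not the framework's actual content.

```python
from dataclasses import dataclass

@dataclass
class Deviation:
    """An observed difference between a process execution and the model."""
    violates_compliance_rule: bool  # e.g. skips a mandatory approval step
    improves_outcome: bool          # e.g. shortens cycle time without harm

def assess_desirability(dev: Deviation) -> tuple[str, str]:
    """Categorize a deviation and return (category, recommended action).

    The input factors and the order in which they are checked are
    hypothetical; a real assessment would follow the paper's framework.
    Checking factors in a fixed sequence guarantees that the resulting
    categories are mutually exclusive.
    """
    if dev.violates_compliance_rule:
        return ("problematic", "correct the execution and prevent recurrence")
    if dev.improves_outcome:
        return ("beneficial", "consider updating the process model")
    return ("acceptable", "no action required")

category, action = assess_desirability(
    Deviation(violates_compliance_rule=False, improves_outcome=True)
)
print(category, "->", action)
```

Because each deviation falls through the checks in order, exactly one category applies, which mirrors the mutual exclusivity the abstract emphasizes.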
Related papers
- A Task Taxonomy for Conformance Checking [0.2153887489636259]
Conformance checking is a sub-discipline of process mining, which compares observed process traces with a process model to analyze whether the process execution conforms with or deviates from the process design. Current tools offer a wide variety of visual representations for conformance checking, but the analytical purposes they serve often remain unclear. We propose a task taxonomy, which categorizes the tasks that can occur when conducting conformance checking analyses.
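The comparison of observed traces against a process model can be illustrated with a minimal sketch. The directly-follows check below is a deliberate simplification of real conformance checking (which typically uses alignment- or replay-based techniques), and the example model and activity names are hypothetical.

```python
def find_deviations(trace, allowed):
    """Return observed directly-follows pairs that the model does not allow.

    `allowed` maps each activity to the set of activities permitted to
    follow it. Any observed pair outside that relation is flagged as a
    deviation between execution and design.
    """
    return [(a, b) for a, b in zip(trace, trace[1:])
            if b not in allowed.get(a, set())]

# Hypothetical model: register -> check -> approve/reject
model = {"register": {"check"}, "check": {"approve", "reject"}}
print(find_deviations(["register", "check", "approve"], model))  # []
print(find_deviations(["register", "approve"], model))  # "check" was skipped
```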
arXiv Detail & Related papers (2025-07-16T07:18:29Z) - Measurement to Meaning: A Validity-Centered Framework for AI Evaluation [12.55408229639344]
We provide a structured approach for reasoning about the types of evaluative claims that can be made given the available evidence. Our framework is well-suited for the contemporary paradigm in machine learning.
arXiv Detail & Related papers (2025-05-13T20:36:22Z) - Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving aspects from a corpus of peer reviews. We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
arXiv Detail & Related papers (2025-04-09T14:14:42Z) - FRAPPE: A Group Fairness Framework for Post-Processing Everything [48.57876348370417]
We propose a framework that turns any regularized in-processing method into a post-processing approach.
We show theoretically and through experiments that our framework preserves the good fairness-error trade-offs achieved with in-processing.
arXiv Detail & Related papers (2023-12-05T09:09:21Z) - The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus [10.135749005469686]
One of the unsolved challenges in the field of Explainable AI (XAI) is determining how to most reliably estimate the quality of an explanation method.
We address this issue through a meta-evaluation of different quality estimators in XAI.
Our novel framework, MetaQuantus, analyses two complementary performance characteristics of a quality estimator.
arXiv Detail & Related papers (2023-02-14T18:59:02Z) - An Explainable Decision Support System for Predictive Process Analytics [0.41562334038629595]
This paper proposes a predictive analytics framework that is also equipped with explanation capabilities based on the game theory of Shapley Values.
The framework has been implemented in the IBM Process Mining suite and commercialized for business users.
arXiv Detail & Related papers (2022-07-26T09:55:49Z) - Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z) - Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Prescriptive Process Monitoring: Quo Vadis? [64.39761523935613]
The paper studies existing methods in this field via a Systematic Literature Review (SLR).
The SLR provides insights into challenges and areas for future research that could enhance the usefulness and applicability of prescriptive process monitoring methods.
arXiv Detail & Related papers (2021-12-03T08:06:24Z) - Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395]
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains results comparable to formal verifiers on standard benchmarks.
Our approach enables efficient evaluation of safety properties for decision-making models in practical applications.
arXiv Detail & Related papers (2020-10-19T11:18:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.