A Human-In-The-Loop Approach for Improving Fairness in Predictive Business Process Monitoring
- URL: http://arxiv.org/abs/2508.17477v1
- Date: Sun, 24 Aug 2025 18:05:35 GMT
- Title: A Human-In-The-Loop Approach for Improving Fairness in Predictive Business Process Monitoring
- Authors: Martin Käppel, Julian Neuberger, Felix Möhrlein, Sven Weinzierl, Martin Matzner, Stefan Jablonski
- Abstract summary: Predictive process monitoring enables organizations to proactively react and intervene in running instances of a business process. The data-driven nature of these models makes them susceptible to finding unfair, biased, or unethical patterns in the data. This paper proposes a novel, model-agnostic approach for identifying and rectifying biased decisions in predictive business process monitoring models.
- Score: 1.6624933615451845
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Predictive process monitoring enables organizations to proactively react and intervene in running instances of a business process. Given an incomplete process instance, predictions about the outcome, next activity, or remaining time are created. This is done by powerful machine learning models, which have shown impressive predictive performance. However, the data-driven nature of these models makes them susceptible to finding unfair, biased, or unethical patterns in the data. Such patterns lead to biased predictions based on so-called sensitive attributes, such as the gender or age of process participants. Previous work has identified this problem and offered solutions that mitigate biases by removing sensitive attributes entirely from the process instance. However, sensitive attributes can be used both fairly and unfairly in the same process instance. For example, during a medical process, treatment decisions could be based on gender, while the decision to accept a patient should not be based on gender. This paper proposes a novel, model-agnostic approach for identifying and rectifying biased decisions in predictive business process monitoring models, even when the same sensitive attribute is used both fairly and unfairly. The proposed approach puts a human in the loop to differentiate between fair and unfair decisions through simple alterations to a decision tree model distilled from the original prediction model. Our results show that the proposed approach achieves a promising tradeoff between fairness and accuracy in the presence of biased data. All source code and data are publicly available at https://doi.org/10.5281/zenodo.15387576.
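As a minimal sketch of the distillation step the abstract describes (the model classes and synthetic data below are assumptions, not the paper's setup), a surrogate decision tree can be fit to the predictions of the original black-box model so that its decision logic becomes inspectable and editable:

```python
# Model-agnostic distillation sketch (assumed setup, not the paper's pipeline):
# fit a surrogate decision tree on the *predictions* of the original model so a
# human can read the tree and flag splits on sensitive attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # encoded process-instance features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic outcome labels

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Distill: the surrogate mimics the black box, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A human can now inspect the tree for splits on sensitive attributes.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```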
Related papers
- FairLoop: Software Support for Human-Centric Fairness in Predictive Business Process Monitoring [1.5831073048826505]
We present FairLoop, a tool for human-guided bias mitigation in neural network-based prediction models. FairLoop distills decision trees from neural networks, allowing users to inspect and modify unfair decision logic. It addresses the influence of sensitive attributes selectively rather than excluding them uniformly.
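A toy illustration of what modifying unfair decision logic in a distilled tree could look like (entirely hypothetical; this is not FairLoop's actual API or data model):

```python
# Hypothetical tree-editing sketch (not FairLoop's API): a split on a sensitive
# attribute is collapsed into the branch the user deems fair.
tree = {
    "feature": "gender",          # unfair split flagged by the user
    "left":  {"leaf": "reject"},
    "right": {"feature": "income", "threshold": 50_000,
              "left": {"leaf": "reject"}, "right": {"leaf": "accept"}},
}

def remove_unfair_split(node, sensitive, keep="right"):
    """Recursively replace every split on `sensitive` with its `keep` branch."""
    if "leaf" in node:
        return node
    if node["feature"] == sensitive:
        return remove_unfair_split(node[keep], sensitive, keep)
    node["left"] = remove_unfair_split(node["left"], sensitive, keep)
    node["right"] = remove_unfair_split(node["right"], sensitive, keep)
    return node

fair_tree = remove_unfair_split(tree, "gender")
print(fair_tree)  # the gender split is gone; the income split remains
```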
arXiv Detail & Related papers (2025-08-27T16:30:30Z)
- Achieving Group Fairness through Independence in Predictive Process Monitoring [0.0]
Predictive process monitoring focuses on forecasting future states of ongoing process executions, such as predicting the outcome of a particular case. In recent years, the application of machine learning models in this domain has garnered significant scientific attention. This work addresses group fairness in predictive process monitoring by investigating independence, i.e. ensuring predictions are unaffected by sensitive group membership.
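Independence in this sense is the classic demographic-parity criterion, which can be checked in a few lines (a standard formulation, not code from the paper):

```python
# Independence (demographic parity) check: the positive prediction rate
# should not differ across sensitive groups; a gap of 0 means independence holds.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)| for a binary sensitive attribute."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))  # -> 0.5
```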
arXiv Detail & Related papers (2024-12-06T10:10:47Z)
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
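The general adversarial-debiasing pattern could be sketched as follows (a generic PyTorch sketch under assumed network sizes and fairness weight, not the paper's architecture): an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is penalized whenever it succeeds.

```python
# Generic adversarial-debiasing sketch (not the paper's exact architecture).
import torch
import torch.nn as nn

torch.manual_seed(0)
predictor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # assumed fairness weight

def train_step(x, y, s):
    # 1) adversary step: recover sensitive attribute s from (detached) outputs
    a_loss = bce(adversary(predictor(x).detach()), s)
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()
    # 2) predictor step: fit the task while making the adversary fail
    out = predictor(x)
    p_loss = bce(out, y) - lam * bce(adversary(out), s)
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
    return p_loss.item()

x = torch.randn(64, 16)
y = torch.randint(0, 2, (64, 1)).float()
s = torch.randint(0, 2, (64, 1)).float()   # sensitive attribute
for _ in range(100):
    train_step(x, y, s)
```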
arXiv Detail & Related papers (2024-10-03T15:56:03Z)
- Debiasing Machine Learning Models by Using Weakly Supervised Learning [3.3298048942057523]
We tackle the problem of bias mitigation of algorithmic decisions in a setting where both the output of the algorithm and the sensitive variable are continuous.
Typical examples are unfair decisions made with respect to age or financial status.
Our bias mitigation strategy is a weakly supervised learning method which requires that a small portion of the data can be measured in a fair manner.
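One way to picture this setting (a generic sketch, not the paper's algorithm): use the small, fairly measured subset to estimate how the biased model's error depends on the continuous sensitive variable, then subtract that estimate at prediction time.

```python
# Generic fair-subset correction sketch (not the paper's method): regress the
# biased model's residual on the continuous sensitive variable using the small
# fairly-measured subset, then subtract the estimated bias.
import numpy as np
from sklearn.linear_model import LinearRegression

def debias(model, X_fair, y_fair, s_fair):
    residual = model.predict(X_fair) - y_fair
    corrector = LinearRegression().fit(s_fair.reshape(-1, 1), residual)
    return lambda X, s: model.predict(X) - corrector.predict(s.reshape(-1, 1))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)); s = rng.normal(size=200)
y = X[:, 0] + 0.5 * s                      # sensitive variable leaks into labels
biased = LinearRegression().fit(X, y)      # trained without access to s
fair_predict = debias(biased, X[:20], X[:20, 0], s[:20])  # small "fair" subset
```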
arXiv Detail & Related papers (2024-02-23T18:11:32Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
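The "identify and evaluate" part can be pictured model-agnostically with a counterfactual probe (an illustration of the general idea, not FMD's actual procedure):

```python
# Counterfactual bias probe (generic sketch, not FMD's algorithm): flip a
# binary sensitive column and measure the per-instance prediction shift.
import numpy as np

def counterfactual_bias(model, X, sensitive_col):
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    shift = model.predict_proba(X_flipped)[:, 1] - model.predict_proba(X)[:, 1]
    return shift  # large |shift| marks instances where the attribute drives the output
```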
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Automatically Reconciling the Trade-off between Prediction Accuracy and Earliness in Prescriptive Business Process Monitoring [0.802904964931021]
We focus on the problem of automatically reconciling the trade-off between prediction accuracy and prediction earliness.
Different approaches have been presented in the literature to reconcile this trade-off.
We perform a comparative evaluation of the main alternative approaches for reconciling the trade-off between prediction accuracy and earliness.
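A representative member of this family is the simple confidence-threshold scheme sketched below (for illustration only; the model interface and threshold are assumptions): predict on growing prefixes of a running case and commit as soon as the model is confident enough.

```python
# Confidence-threshold scheme for the accuracy/earliness trade-off.
# `model` is assumed to expose a scikit-learn-style predict_proba over
# event prefixes, and `events` is assumed non-empty.
def predict_early(model, events, tau=0.9):
    for k in range(1, len(events) + 1):
        proba = model.predict_proba([events[:k]])[0]
        if max(proba) >= tau:
            return proba.argmax(), k      # prediction and how early it was made
    return proba.argmax(), len(events)    # fall back to the full trace
```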
arXiv Detail & Related papers (2023-07-12T06:07:53Z)
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances carrying a specific kind of bias, which should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels caused by variation in protected attributes, an inherent bias is induced in the dataset.
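The detection rule can be sketched directly (column names below are assumptions): group instances that agree on all non-protected features and flag groups whose labels disagree.

```python
# Sketch of the described detection rule (assumed column names): instances that
# share all non-protected features but get different labels across protected
# groups are candidates for removal before training.
import pandas as pd

df = pd.DataFrame({
    "experience": [5, 5, 2, 2],
    "education":  [1, 1, 0, 0],
    "gender":     [0, 1, 0, 1],   # protected attribute
    "label":      [1, 0, 0, 0],   # differs only where gender differs -> bias
})

features = ["experience", "education"]
biased_groups = df.groupby(features).filter(lambda g: g["label"].nunique() > 1)
print(biased_groups)  # the first two rows: same features, labels split by gender
```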
arXiv Detail & Related papers (2022-10-24T13:04:07Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
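A toy version of weakening a causal edge and re-simulating data, using a linear structural equation model purely for illustration (D-BIAS's actual simulation method is more involved):

```python
# Toy linear SEM illustrating edge weakening (not D-BIAS's actual method):
# scale down the coefficient of a biased edge and re-simulate the dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n).astype(float)

def simulate(gender_to_hire):            # coefficient of the edge gender -> hire
    skill = rng.normal(size=n)
    return 1.5 * skill + gender_to_hire * gender + rng.normal(scale=0.1, size=n)

biased = simulate(gender_to_hire=2.0)    # strong unfair edge
debiased = simulate(gender_to_hire=0.2)  # user weakened the edge in the UI
print(np.corrcoef(gender, biased)[0, 1], np.corrcoef(gender, debiased)[0, 1])
```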
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
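The ATC rule is simple enough to state in a few lines (a sketch consistent with the abstract; variable names are ours): pick the threshold so that the fraction of source examples above it matches source accuracy, then report the fraction of target examples above it.

```python
# ATC sketch: learn a confidence threshold on labeled source data, then
# estimate target accuracy as the fraction of unlabeled target examples
# whose confidence exceeds that threshold.
import numpy as np

def atc_estimate(src_conf, src_correct, tgt_conf):
    src_acc = src_correct.mean()
    t = np.quantile(src_conf, 1.0 - src_acc)   # fraction above t matches src_acc
    return (tgt_conf >= t).mean()              # predicted target accuracy

rng = np.random.default_rng(0)
src_conf = rng.uniform(0.5, 1.0, 5000)
src_correct = (rng.uniform(size=5000) < src_conf).astype(float)  # calibrated toy
tgt_conf = rng.uniform(0.4, 0.9, 5000)                           # shifted domain
print(atc_estimate(src_conf, src_correct, tgt_conf))
```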
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Balancing out Bias: Achieving Fairness Through Training Reweighting [58.201275105195485]
Bias in natural language processing arises from models learning characteristics of the author such as gender and race.
Existing methods for mitigating and measuring bias do not directly account for correlations between author demographics and linguistic variables.
This paper introduces a very simple but highly effective method for countering bias using instance reweighting.
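A minimal version of such reweighting (a common scheme; the paper's exact weights may differ) weights each instance inversely to the frequency of its (demographic, label) combination:

```python
# Instance-reweighting sketch: rare (demographic, label) pairs get larger
# weights so no pairing dominates training; mean weight is ~1.
from collections import Counter

def balance_weights(demographics, labels):
    counts = Counter(zip(demographics, labels))
    n, k = len(labels), len(counts)
    return [n / (k * counts[(d, y)]) for d, y in zip(demographics, labels)]

w = balance_weights(["f", "f", "m", "m", "m"], [1, 1, 1, 0, 0])
print(w)  # can be passed as sample_weight to most training APIs
```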
arXiv Detail & Related papers (2021-09-16T23:40:28Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
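The core operation can be pictured as mixing overconfident predictive distributions toward the label prior (a simplified sketch; the paper learns where to apply this, whereas a fixed confidence threshold stands in for that criterion here):

```python
# Simplified entropy-raising sketch: blend overconfident rows toward the
# label prior, which raises their entropy while keeping rows normalized.
import numpy as np

def temper_overconfident(probs, prior, conf_threshold=0.95, alpha=0.5):
    probs = np.array(probs, dtype=float)
    overconfident = probs.max(axis=1) > conf_threshold
    probs[overconfident] = (1 - alpha) * probs[overconfident] + alpha * prior
    return probs

prior = np.array([0.5, 0.5])
print(temper_overconfident([[0.99, 0.01], [0.60, 0.40]], prior))
```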
arXiv Detail & Related papers (2021-02-22T07:02:37Z)