Uncertainty-aware predictive modeling for fair data-driven decisions
- URL: http://arxiv.org/abs/2211.02730v1
- Date: Fri, 4 Nov 2022 20:04:39 GMT
- Title: Uncertainty-aware predictive modeling for fair data-driven decisions
- Authors: Patrick Kaiser, Christoph Kern, David Rügamer
- Abstract summary: We show how fairML systems can be safeML systems.
For fair decisions, we argue that a safe fail option should be used for individuals with uncertain categorization.
- Score: 5.371337604556311
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Both industry and academia have made considerable progress in developing
trustworthy and responsible machine learning (ML) systems. While critical
concepts like fairness and explainability are often addressed, the safety of
systems is typically not sufficiently taken into account. By viewing
data-driven decision systems as socio-technical systems, we draw on the
uncertainty in ML literature to show how fairML systems can also be safeML
systems. We posit that a fair model needs to be an uncertainty-aware model,
e.g. by drawing on distributional regression. For fair decisions, we argue that
a safe fail option should be used for individuals with uncertain
categorization. We introduce semi-structured deep distributional regression as
a modeling framework which addresses multiple concerns brought against standard
ML models and show its use in a real-world example of algorithmic profiling of
job seekers.
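As a minimal illustration of the safe-fail idea, the sketch below abstains from automated categorization when predictive confidence is low. The paper itself proposes semi-structured deep distributional regression; a plain probabilistic classifier and the 0.8 cut-off are substituted here purely for illustration.
```python
# Minimal sketch of a "safe fail" option: defer to a human whenever the
# model's categorization is uncertain. The classifier and threshold are
# illustrative stand-ins, not the paper's actual model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
proba = clf.predict_proba(X_test)

# Confidence of the most likely class; 0.8 is an arbitrary illustrative choice.
confidence = proba.max(axis=1)
THRESHOLD = 0.8

decisions = np.where(confidence >= THRESHOLD,
                     clf.predict(X_test).astype(str),
                     "defer_to_human")  # the safe fail option
print(f"Deferred {np.mean(decisions == 'defer_to_human'):.1%} of cases")
```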
Related papers
- Uncertainty measurement for complex event prediction in safety-critical systems [0.36832029288386137]
Uncertainty in complex event processing (CEP) is critical for embedded and safety-critical systems.
This paper exemplifies how we can measure uncertainty for the perception and prediction of events.
We present and discuss our results, which are promising for this line of research.
arXiv Detail & Related papers (2024-11-02T15:51:37Z)
- Large Language Models Must Be Taught to Know What They Don't Know [97.90008709512921]
We show that fine-tuning on a small dataset of correct and incorrect answers can create an uncertainty estimate with good generalization and small computational overhead.
We also investigate the mechanisms that enable reliable uncertainty estimation, finding that many models can be used as general-purpose uncertainty estimators.
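A minimal sketch of the underlying idea, assuming answer embeddings and correctness labels are already available; a linear probe stands in for the paper's actual fine-tuning, and all data here is synthetic.
```python
# Sketch: learn an uncertainty estimate from a small set of answers labeled
# correct/incorrect. A linear probe on placeholder embeddings stands in for
# the paper's fine-tuning procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))    # placeholder for LLM hidden states
is_correct = rng.integers(0, 2, size=500)  # labels from graded Q&A pairs

probe = LogisticRegression().fit(embeddings, is_correct)

# P(correct) for a new answer's embedding serves as its confidence score.
new_embedding = rng.normal(size=(1, 64))
print("estimated P(correct):", probe.predict_proba(new_embedding)[0, 1])
```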
arXiv Detail & Related papers (2024-06-12T16:41:31Z)
- Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
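A much-simplified sketch of this idea: confidence is estimated from agreement across sampled responses. The `sample_response` function is a hypothetical stand-in for a stochastic LLM call; the paper's full framework scores distributions over explanations rather than just final answers.
```python
# Simplified sketch: estimate confidence from agreement across sampled
# responses (a crude proxy for the paper's explanation-stability framework).
from collections import Counter
import random

def sample_response(prompt: str) -> str:
    """Hypothetical stand-in for a stochastic LLM call (temperature > 0)."""
    return random.choice(["A", "A", "A", "B"])  # toy answer distribution

def confidence(prompt: str, k: int = 20) -> tuple[str, float]:
    answers = [sample_response(prompt) for _ in range(k)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / k  # modal answer and its sample frequency

print(confidence("Which option is correct, A or B?"))
```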
arXiv Detail & Related papers (2024-06-05T16:35:30Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
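A minimal sketch of the entropy-based decomposition such an ensemble enables: the entropy of the averaged prediction gives total uncertainty, and the ensemble spread (mutual information) isolates the epistemic part. The clarification-wise distributions below are toy values.
```python
# Sketch of an entropy-based uncertainty decomposition over an ensemble of
# input clarifications: total = aleatoric (expected entropy) + epistemic
# (mutual information between prediction and clarification).
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p + 1e-12))

# Toy predictive distributions, one per generated clarification of the input.
preds = np.array([[0.9, 0.1],
                  [0.6, 0.4],
                  [0.2, 0.8]])

total = entropy(preds.mean(axis=0))               # predictive entropy
aleatoric = np.mean([entropy(p) for p in preds])  # expected entropy
epistemic = total - aleatoric                     # mutual information
print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")
```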
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Dynamic Model Agnostic Reliability Evaluation of Machine-Learning Methods Integrated in Instrumentation & Control Systems [1.8978726202765634]
The trustworthiness of data-driven, neural network-based machine learning algorithms is not adequately assessed.
Recent reports by the National Institute of Standards and Technology identify trustworthiness as a critical barrier to the adoption of ML.
We demonstrate a real-time model-agnostic method to evaluate the relative reliability of ML predictions by incorporating out-of-distribution detection on the training dataset.
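The summary does not specify the detection method, so the sketch below uses a common generic heuristic, k-nearest-neighbor distance to the training set in feature space, to flag inputs whose predictions may be unreliable; it is not necessarily the paper's exact approach.
```python
# Sketch of reliability scoring via out-of-distribution detection on the
# training data: inputs far from the training set (large k-NN distance) get
# flagged as less reliable. Generic heuristic, not the paper's exact method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))

nn = NearestNeighbors(n_neighbors=5).fit(X_train)

def reliability_flag(x, threshold=2.5):
    """Flag a query as unreliable if its mean 5-NN distance is large.
    The threshold is illustrative; in practice it would be calibrated
    on held-out in-distribution data."""
    dist, _ = nn.kneighbors(x.reshape(1, -1))
    return "reliable" if dist.mean() < threshold else "out-of-distribution"

print(reliability_flag(rng.normal(size=8)))         # near the training data
print(reliability_flag(rng.normal(size=8) + 10.0))  # far from training data
```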
arXiv Detail & Related papers (2023-08-08T18:25:42Z)
- A Meta-heuristic Approach to Estimate and Explain Classifier Uncertainty [0.4264192013842096]
This work proposes a set of class-independent meta-heuristics that characterize the complexity of an instance in terms of factors that are mutually relevant to both human and machine learning decision-making.
The proposed measures and framework hold promise for improving model development for more complex instances, as well as providing a new means of model abstention and explanation.
arXiv Detail & Related papers (2023-04-20T13:09:28Z)
- Analyzing a Caching Model [7.378507865227209]
Interpretability remains a major obstacle to adoption in real-world deployments.
By analyzing a state-of-the-art caching model, we provide evidence that the model has learned concepts beyond simple statistics.
arXiv Detail & Related papers (2021-12-13T19:53:07Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
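A minimal sketch of the statistical idea on toy data: if predicted Gaussian uncertainties are realistic, the squared Mahalanobis distances of the errors should follow a chi-squared distribution with d degrees of freedom, which a Kolmogorov-Smirnov test can check.
```python
# Sketch of the uncertainty-realism criterion: under realistic predicted
# Gaussian uncertainty, squared Mahalanobis distances of the errors are
# chi-squared distributed with d degrees of freedom.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n = 2, 2000
cov = np.array([[1.0, 0.3], [0.3, 0.5]])  # toy predicted covariance
errors = rng.multivariate_normal(np.zeros(d), cov, size=n)  # toy residuals

cov_inv = np.linalg.inv(cov)
m2 = np.einsum("ni,ij,nj->n", errors, cov_inv, errors)  # squared Mahalanobis

# A well-calibrated model should not reject the chi-squared(d) hypothesis.
ks = stats.kstest(m2, stats.chi2(df=d).cdf)
print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
```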
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- LiFT: A Scalable Framework for Measuring Fairness in ML Applications [18.54302159142362]
We present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems.
We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.
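The snippet below computes one common metric of the kind such a toolkit reports, the demographic parity difference, in plain NumPy purely for illustration; it does not use LiFT's actual API.
```python
# Generic illustration of one fairness metric: demographic parity difference,
# the gap in positive-prediction rates between groups. Not LiFT's API.
import numpy as np

def demographic_parity_difference(y_pred, group):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```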
arXiv Detail & Related papers (2020-08-14T03:55:31Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
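A minimal sketch of Monte-Carlo dropout for such a human-in-the-loop setup, with a toy network and an illustrative routing threshold: dropout is kept active at inference, several stochastic forward passes are aggregated, and high-variance cases are routed to a human.
```python
# Sketch of Monte-Carlo dropout for a human-in-the-loop setup: keep dropout
# active at inference, run stochastic forward passes, and route high-variance
# predictions to a human. Architecture and threshold are toy choices.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(32, 1))
model.train()  # keeps dropout stochastic during the MC passes below

x = torch.randn(8, 16)  # a batch of (toy) tool-wear feature vectors
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(50)])  # 50 MC passes

mean, std = samples.mean(dim=0), samples.std(dim=0)
needs_human = std.squeeze() > 0.5  # illustrative uncertainty threshold
print(f"{int(needs_human.sum())} of {len(x)} cases routed to a human expert")
```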
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.