A Human-Centric Perspective on Fairness and Transparency in Algorithmic
Decision-Making
- URL: http://arxiv.org/abs/2205.00033v1
- Date: Fri, 29 Apr 2022 18:31:04 GMT
- Title: A Human-Centric Perspective on Fairness and Transparency in Algorithmic
Decision-Making
- Authors: Jakob Schoeffer
- Abstract summary: Automated decision systems (ADS) are increasingly used for consequential decision-making.
Non-transparent systems are prone to yield unfair outcomes because their soundness is difficult to assess and calibrate.
I aim to make the following three main contributions through my doctoral thesis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated decision systems (ADS) are increasingly used for consequential
decision-making. These systems often rely on sophisticated yet opaque machine
learning models, which do not allow for understanding how a given decision was
arrived at. This is not only problematic from a legal perspective, but
non-transparent systems are also prone to yield unfair outcomes because their
soundness is difficult to assess and calibrate in the first place -- which is
particularly worrisome for human decision-subjects. Based on this observation
and building upon existing work, I aim to make the following three main
contributions through my doctoral thesis: (a) understand how (potential)
decision-subjects perceive algorithmic decisions (with varying degrees of
transparency of the underlying ADS), as compared to similar decisions made by
humans; (b) evaluate different tools for transparent decision-making with
respect to their effectiveness in enabling people to appropriately assess the
quality and fairness of ADS; and (c) develop human-understandable technical
artifacts for fair automated decision-making. Over the course of the first half
of my PhD program, I have already addressed substantial pieces of (a) and (c),
whereas (b) will be the major focus of the second half.
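As a concrete illustration of contribution (b), the sketch below shows the kind of group-fairness check a decision-subject or auditor might run against ADS outputs. It is not an artifact from the thesis; the data are synthetic, and the two metrics (demographic parity gap and disparate-impact ratio) are standard choices assumed here for demonstration.

```python
# Minimal sketch: auditing an ADS for group fairness (illustrative only;
# not the thesis's artifact). Computes the demographic parity gap and the
# disparate-impact ratio from predictions and a protected attribute.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-decision rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(rate_a - rate_b)

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-decision rates (the '80% rule' checks >= 0.8)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(min(rate_a, rate_b) / max(rate_a, rate_b))

# Hypothetical ADS outputs: 1 = favorable decision; group = protected attribute.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.45)).astype(int)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):+.3f}")
print(f"disparate impact ratio: {disparate_impact_ratio(y_pred, group):.3f}")
```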
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
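The paper's AE methodology is not reproduced here, but the core idea of an adversarial explanation can be sketched: find the smallest perturbation of a state feature that flips the decision, which then reads as "the action would change if this input shifted by delta". The decision rule below is a hypothetical linear stand-in, not the paper's reinforcement learning policy.

```python
# Hedged sketch of the adversarial-explanation idea (not the paper's AE
# method): search for the smallest perturbation of one state feature that
# flips a decision function.
import numpy as np

def decision(state: np.ndarray) -> int:
    """Stand-in policy: a hypothetical linear decision rule."""
    return int(state @ np.array([0.9, -0.4]) > 0.2)

def minimal_flip(state: np.ndarray, feature: int, step: float = 0.01) -> float:
    """Smallest signed shift of `feature` that changes the decision."""
    base = decision(state)
    for delta in np.arange(step, 2.0, step):
        for signed in (delta, -delta):
            probe = state.copy()
            probe[feature] += signed
            if decision(probe) != base:
                return float(signed)
    return float("nan")  # no flip found within the search range

state = np.array([0.5, 0.3])
print(f"smallest flip of feature 0: {minimal_flip(state, feature=0):+.2f}")
```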
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that is transparent by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
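The paper's actual mediation algorithm is not shown here; the toy sketch below only illustrates the intermediary pattern: pass through the model's recommendation when its confidence clears a threshold, and defer to the (imperfect) human otherwise. The threshold value and class names are assumptions.

```python
# Toy sketch of a decision mediator (illustrative; not the paper's algorithm):
# recommend the model's action only when its confidence clears a threshold,
# otherwise defer to the human's choice.
from dataclasses import dataclass

@dataclass
class Mediator:
    threshold: float = 0.85  # hypothetical confidence cutoff

    def mediate(self, model_action: int, model_confidence: float,
                human_action: int) -> tuple[int, str]:
        if model_confidence >= self.threshold:
            return model_action, "model"
        return human_action, "deferred-to-human"

mediator = Mediator()
print(mediator.mediate(model_action=1, model_confidence=0.92, human_action=0))
print(mediator.mediate(model_action=1, model_confidence=0.55, human_action=0))
```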
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Influence of the algorithm's reliability and transparency in the user's decision-making process [0.0]
We conduct an online empirical study with 61 participants to examine how changes in an algorithm's transparency and reliability affect users' decision-making.
The results indicate that people place at least moderate confidence in the algorithm's decisions even when its reliability is poor.
arXiv Detail & Related papers (2023-07-13T03:13:49Z)
- Algorithmic Decision-Making Safeguarded by Human Knowledge [8.482569811904028]
We study the augmentation of algorithmic decisions with human knowledge.
We show that when the algorithmic decision is optimal given large data, a non-data-driven human guardrail usually provides no benefit.
When the algorithmic decision is suboptimal, however, augmentation with human knowledge can still improve performance even with sufficient data.
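One simple form such a guardrail can take, sketched below under the assumption of an inventory-ordering setting, is to clamp the data-driven decision into a range that domain experts consider plausible; the paper's exact formulation may differ.

```python
# Minimal sketch of a human-knowledge guardrail (illustrative; the paper's
# formulation may differ): clamp a data-driven order quantity into a range
# that domain experts consider plausible.
def guardrailed_decision(algorithmic_qty: float,
                         human_lower: float, human_upper: float) -> float:
    """Return the algorithm's decision, overridden only if it leaves
    the human-specified plausible interval."""
    return min(max(algorithmic_qty, human_lower), human_upper)

# With ample, well-specified data the algorithm stays inside the bounds and
# the guardrail is inactive; under distribution shift it binds.
print(guardrailed_decision(120.0, human_lower=80.0, human_upper=200.0))  # 120.0
print(guardrailed_decision(950.0, human_lower=80.0, human_upper=200.0))  # 200.0
```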
arXiv Detail & Related papers (2022-11-20T17:13:32Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
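The Fairness Map itself cannot be reproduced in a snippet, but the observed-data disparity that the framework sets out to attribute to causal mechanisms can be as simple as a total variation (TV) gap between group outcome distributions, sketched below on synthetic data.

```python
# Illustrative only: one quantification of the observed-data disparity that
# causal fairness analysis starts from -- the total variation (TV) distance
# between the empirical outcome distributions of two groups.
import numpy as np

def tv_disparity(outcomes_a: np.ndarray, outcomes_b: np.ndarray) -> float:
    """TV distance between empirical distributions over a discrete outcome."""
    values = np.union1d(outcomes_a, outcomes_b)
    p = np.array([(outcomes_a == v).mean() for v in values])
    q = np.array([(outcomes_b == v).mean() for v in values])
    return 0.5 * float(np.abs(p - q).sum())

rng = np.random.default_rng(1)
group_a = rng.binomial(1, 0.70, size=500)  # hypothetical approval outcomes
group_b = rng.binomial(1, 0.55, size=500)
print(f"TV disparity: {tv_disparity(group_a, group_b):.3f}")
```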
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
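A minimal way to make such value-laden choices visible, sketched below with synthetic scores and payoffs rather than the paper's framework, is to sweep the decision threshold and report a business-utility proxy alongside a fairness gap at each operating point.

```python
# Sketch (not the paper's framework): expose the fairness-utility trade-off
# by sweeping the acceptance threshold and reporting a utility proxy and the
# demographic parity gap at each setting. All quantities are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)
score = np.clip(rng.normal(0.5 + 0.08 * group, 0.2, size=n), 0, 1)  # hypothetical scores
payoff = rng.normal(1.0, 0.3, size=n)  # hypothetical per-acceptance payoff

for t in (0.4, 0.5, 0.6):
    accept = score >= t
    utility = payoff[accept].sum()
    gap = accept[group == 0].mean() - accept[group == 1].mean()
    print(f"threshold={t:.1f}  utility={utility:7.1f}  parity gap={gap:+.3f}")
```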
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we model the agent's choices as an online learning process and cast policy inference as the inverse of that online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
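The paper solves the inverse problem; as a hedged illustration, the forward model it inverts can be as simple as the Beta-Bernoulli update below, in which an agent's perceived effect (for example, the success rate of accepting an organ offer) is revised after each observed outcome.

```python
# Toy forward model (illustrative; the paper addresses the *inverse* problem):
# an agent's perceived success probability, updated online with a
# Beta-Bernoulli rule after each observed outcome.
alpha, beta = 1.0, 1.0  # uniform prior over the perceived effect
for outcome in [1, 1, 0, 1, 0, 0, 1]:  # hypothetical successes/failures
    alpha += outcome
    beta += 1 - outcome
    print(f"perceived effect after outcome {outcome}: {alpha / (alpha + beta):.3f}")
```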
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- A Study on Fairness and Trust Perceptions in Automated Decision Making [0.0]
We evaluate different approaches to explaining automated decision systems with respect to their effect on people's perceptions of the fairness and trustworthiness of the underlying mechanisms.
A pilot study revealed surprising qualitative insights as well as preliminary significant effects, which will have to be verified, extended and thoroughly discussed in the larger main study.
arXiv Detail & Related papers (2021-03-08T13:57:31Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
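For intuition only: when multiple experts label the same case, consistency can be measured directly as agreement with the per-case majority, as sketched below; the paper's contribution is the harder indirect estimate, via influence functions, when each case is assessed by a single expert.

```python
# Illustrative baseline only: direct consistency with *multiple* experts per
# case, measured as mean agreement with the per-case majority vote. The paper
# estimates consistency *indirectly* when only one expert sees each case.
import numpy as np

def direct_consistency(expert_labels: np.ndarray) -> float:
    """Mean agreement with the per-case majority vote (cases x experts)."""
    majority = (expert_labels.mean(axis=1) >= 0.5).astype(int)
    return float((expert_labels == majority[:, None]).mean())

labels = np.array([[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 0]])  # hypothetical
print(f"direct expert consistency: {direct_consistency(labels):.2f}")
```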
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Automatic Discovery of Interpretable Planning Strategies [9.410583483182657]
We introduce AI-Interpret, a method for transforming idiosyncratic policies into simple and interpretable descriptions.
We show that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions.
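AI-Interpret itself is not reproduced here; a common related tactic, sketched below, is to distill a policy into a shallow decision tree whose splits read like flowchart rules. The state features and the stand-in policy are hypothetical.

```python
# Not AI-Interpret; a hedged sketch of a related idea: distill a policy into
# a shallow decision tree whose splits read like flowchart decision rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
states = rng.random((500, 2))  # hypothetical planning states
actions = (states[:, 0] + 0.5 * states[:, 1] > 0.8).astype(int)  # policy to explain

tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(export_text(tree, feature_names=["time_left", "expected_reward"]))
```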
arXiv Detail & Related papers (2020-05-24T12:24:52Z)