"A cold, technical decision-maker": Can AI provide explainability,
negotiability, and humanity?
- URL: http://arxiv.org/abs/2012.00874v1
- Date: Tue, 1 Dec 2020 22:36:54 GMT
- Title: "A cold, technical decision-maker": Can AI provide explainability,
negotiability, and humanity?
- Authors: Allison Woodruff and Yasmin Asare Anderson and Katherine Jameson
Armstrong and Marina Gkiza and Jay Jennings and Christopher Moessner and
Fernanda Viegas and Martin Wattenberg and Lynette Webb and Fabian Wrede
and Patrick Gage Kelley
- Abstract summary: We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
- Score: 47.36687555570123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic systems are increasingly deployed to make decisions in many areas
of people's lives. The shift from human to algorithmic decision-making has been
accompanied by concern about potentially opaque decisions that are not aligned
with social values, as well as proposed remedies such as explainability. We
present results of a qualitative study of algorithmic decision-making,
comprising five workshops conducted with a total of 60 participants in
Finland, Germany, the United Kingdom, and the United States. We invited
participants to reason about decision-making qualities such as explainability
and accuracy in a variety of domains. Participants viewed AI as a
decision-maker that follows rigid criteria and performs mechanical tasks well,
but is largely incapable of subjective or morally complex judgments. We discuss
participants' consideration of humanity in decision-making, and introduce the
concept of 'negotiability,' the ability to go beyond formal criteria and work
flexibly around the system.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Decision Theoretic Foundations for Experiments Evaluating Human Decisions [18.27590643693167]
We argue that to attribute loss in human performance to forms of bias, an experiment must provide participants with the information that a rational agent would need to identify the utility-maximizing decision.
As a demonstration, we evaluate the extent to which recent evaluations of decision-making from the literature on AI-assisted decisions achieve these criteria.
arXiv Detail & Related papers (2024-01-25T16:21:37Z)
- Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z)
- AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions [6.356355538824237]
We argue that reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making.
Our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
arXiv Detail & Related papers (2023-04-18T08:08:05Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Human-Centric Perspective on Fairness and Transparency in Algorithmic Decision-Making [0.0]
Automated decision systems (ADS) are increasingly used for consequential decision-making.
Non-transparent systems are prone to yield unfair outcomes because their soundness is challenging to assess and calibrate.
I aim to make the following three main contributions through my doctoral thesis.
arXiv Detail & Related papers (2022-04-29T18:31:04Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions [18.155121103400333]
We describe and analyse the perspectives of people and organisations who made submissions in response to Australia's proposed 'AI Ethics Framework'.
Our findings reveal that while the nature of contestability is disputed, it is seen as a way to protect individuals, and it resembles contestability in relation to human decision-making.
arXiv Detail & Related papers (2021-02-23T05:13:18Z)
- Contestable Black Boxes [10.552465253379134]
This paper investigates the type of assurances that are needed in the contesting process when algorithmic black-boxes are involved.
We argue that specialised complementary methodologies to evaluate automated decision-making in the case of a particular decision being contested need to be developed.
arXiv Detail & Related papers (2020-06-09T09:09:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.