Dimensions of Diversity in Human Perceptions of Algorithmic Fairness
- URL: http://arxiv.org/abs/2005.00808v3
- Date: Mon, 5 Sep 2022 13:34:43 GMT
- Title: Dimensions of Diversity in Human Perceptions of Algorithmic Fairness
- Authors: Nina Grgić-Hlača, Gabriel Lima, Adrian Weller, Elissa M. Redmiles
- Abstract summary: We explore how people's perceptions of procedural algorithmic fairness relate to their demographics and personal experiences.
Political views and personal experience with the algorithmic decision context significantly influence perceptions about the fairness of using different features for bail decision-making.
- Score: 37.372078500394984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A growing number of oversight boards and regulatory bodies seek to monitor
and govern algorithms that make decisions about people's lives. Prior work has
explored how people believe algorithmic decisions should be made, but there is
little understanding of how individual factors like sociodemographics or direct
experience with a decision-making scenario may affect their ethical views. We
take a step toward filling this gap by exploring how people's perceptions of
one aspect of procedural algorithmic fairness (the fairness of using particular
features in an algorithmic decision) relate to their (i) demographics (age,
education, gender, race, political views) and (ii) personal experiences with
the algorithmic decision-making scenario. We find that political views and
personal experience with the algorithmic decision context significantly
influence perceptions about the fairness of using different features for bail
decision-making. Drawing on our results, we discuss the implications for
stakeholder engagement and algorithmic oversight, including the need to consider
multiple dimensions of diversity in composing oversight and regulatory bodies.
Related papers
- Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z) - Explaining by Imitating: Understanding Decisions by Interpretable Policy
Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Influence of the algorithm's reliability and transparency in the user's
decision-making process [0.0]
We conduct an online empirical study with 61 participants to examine how changes in an algorithm's transparency and reliability affect users' decision-making process.
The results indicate that people show at least moderate confidence in the algorithm's decisions even when its reliability is low.
arXiv Detail & Related papers (2023-07-13T03:13:49Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Crowdsourcing Impacts: Exploring the Utility of Crowds for Anticipating
Societal Impacts of Algorithmic Decision Making [7.068913546756094]
We employ crowdsourcing to uncover different types of impact areas based on a set of governmental algorithmic decision making tools.
Our findings suggest that this method is effective at leveraging the cognitive diversity of the crowd to uncover a range of issues.
arXiv Detail & Related papers (2022-07-19T19:46:53Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast policy inference as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Legal perspective on possible fairness measures - A legal discussion
using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z) - Conceptualising Contestability: Perspectives on Contesting Algorithmic
Decisions [18.155121103400333]
We describe and analyse the perspectives of people and organisations who made submissions in response to Australia's proposed 'AI Ethics Framework'.
Our findings reveal that while the nature of contestability is disputed, it is seen as a way to protect individuals, and it resembles contestability in relation to human decision-making.
arXiv Detail & Related papers (2021-02-23T05:13:18Z) - "A cold, technical decision-maker": Can AI provide explainability,
negotiability, and humanity? [47.36687555570123]
We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
arXiv Detail & Related papers (2020-12-01T22:36:54Z)